title | content | commands | url |
---|---|---|---|
Chapter 7. Preparing network functions virtualization (NFV) | Chapter 7. Preparing network functions virtualization (NFV) If you use network functions virtualization (NFV), you must complete some preparation for the overcloud upgrade. 7.1. Network functions virtualization (NFV) environment files In a typical NFV-based environment, you can enable services such as the following: Single-root input/output virtualization (SR-IOV) Data Plane Development Kit (DPDK) These services do not require any specific reconfiguration to accommodate the upgrade to Red Hat OpenStack Platform 17.1. However, ensure that the environment files that enable your NFV functionality meet the following requirements: The default environment files to enable NFV features are located in the environments/services directory of the Red Hat OpenStack Platform 17.1 openstack-tripleo-heat-templates collection. If you include the default NFV environment files from openstack-tripleo-heat-templates with your Red Hat OpenStack Platform 16.2 deployment, verify the correct environment file location for the respective feature in Red Hat OpenStack Platform 17.1: Open vSwitch (OVS) networking and SR-IOV: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml Open vSwitch (OVS) networking and DPDK: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml To maintain OVS compatibility during the upgrade from Red Hat OpenStack Platform 16.2 to Red Hat OpenStack Platform 17.1, you must include the /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml environment file. When running deployment and upgrade commands that involve environment files, you must include any NFV-related environment files after the neutron-ovs.yaml file. For example, when running openstack overcloud upgrade prepare with OVS and NFV environment files, include the files in the following order: The OVS environment file The SR-IOV environment file The DPDK environment file Note There is a migration constraint for NFV workloads: you cannot live migrate instances from OVS-DPDK Compute nodes during an upgrade. Instead, you can cold migrate instances from OVS-DPDK Compute nodes during an upgrade; see the sketch after this entry. | [
"openstack overcloud upgrade prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/framework_for_upgrades_16.2_to_17.1/preparing-network-functions-virtualization-nfv |
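The note in the NFV entry above points to cold migration as the alternative for OVS-DPDK Compute nodes. The following is only a generic sketch of that workflow using the standard Compute CLI; the instance name is a placeholder and the confirmation step can vary between python-openstackclient versions, so treat it as an illustration rather than a documented step of this upgrade procedure:

$ openstack server migrate --wait my-dpdk-instance
$ openstack server show my-dpdk-instance -f value -c status
# If the instance is left in VERIFY_RESIZE, confirm the cold migration,
# for example: openstack server resize confirm my-dpdk-instance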
7.240. yum | 7.240. yum 7.240.1. RHBA-2015:1384 - yum bug fix and enhancement update An updated yum package that fixes several bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. Yum is a utility that can check for and automatically download and install updated RPM packages. Dependencies are obtained and downloaded automatically, prompting the user for permission as necessary. Bug Fixes BZ# 893994 Yum has been updated to detect severity conflicts in the updateinfo.xml file. BZ# 905100 Previously, the "yum grouplist" command terminated unexpectedly with the "ValueError: unknown locale" message when a user-defined locale was specified on the system. With this update, "yum grouplist" has been modified to correctly process user-defined locale files, thus fixing this bug. BZ# 1016148 Under certain circumstances, when attempting to install locally stored packages, yum terminated with the following message: ValueError: your.rpm has no attribute basepath This bug has been fixed, and yum now installs local packages as expected. BZ# 1051931 Yum has been modified to properly notify the user if there is not enough space for the installed package in the installation destination. Now, the space required for the package is displayed correctly in MB or KB. BZ# 1076076 Prior to this update, yum did not show the echo output from the %postun RPM scriptlet during package removal. This bug has been fixed, and the output is now displayed correctly. BZ# 1144503 Previously, the yum-plugin-downloadonly plug-in returned exit code 1 even when it executed successfully. The functionality of the plug-in has been incorporated into yum as the "--downloadonly" option. The "yum --downloadonly" command now returns the correct exit code on success (see the usage sketch after this entry). BZ# 1171543 The yum-plugin-security plug-in did not show any advisory if the architecture of the updated package changed. This bug has been fixed, and yum-plugin-security now works as expected. BZ# 1200159 Prior to this update, when epoch was defined in the rpm specification file of the kernel package, yum removed the running kernel package after updating. This bug has been fixed, and the running kernel is no longer removed in the described case. Enhancements BZ# 1154076 The "--exclude" option has been enhanced to exclude already installed packages. BZ# 1136212 The "yum check" command has been enhanced to execute faster. BZ# 1174612 The "--assumeno" option has been backported to the yum package. Users of yum are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-yum |
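As a brief illustration of the download-only behavior that was folded into yum (the package name and download directory are arbitrary examples, and --downloaddir is assumed to be available alongside --downloadonly; neither value comes from this advisory):

~]# yum install --downloadonly --downloaddir=/tmp/rpms httpd
~]# echo $?
0

The first command downloads the package and its dependencies without installing them; a zero exit status now indicates success, which previously required the separate yum-plugin-downloadonly plug-in.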
Node APIs | Node APIs OpenShift Container Platform 4.14 Reference guide for node APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/node_apis/index |
Chapter 25. Automatic Bug Reporting Tool (ABRT) | Chapter 25. Automatic Bug Reporting Tool (ABRT) 25.1. Introduction to ABRT The Automatic Bug Reporting Tool , commonly abbreviated as ABRT , is a set of tools that is designed to help users detect and report application crashes. Its main purpose is to ease the process of reporting issues and finding solutions. In this context, the solution can be a Bugzilla ticket, a knowledge-base article, or a suggestion to update a package to a version containing a fix. ABRT consists of the abrtd daemon and a number of system services and utilities for processing, analyzing, and reporting detected problems. The daemon runs silently in the background most of the time and springs into action when an application crashes or a kernel oops is detected. The daemon then collects the relevant problem data, such as a core file if there is one, the crashing application's command line parameters, and other data of forensic utility. ABRT currently supports the detection of crashes in applications written in the C, C++, Java, Python, and Ruby programming languages, as well as X.Org crashes, kernel oopses, and kernel panics. See Section 25.4, "Detecting Software Problems" for more detailed information on the types of failures and crashes supported, and the way the various types of crashes are detected. The identified problems can be reported to a remote issue tracker, and the reporting can be configured to happen automatically whenever an issue is detected. Problem data can also be stored locally or on a dedicated system and reviewed, reported, and deleted manually by the user. The reporting tools can send problem data to a Bugzilla database or the Red Hat Technical Support (RHTSupport) website. The tools can also upload it using FTP or SCP , send it as an email, or write it to a file. The ABRT component that handles existing problem data (as opposed to, for example, the creation of new problem data) is a part of a separate project, libreport . The libreport library provides a generic mechanism for analyzing and reporting problems, and it is used by applications other than ABRT as well. However, ABRT and libreport operation and configuration are closely integrated. They are, therefore, discussed as one in this document. Note Note that an ABRT report is generated only when a core dump is generated. A core dump is generated only for some signals. For example, SIGKILL (-9) does not generate a core dump, so ABRT cannot catch this failure. For more information about signals and core dump generation, see man 7 signal. 25.2. Installing ABRT and Starting its Services In order to use ABRT , ensure that the abrt-desktop or the abrt-cli package is installed on your system. The abrt-desktop package provides a graphical user interface for ABRT , and the abrt-cli package contains a tool for using ABRT on the command line. You can also install both. The general workflow with both the ABRT GUI and the command line tool is procedurally similar and follows the same pattern. Warning Please note that installing the ABRT packages overwrites the /proc/sys/kernel/core_pattern file, which can contain a template used to name core-dump files. The content of this file will be overwritten to: See Section 9.2.4, "Installing Packages" for general information on how to install packages using the Yum package manager. 25.2.1. Installing the ABRT GUI The ABRT graphical user interface provides an easy-to-use front end for working in a desktop environment. 
You can install the required package by running the following command as the root user: Upon installation, the ABRT notification applet is configured to start automatically when your graphical desktop session starts. You can verify that the ABRT applet is running by issuing the following command in a terminal: If the applet is not running, you can start it manually in your current desktop session by running the abrt-applet program: 25.2.2. Installing ABRT for the Command Line The command line interface is useful on headless machines, remote systems connected over a network, or in scripts. You can install the required package by running the following command as the root user: 25.2.3. Installing Supplementary ABRT Tools To receive email notifications about crashes detected by ABRT , you need to have the libreport-plugin-mailx package installed. You can install it by executing the following command as root : By default, it sends notifications to the root user at the local machine. The email destination can be configured in the /etc/libreport/plugins/mailx.conf file. To have notifications displayed in your console at login time, install the abrt-console-notification package as well. ABRT can detect, analyze, and report various types of software failures. By default, ABRT is installed with support for the most common types of failures, such as crashes of C and C++ applications. Support for other types of failures is provided by independent packages. For example, to install support for detecting exceptions in applications written using the Java language, run the following command as root : See Section 25.4, "Detecting Software Problems" for a list of languages and software projects which ABRT supports. The section also includes a list of all corresponding packages that enable the detection of the various types of failures. 25.2.4. Starting the ABRT Services The abrtd daemon requires the abrt user to exist for file system operations in the /var/spool/abrt directory. When the abrt package is installed, it automatically creates the abrt user whose UID and GID are 173, if such a user does not already exist. Otherwise, the abrt user can be created manually. In that case, any UID and GID can be chosen, because abrtd does not require a specific UID and GID. The abrtd daemon is configured to start at boot time. You can use the following command to verify its current status: If systemctl returns inactive or unknown , the daemon is not running. You can start it for the current session by entering the following command as root : You can use the same commands to start or check the status of related error-detection services. For example, make sure the abrt-ccpp service is running if you want ABRT to detect C or C++ crashes. See Section 25.4, "Detecting Software Problems" for a list of all available ABRT detection services and their respective packages. With the exception of the abrt-vmcore and abrt-pstoreoops services, which are only started when a kernel panic or kernel oops occurs, all ABRT services are automatically enabled and started at boot time when their respective packages are installed. You can disable or enable any ABRT service by using the systemctl utility as described in Chapter 10, Managing Services with systemd . 25.2.5. Testing ABRT Crash Detection To test that ABRT works properly, use the kill command to send the SEGV signal to terminate a process. 
For example, start a sleep process and terminate it with the kill command in the following way: ABRT detects a crash shortly after executing the kill command, and, provided a graphical session is running, the user is notified of the detected problem by the GUI notification applet. On the command line, you can check that the crash was detected by running the abrt-cli list command or by examining the crash dump created in the /var/spool/abrt/ directory. See Section 25.5, "Handling Detected Problems" for more information on how to work with detected crashes. 25.3. Configuring ABRT A problem life cycle is driven by events in ABRT . For example: Event #1 - a problem-data directory is created. Event #2 - problem data is analyzed. Event #3 - the problem is reported to Bugzilla. Whenever a problem is detected, ABRT compares it with all existing problem data and determines whether that same problem has already been recorded. If it has, the existing problem data is updated, and the most recent (duplicate) problem is not recorded again. If the problem is not recognized by ABRT , a problem-data directory is created. A problem-data directory typically consists of files such as: analyzer , architecture , coredump , cmdline , executable , kernel , os_release , reason , time , and uid . Other files, such as backtrace , can be created during the analysis of the problem, depending on which analyzer method is used and its configuration settings. Each of these files holds specific information about the system and the problem itself. For example, the kernel file records the version of a crashed kernel. After the problem-data directory is created and problem data gathered, you can process the problem using either the ABRT GUI, or the abrt-cli utility for the command line. See Section 25.5, "Handling Detected Problems" for more information about the ABRT tools provided for working with recorded problems. 25.3.1. Configuring Events ABRT events use plugins to carry out the actual reporting operations. Plugins are compact utilities that the events call to process the content of problem-data directories. Using plugins, ABRT is capable of reporting problems to various destinations, and almost every reporting destination requires some configuration. For instance, Bugzilla requires a user name, password, and a URL pointing to an instance of the Bugzilla service. Some configuration details can have default values (such as a Bugzilla URL), but others cannot have sensible defaults (for example, a user name). ABRT looks for these settings in configuration files, such as report_Bugzilla.conf , in the /etc/libreport/events/ or $HOME/.cache/abrt/events/ directories for system-wide or user-specific settings respectively. The configuration files contain pairs of directives and values. These files are the bare minimum necessary for running events and processing the problem-data directories. The gnome-abrt and abrt-cli tools read the configuration data from these files and pass it to the events they run. Additional information about events (such as their description, names, types of parameters that can be passed to them as environment variables, and other properties) is stored in event_name.xml files in the /usr/share/libreport/events/ directory. These files are used by both gnome-abrt and abrt-cli to make the user interface more friendly. Do not edit these files unless you want to modify the standard installation. 
If you intend to do that, copy the file to be modified to the /etc/libreport/events/ directory and modify the new file. These files can contain the following information: a user-friendly event name and description (Bugzilla, Report to Bugzilla bug tracker), a list of items in a problem-data directory that are required for the event to succeed, a default and mandatory selection of items to send or not send, whether the GUI should prompt for data review, what configuration options exist, their types (string, Boolean, and so on), default value, prompt string, and so on; this lets the GUI build appropriate configuration dialogs. For example, the report_Logger event accepts an output filename as a parameter. Using the respective event_name.xml file, the ABRT GUI determines which parameters can be specified for a selected event and allows the user to set the values for these parameters. The values are saved by the ABRT GUI and reused on subsequent invocations of these events. Note that the ABRT GUI saves configuration options using the GNOME Keyring tool and by passing them to events, it overrides data from text configuration files. To open the graphical Configuration window, choose Automatic Bug Reporting Tool Preferences from within a running instance of the gnome-abrt application. This window shows a list of events that can be selected during the reporting process when using the GUI . When you select one of the configurable events, you can click the Configure button and modify the settings for that event. Figure 25.1. Configuring ABRT Events Important All files in the /etc/libreport/ directory hierarchy are world-readable and are meant to be used as global settings. Thus, it is not advisable to store user names, passwords, or any other sensitive data in them. The per-user settings (set in the GUI application and readable by the owner of $HOME only) are safely stored in GNOME Keyring , or they can be stored in a text configuration file in $HOME/.abrt/ for use with abrt-cli . The following table shows a selection of the default analyzing, collecting, and reporting events provided by the standard installation of ABRT . The table lists each event's name, identifier, configuration file from the /etc/libreport/events.d/ directory, and a brief description. Note that while the configuration files use the event identifiers, the ABRT GUI refers to the individual events using their names. Note also that not all of the events can be set up using the GUI . For information on how to define a custom event, see Section 25.3.2, "Creating Custom Events" . Table 25.1. Standard ABRT Events Name Identifier and Configuration File Description uReport report_uReport Uploads a μReport to the FAF server. Mailx report_Mailx mailx_event.conf Sends the problem report via the Mailx utility to a specified email address. Bugzilla report_Bugzilla bugzilla_event.conf Reports the problem to the specified installation of the Bugzilla bug tracker. Red Hat Customer Support report_RHTSupport rhtsupport_event.conf Reports the problem to the Red Hat Technical Support system. Analyze C or C++ Crash analyze_CCpp ccpp_event.conf Sends the core dump to a remote retrace server for analysis or performs a local analysis if the remote one fails. Report uploader report_Uploader uploader_event.conf Uploads a tarball ( .tar.gz ) archive with problem data to the chosen destination using the FTP or the SCP protocol. 
Analyze VM core analyze_VMcore vmcore_event.conf Runs the GDB (the GNU debugger) on the problem data of a kernel oops and generates a backtrace of the kernel. Local GNU Debugger analyze_LocalGDB ccpp_event.conf Runs GDB (the GNU debugger) on the problem data of an application and generates a backtrace of the program. Collect .xsession-errors analyze_xsession_errors ccpp_event.conf Saves relevant lines from the ~/.xsession-errors file to the problem report. Logger report_Logger print_event.conf Creates a problem report and saves it to a specified local file. Kerneloops.org report_Kerneloops koops_event.conf Sends a kernel problem to the oops tracker at kerneloops.org. 25.3.2. Creating Custom Events Each event is defined by one rule structure in a respective configuration file. The configuration files are typically stored in the /etc/libreport/events.d/ directory. These configuration files are loaded by the main configuration file, /etc/libreport/report_event.conf . There is no need to edit the default configuration files because abrt will run the scripts contained in /etc/libreport/events.d/ . This file accepts shell metacharacters (for example, *, $, ?) and interprets relative paths relatively to its location. Each rule starts with a line with a non-space leading character, and all subsequent lines starting with the space character or the tab character are considered a part of this rule. Each rule consists of two parts, a condition part and a program part. The condition part contains conditions in one of the following forms: VAR = VAL VAR != VAL VAL ~= REGEX where: VAR is either the EVENT key word or a name of a problem-data directory element (such as executable , package , hostname , and so on), VAL is either a name of an event or a problem-data element, and REGEX is a regular expression. The program part consists of program names and shell-interpretable code. If all conditions in the condition part are valid, the program part is run in the shell. The following is an event example: This event would overwrite the contents of the /tmp/dt file with the current date and time and print the host name of the machine and its kernel version on the standard output. Here is an example of a more complex event, which is actually one of the predefined events. It saves relevant lines from the ~/.xsession-errors file to the problem report of any problem for which the abrt-ccpp service has been used, provided the crashed application had any X11 libraries loaded at the time of the crash: The set of possible events is not definitive. System administrators can add events according to their need in the /etc/libreport/events.d/ directory; a minimal illustrative sketch follows this chapter's command listing. Currently, the following event names are provided with the standard ABRT and libreport installations: post-create This event is run by abrtd to process newly created problem-data directories. When the post-create event is run, abrtd checks whether the new problem data matches any of the already existing problem directories. If such a problem directory exists, it is updated and the new problem data is discarded. Note that if the script in any definition of the post-create event exits with a non-zero value, abrtd will terminate the process and will drop the problem data. notify , notify-dup The notify event is run following the completion of post-create . When the event is run, the user can be sure that the problem deserves their attention. The notify-dup is similar, except it is used for duplicate occurrences of the same problem. 
analyze_ name_suffix where name_suffix is the replaceable part of the event name. This event is used to process collected data. For example, the analyze_LocalGDB event uses the GNU Debugger ( GDB ) utility to process the core dump of an application and produce a backtrace of the crash. collect_ name_suffix ...where name_suffix is the adjustable part of the event name. This event is used to collect additional information on problems. report_ name_suffix ...where name_suffix is the adjustable part of the event name. This event is used to report a problem. 25.3.3. Setting Up Automatic Reporting ABRT can be configured to send initial anonymous reports, or μReports , of any detected issues or crashes automatically without any user interaction. When automatic reporting is turned on, the so-called μReport, which is normally sent at the beginning of the crash-reporting process, is sent immediately after a crash is detected. This prevents duplicate support cases based on identical crashes. To enable the autoreporting feature, issue the following command as root : The above command sets the AutoreportingEnabled directive in the /etc/abrt/abrt.conf configuration file to yes . This system-wide setting applies to all users of the system. Note that by enabling this option, automatic reporting will also be enabled in the graphical desktop environment. To only enable autoreporting in the ABRT GUI, switch the Automatically send uReport option to YES in the Problem Reporting Configuration window. To open this window, choose Automatic Bug Reporting Tool ABRT Configuration from within a running instance of the gnome-abrt application. To launch the application, go to Applications Sundry Automatic Bug Reporting Tool . Figure 25.2. Configuring ABRT Problem Reporting Upon detection of a crash, by default, ABRT submits a μReport with basic information about the problem to Red Hat's ABRT server. The server determines whether the problem is known and either provides a short description of the problem along with a URL of the reported case if known, or invites the user to report it if not known. Note A μReport (microreport) is a JSON object representing a problem, such as a binary crash or a kernel oops. These reports are designed to be brief, machine readable, and completely anonymous, which is why they can be used for automated reporting. The μReports make it possible to keep track of bug occurrences, but they usually do not provide enough information for engineers to fix the bug. A full bug report is needed for a support case to be opened. To change the default behavior of the autoreporting facility from sending a μReport, modify the value of the AutoreportingEvent directive in the /etc/abrt/abrt.conf configuration file to point to a different ABRT event. See Table 25.1, "Standard ABRT Events" for an overview of the standard events. 25.4. Detecting Software Problems ABRT is capable of detecting, analyzing, and processing crashes in applications written in a variety of different programming languages. Many of the packages that contain the support for detecting the various types of crashes are installed automatically when either one of the main ABRT packages ( abrt-desktop , abrt-cli ) is installed. See Section 25.2, "Installing ABRT and Starting its Services" for instructions on how to install ABRT . See the table below for a list of the supported types of crashes and the respective packages. Table 25.2. 
Supported Programming Languages and Software Projects Language/Project Package C or C++ abrt-addon-ccpp Python abrt-addon-python Ruby rubygem-abrt Java abrt-java-connector X.Org abrt-addon-xorg Linux (kernel oops) abrt-addon-kerneloops Linux (kernel panic) abrt-addon-vmcore Linux (persistent storage) abrt-addon-pstoreoops 25.4.1. Detecting C and C++ Crashes The abrt-ccpp service installs its own core-dump handler, which, when started, overrides the default value of the kernel's core_pattern variable, so that C and C++ crashes are handled by abrtd . If you stop the abrt-ccpp service, the previously specified value of core_pattern is reinstated. By default, the /proc/sys/kernel/core_pattern file contains the string core , which means that the kernel produces files with the core. prefix in the current directory of the crashed process. The abrt-ccpp service overwrites the core_pattern file to contain the following command: This command instructs the kernel to pipe the core dump to the abrt-hook-ccpp program, which stores it in ABRT 's dump location and notifies the abrtd daemon of the new crash. It also stores the following files from the /proc/ PID / directory (where PID is the ID of the crashed process) for debugging purposes: maps , limits , cgroup , status . See proc (5) for a description of the format and the meaning of these files. 25.4.2. Detecting Python Exceptions The abrt-addon-python package installs a custom exception handler for Python applications. The Python interpreter then automatically imports the abrt.pth file installed in /usr/lib64/python2.7/site-packages/ , which in turn imports abrt_exception_handler.py . This overrides Python's default sys.excepthook with a custom handler, which forwards unhandled exceptions to abrtd via its Socket API. To disable the automatic import of site-specific modules, and thus prevent the ABRT custom exception handler from being used when running a Python application, pass the -S option to the Python interpreter: In the above command, replace file.py with the name of the Python script you want to execute without the use of site-specific modules. 25.4.3. Detecting Ruby Exceptions The rubygem-abrt package registers a custom handler using the at_exit feature, which is executed when a program ends. This allows for checking for possible unhandled exceptions. Every time an unhandled exception is captured, the ABRT handler prepares a bug report, which can be submitted to Red Hat Bugzilla using standard ABRT tools. 25.4.4. Detecting Java Exceptions The ABRT Java Connector is a JVM agent that reports uncaught Java exceptions to abrtd . The agent registers several JVMTI event callbacks and has to be loaded into the JVM using the -agentlib command line parameter. Note that the processing of the registered callbacks negatively impacts the performance of the application. Use the following command to have ABRT catch exceptions from a Java class: In the above command, replace $MyClass with the name of the Java class you want to test. By passing the abrt=on option to the connector, you ensure that the exceptions are handled by abrtd . In case you want to have the connector output the exceptions to standard output, omit this option. 25.4.5. Detecting X.Org Crashes The abrt-xorg service collects and processes information about crashes of the X.Org server from the /var/log/Xorg.0.log file. Note that no report is generated if a blacklisted X.org module is loaded. 
Instead, a not-reportable file is created in the problem-data directory with an appropriate explanation. You can find the list of offending modules in the /etc/abrt/plugins/xorg.conf file. Only proprietary graphics-driver modules are blacklisted by default. 25.4.6. Detecting Kernel Oopses and Panics By checking the output of kernel logs, ABRT is able to catch and process the so-called kernel oopses - non-fatal deviations from the correct behavior of the Linux kernel. This functionality is provided by the abrt-oops service. ABRT can also detect and process kernel panics - fatal, non-recoverable errors that require a reboot, using the abrt-vmcore service. The service only starts when a vmcore file (a kernel-core dump) appears in the /var/crash/ directory. When a core-dump file is found, abrt-vmcore creates a new problem-data directory in the /var/spool/abrt/ directory and copies the core-dump file to the newly created problem-data directory. After the /var/crash/ directory is searched, the service is stopped. For ABRT to be able to detect a kernel panic, the kdump service must be enabled on the system. The amount of memory that is reserved for the kdump kernel has to be set correctly. You can set it using the system-config-kdump graphical tool or by specifying the crashkernel parameter in the list of kernel options in the GRUB 2 menu. For details on how to enable and configure kdump , see the Red Hat Enterprise Linux 7 Kernel Crash Dump Guide . For information on making changes to the GRUB 2 menu see Chapter 26, Working with GRUB 2 . Using the abrt-pstoreoops service, ABRT is capable of collecting and processing information about kernel panics, which, on systems that support pstore , is stored in the automatically-mounted /sys/fs/pstore/ directory. The platform-dependent pstore interface (persistent storage) provides a mechanism for storing data across system reboots, thus allowing for preserving kernel panic information. The service starts automatically when kernel crash-dump files appear in the /sys/fs/pstore/ directory. 25.5. Handling Detected Problems Problem data saved by abrtd can be viewed, reported, and deleted using either the command line tool, abrt-cli , or the graphical tool, gnome-abrt . Note Note that ABRT identifies duplicate problems by comparing new problems with all locally saved problems. For a repeating crash, ABRT requires you to act upon it only once. However, if you delete the crash dump of that problem, the next time this specific problem occurs, ABRT will treat it as a new crash: ABRT will alert you about it, prompt you to fill in a description, and report it. To avoid having ABRT notify you about a recurring problem, do not delete its problem data. 25.5.1. Using the Command Line Tool In the command line environment, the user is notified of new crashes on login, provided they have the abrt-console-notification package installed. The console notification looks like the following: To view detected problems, enter the abrt-cli list command: Each crash listed in the output of the abrt-cli list command has a unique identifier and a directory that can be used for further manipulation using abrt-cli . To view information about just one particular problem, use the abrt-cli info command: To increase the amount of information displayed when using both the list and info sub-commands, pass them the -d ( --detailed ) option, which shows all stored information about the problems listed, including respective backtrace files if they have already been generated. 
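For example, assuming a problem directory such as the one shown in the abrt-cli list output in this chapter's command listing (the path below is only an illustration), the detailed views look like this:

~]$ abrt-cli list -d
~]$ abrt-cli info -d /var/tmp/abrt/ccpp-2014-04-21-09:47:51-3430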
To analyze and report a certain problem, use the abrt-cli report command: Upon invocation of the above command, you will be asked to provide your credentials for opening a support case with Red Hat Customer Support. Next, abrt-cli opens a text editor with the content of the report. You can see what is being reported, and you can fill in instructions on how to reproduce the crash and other comments. You should also check the backtrace because the backtrace might be sent to a public server and viewed by anyone, depending on the problem-reporter event settings. Note You can choose which text editor is used to check the reports. abrt-cli uses the editor defined in the ABRT_EDITOR environment variable. If the variable is not defined, it checks the VISUAL and EDITOR variables. If none of these variables is set, the vi editor is used. You can set the preferred editor in your .bashrc configuration file. For example, if you prefer GNU Emacs , add the following line to the file: When you are done with the report, save your changes and close the editor. If you have reported your problem to the Red Hat Customer Support database, a problem case is filed in the database. From now on, you will be informed about the problem-resolution progress via the email address you provided during the process of reporting. You can also monitor the problem case using the URL that is provided to you when the problem case is created or via emails received from Red Hat Support. If you are certain that you do not want to report a particular problem, you can delete it. To delete a problem, so that ABRT does not keep information about it, use the command: To display help about a particular abrt-cli command, use the --help option: 25.5.2. Using the GUI The ABRT daemon broadcasts a D-Bus message whenever a problem report is created. If the ABRT applet is running in a graphical desktop environment, it catches this message and displays a notification dialog on the desktop. You can open the ABRT GUI using this dialog by clicking on the Report button. You can also open the ABRT GUI by selecting the Applications Sundry Automatic Bug Reporting Tool menu item. Alternatively, you can run the ABRT GUI from the command line as follows: The ABRT GUI window displays a list of detected problems. Each problem entry consists of the name of the failing application, the reason why the application crashed, and the date of the last occurrence of the problem. Figure 25.3. ABRT GUI To access a detailed problem description, double-click on a problem-report line or click on the Report button while the respective problem line is selected. You can then follow the instructions to proceed with the process of describing the problem, determining how it should be analyzed, and where it should be reported. To discard a problem, click on the Delete button. 25.6. Additional Resources For more information about ABRT and related topics, see the resources listed below. Installed Documentation abrtd (8) - The manual page for the abrtd daemon provides information about options that can be used with the daemon. abrt_event.conf (5) - The manual page for the abrt_event.conf configuration file describes the format of its directives and rules and provides reference information about event meta-data configuration in XML files. Online Documentation Red Hat Enterprise Linux 7 Networking Guide - The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces and network services on this system. 
Red Hat Enterprise Linux 7 Kernel Crash Dump Guide - The Kernel Crash Dump Guide for Red Hat Enterprise Linux 7 documents how to configure, test, and use the kdump crash recovery service and provides a brief overview of how to analyze the resulting core dump using the crash debugging utility. See Also Chapter 23, Viewing and Managing Log Files describes the configuration of the rsyslog daemon and the systemd journal and explains how to locate, view, and monitor system logs. Chapter 9, Yum describes how to use the Yum package manager to search, install, update, and uninstall packages on the command line. Chapter 10, Managing Services with systemd provides an introduction to systemd and documents how to use the systemctl command to manage system services, configure systemd targets, and execute power management commands. | [
"|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e",
"~]# yum install abrt-desktop",
"~]USD ps -el | grep abrt-applet 0 S 500 2036 1824 0 80 0 - 61604 poll_s ? 00:00:00 abrt-applet",
"~]USD abrt-applet & [1] 2261",
"~]# yum install abrt-cli",
"~]# yum install libreport-plugin-mailx",
"~]# yum install abrt-java-connector",
"~]USD systemctl is-active abrtd.service active",
"~]# systemctl start abrtd.service",
"~]USD sleep 100 & [1] 2823 ~]USD kill -s SIGSEGV 2823",
"EVENT=post-create date > /tmp/dt echo USDHOSTNAME uname -r",
"EVENT=analyze_xsession_errors analyzer=CCpp dso_list~=. /libX11. test -f ~/.xsession-errors || { echo \"No ~/.xsession-errors\"; exit 1; } test -r ~/.xsession-errors || { echo \"Can't read ~/.xsession-errors\"; exit 1; } executable= cat executable && base_executable=USD{executable##*/} && grep -F -e \"USDbase_executable\" ~/.xsession-errors | tail -999 >xsession_errors && echo \"Element 'xsession_errors' saved\"",
"{blank}",
"{blank}",
"~]# abrt-auto-reporting enabled",
"|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e",
"~]USD python -S file.py",
"~]USD java -agentlib:abrt-java-connector=abrt=on USDMyClass -platform.jvmtiSupported true",
"ABRT has detected 1 problem(s). For more info run: abrt-cli list --since 1398783164",
"~]USD abrt-cli list id 6734c6f1a1ed169500a7bfc8bd62aabaf039f9aa Directory: /var/tmp/abrt/ccpp-2014-04-21-09:47:51-3430 count: 1 executable: /usr/bin/sleep package: coreutils-8.22-11.el7 time: Mon 21 Apr 2014 09:47:51 AM EDT uid: 1000 Run 'abrt-cli report /var/tmp/abrt/ccpp-2014-04-21-09:47:51-3430' for creating a case in Red Hat Customer Portal",
"abrt-cli info -d directory_or_id",
"abrt-cli report directory_or_id",
"export VISUAL = emacs",
"abrt-cli rm directory_or_id",
"abrt-cli command --help",
"~]USD gnome-abrt &"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-abrt |
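As a supplement to the custom-event rules described in Section 25.3.2 above, the following is a minimal illustrative sketch of an additional notify handler; the log file path and the package condition are hypothetical choices, not part of the standard installation. Like the predefined events, it would be placed in a file under /etc/libreport/events.d/ and runs with the problem-data directory as its working directory, so element files such as package and reason can be read directly:

EVENT=notify package!=kernel
        echo "$(date): crash in $(cat package): $(cat reason)" >> /var/log/abrt-notify.log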
14.5. Manual ID Range Extension and Assigning a New ID Range | 14.5. Manual ID Range Extension and Assigning a New ID Range In certain situations, it is necessary to manually adjust an ID range: An assigned ID range has been depleted A replica has exhausted the ID range that was assigned to it, and requesting additional IDs failed because no more free IDs are available in the ID ranges of other replicas. You want to extend the ID range assigned to the replica. This might involve splitting an existing ID range or extending it past the initial configured ID range for the server. Alternatively, you might want to assign a new ID range. Note If you assign a new ID range, the UIDs of the already existing entries on the server or replica stay the same. This does not pose a problem because even if you change the current ID range, IdM keeps a record of what ranges were assigned in the past. A replica stopped functioning An ID range is not automatically retrieved when a replica dies and needs to be deleted, which means the ID range previously assigned to the replica becomes unavailable. You want to recover the ID range and make it available for other replicas. If you want to recover the ID range belonging to a server that stopped functioning and assign it to another server, first find out what the ID range values are using the ipa-replica-manage dnarange-show command described in Section 14.3, "Displaying Currently Assigned ID Ranges" , and then manually assign that ID range to the server. Also, to avoid duplicate UIDs or GIDs, make sure that no ID value from the recovered range was previously assigned to a user or group; you can do this by examining the UIDs and GIDs of existing users and groups. To manually define the ID ranges, use the following two commands: ipa-replica-manage dnarange-set allows you to define the current ID range for a specified server: ipa-replica-manage dnanextrange-set allows you to define the next ID range for a specified server: For more information about these commands, see the ipa-replica-manage (1) man page. Important Be careful not to create overlapping ID ranges. If any of the ID ranges you assign to servers or replicas overlap, it could result in two different servers assigning the same ID value to different entries. Do not set ID ranges that include UID values of 1000 and lower; these values are reserved for system use. Also, do not set an ID range that would include the 0 value; the SSSD service does not handle the 0 ID value. When extending an ID range manually, make sure that the newly extended range is included in the IdM ID range; you can check this using the ipa idrange-find command. Run the ipa idrange-find -h command to display help for how to use ipa idrange-find . A brief verification sketch follows this entry. | [
"ipa-replica-manage dnarange-set masterA.example.com 1250-1499",
"ipa-replica-manage dnanextrange-set masterB.example.com 1001-5000"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/man-set-extend-id-ranges |
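Before and after assigning a range manually, it helps to cross-check the values; for example (the server name follows the examples above, and output is omitted):

ipa-replica-manage dnarange-show masterA.example.com
ipa idrange-find

The first command displays the DNA range currently assigned to the server, and the second lists the IdM ID ranges so that you can confirm the newly assigned range falls inside one of them and does not overlap with the ranges of other servers.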
10.2. Tuning Directory Server For Importing a Large Number of Entries | 10.2. Tuning Directory Server For Importing a Large Number of Entries When you import a large number of entries, operating system settings on the maximum number of user processes can limit the performance of Directory Server. To temporarily increase the maximum number of processes, enter: When the user logs off, the setting reverts to its default value. To permanently increase the maximum number of processes, see " How to set ulimit values " ; a generic persistence sketch also follows this entry. | [
"ulimit -u 32000"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/tuning-directory-server-when-importing-a-large-number-of-entries |
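For reference, one common way to make such a limit persistent is a pam_limits entry or, for services started by systemd, a LimitNPROC setting in a unit drop-in. The snippet below is only a generic sketch; the dirsrv user name is an assumption, and the linked Red Hat solution describes the supported procedure for Directory Server:

# /etc/security/limits.conf - applies to PAM login sessions
dirsrv    soft    nproc    32000
dirsrv    hard    nproc    32000
# For a systemd-managed instance, use a drop-in with: LimitNPROC=32000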
Chapter 17. Network-Bound Disk Encryption (NBDE) | Chapter 17. Network-Bound Disk Encryption (NBDE) 17.1. About disk encryption technology Network-Bound Disk Encryption (NBDE) allows you to encrypt root volumes of hard drives on physical and virtual machines without having to manually enter a password when restarting machines. 17.1.1. Disk encryption technology comparison To understand the merits of Network-Bound Disk Encryption (NBDE) for securing data at rest on edge servers, compare key escrow and TPM disk encryption without Clevis to NBDE on systems running Red Hat Enterprise Linux (RHEL). The following table presents some tradeoffs to consider around the threat model and the complexity of each encryption solution. Scenario Key escrow TPM disk encryption (without Clevis) NBDE Protects against single-disk theft X X X Protects against entire-server theft X X Systems can reboot independently from the network X No periodic rekeying X Key is never transmitted over a network X X Supported by OpenShift X X 17.1.1.1. Key escrow Key escrow is the traditional system for storing cryptographic keys. The key server on the network stores the encryption key for a node with an encrypted boot disk and returns it when queried. The complexities around key management, transport encryption, and authentication do not make this a reasonable choice for boot disk encryption. Although available in Red Hat Enterprise Linux (RHEL), key escrow-based disk encryption setup and management is a manual process and not suited to OpenShift Container Platform automation operations, including automated addition of nodes, and currently not supported by OpenShift Container Platform. 17.1.1.2. TPM encryption Trusted Platform Module (TPM) disk encryption is best suited for data centers or installations in remote protected locations. Full disk encryption utilities such as dm-crypt and BitLocker encrypt disks with a TPM bind key, and then store the TPM bind key in the TPM, which is attached to the motherboard of the node. The main benefit of this method is that there is no external dependency, and the node is able to decrypt its own disks at boot time without any external interaction. TPM disk encryption protects against decryption of data if the disk is stolen from the node and analyzed externally. However, for insecure locations this may not be sufficient. For example, if an attacker steals the entire node, the attacker can intercept the data when powering on the node, because the node decrypts its own disks. This applies to nodes with physical TPM2 chips as well as virtual machines with Virtual Trusted Platform Module (VTPM) access. 17.1.1.3. Network-Bound Disk Encryption (NBDE) Network-Bound Disk Encryption (NBDE) effectively ties the encryption key to an external server or set of servers in a secure and anonymous way across the network. This is not a key escrow, in that the nodes do not store the encryption key or transfer it over the network, but otherwise behaves in a similar fashion. Clevis and Tang are generic client and server components that provide network-bound encryption. Red Hat Enterprise Linux CoreOS (RHCOS) uses these components in conjunction with Linux Unified Key Setup-on-disk-format (LUKS) to encrypt and decrypt root and non-root storage volumes to accomplish Network-Bound Disk Encryption. When a node starts, it attempts to contact a predefined set of Tang servers by performing a cryptographic handshake. 
If it can reach the required number of Tang servers, the node can construct its disk decryption key and unlock the disks to continue booting. If the node cannot access a Tang server due to a network outage or server unavailability, the node cannot boot and continues retrying indefinitely until the Tang servers become available again. Because the key is effectively tied to the node's presence in a network, an attacker attempting to gain access to the data at rest would need to obtain both the disks on the node, and network access to the Tang server as well. The following figure illustrates the deployment model for NBDE. The following figure illustrates NBDE behavior during a reboot. 17.1.1.4. Secret sharing encryption Shamir's secret sharing (sss) is a cryptographic algorithm to securely divide up, distribute, and re-assemble keys. Using this algorithm, OpenShift Container Platform can support more complicated mixtures of key protection. When you configure a cluster node to use multiple Tang servers, OpenShift Container Platform uses sss to set up a decryption policy that will succeed if at least one of the specified servers is available. You can create layers for additional security. For example, you can define a policy where OpenShift Container Platform requires both the TPM and one of the given list of Tang servers to decrypt the disk. 17.1.2. Tang server disk encryption The following components and technologies implement Network-Bound Disk Encryption (NBDE). Figure 17.1. NBDE scheme when using a LUKS1-encrypted volume. The luksmeta package is not used for LUKS2 volumes. Tang is a server for binding data to network presence. It makes a node containing the data available when the node is bound to a certain secure network. Tang is stateless and does not require Transport Layer Security (TLS) or authentication. Unlike escrow-based solutions, where the key server stores all encryption keys and has knowledge of every encryption key, Tang never interacts with any node keys, so it never gains any identifying information from the node. Clevis is a pluggable framework for automated decryption that provides automated unlocking of Linux Unified Key Setup-on-disk-format (LUKS) volumes. The Clevis package runs on the node and provides the client side of the feature. A Clevis pin is a plugin into the Clevis framework. There are three pin types: TPM2 Binds the disk encryption to the TPM2. Tang Binds the disk encryption to a Tang server to enable NBDE. Shamir's secret sharing (sss) Allows more complex combinations of other pins. It allows more nuanced policies such as the following: Must be able to reach one of these three Tang servers Must be able to reach three of these five Tang servers Must be able to reach the TPM2 AND at least one of these three Tang servers 17.1.3. Tang server location planning When planning your Tang server environment, consider the physical and network locations of the Tang servers. Physical location The geographic location of the Tang servers is relatively unimportant, as long as they are suitably secured from unauthorized access or theft and offer the required availability and accessibility to run a critical service. Nodes with Clevis clients do not require local Tang servers as long as the Tang servers are available at all times. Disaster recovery requires both redundant power and redundant network connectivity to Tang servers regardless of their location. 
Network location Any node with network access to the Tang servers can decrypt their own disk partitions, or any other disks encrypted by the same Tang servers. Select network locations for the Tang servers that ensure the presence or absence of network connectivity from a given host allows for permission to decrypt. For example, firewall protections might be in place to prohibit access from any type of guest or public network, or any network jack located in an unsecured area of the building. Additionally, maintain network segregation between production and development networks. This assists in defining appropriate network locations and adds an additional layer of security. Do not deploy Tang servers on the same resource, for example, the same rolebindings.rbac.authorization.k8s.io cluster, that they are responsible for unlocking. However, a cluster of Tang servers and other security resources can be a useful configuration to enable support of multiple additional clusters and cluster resources. 17.1.4. Tang server sizing requirements The requirements around availability, network, and physical location drive the decision of how many Tang servers to use, rather than any concern over server capacity. Tang servers do not maintain the state of data encrypted using Tang resources. Tang servers are either fully independent or share only their key material, which enables them to scale well. There are two ways Tang servers handle key material: Multiple Tang servers share key material: You must load balance Tang servers sharing keys behind the same URL. The configuration can be as simple as round-robin DNS, or you can use physical load balancers. You can scale from a single Tang server to multiple Tang servers. Scaling Tang servers does not require rekeying or client reconfiguration on the node when the Tang servers share key material and the same URL. Client node setup and key rotation only requires one Tang server. Multiple Tang servers generate their own key material: You can configure multiple Tang servers at installation time. You can scale an individual Tang server behind a load balancer. All Tang servers must be available during client node setup or key rotation. When a client node boots using the default configuration, the Clevis client contacts all Tang servers. Only n Tang servers must be online to proceed with decryption. The default value for n is 1. Red Hat does not support postinstallation configuration that changes the behavior of the Tang servers. 17.1.5. Logging considerations Centralized logging of Tang traffic is advantageous because it might allow you to detect such things as unexpected decryption requests. For example: A node requesting decryption of a passphrase that does not correspond to its boot sequence A node requesting decryption outside of a known maintenance activity, such as cycling keys 17.2. Tang server installation considerations Network-Bound Disk Encryption (NBDE) must be enabled when a cluster node is installed. However, you can change the disk encryption policy at any time after it was initialized at installation. 17.2.1. Installation scenarios Consider the following recommendations when planning Tang server installations: Small environments can use a single set of key material, even when using multiple Tang servers: Key rotations are easier. Tang servers can scale easily to permit high availability. 
Large environments can benefit from multiple sets of key material: Physically diverse installations do not require the copying and synchronizing of key material between geographic regions. Key rotations are more complex in large environments. Node installation and rekeying require network connectivity to all Tang servers. A small increase in network traffic can occur due to a booting node querying all Tang servers during decryption. Note that while only one Clevis client query must succeed, Clevis queries all Tang servers. Further complexity: Additional manual reconfiguration can permit the Shamir's secret sharing (sss) of any N of M servers online in order to decrypt the disk partition. Decrypting disks in this scenario requires multiple sets of key material, and manual management of Tang servers and nodes with Clevis clients after the initial installation. High level recommendations: For a single RAN deployment, a limited set of Tang servers can run in the corresponding domain controller (DC). For multiple RAN deployments, you must decide whether to run Tang servers in each corresponding DC or whether a global Tang environment better suits the other needs and requirements of the system. 17.2.2. Installing a Tang server To deploy one or more Tang servers, you can choose from the following options depending on your scenario: Deploying a Tang server using the NBDE Tang Server Operator Deploying a Tang server with SELinux in enforcing mode on RHEL systems Configuring a Tang server in the RHEL web console Deploying Tang as a container Using the nbde_server System Role for setting up multiple Tang servers 17.2.2.1. Compute requirements The computational requirements for the Tang server are very low. Any typical server grade configuration that you would use to deploy a server into production can provision sufficient compute capacity. High availability considerations are solely for availability and not additional compute power to satisfy client demands. 17.2.2.2. Automatic start at boot Due to the sensitive nature of the key material the Tang server uses, you should keep in mind that the overhead of manual intervention during the Tang server's boot sequence can be beneficial. By default, if a Tang server starts and does not have key material present in the expected local volume, it will create fresh material and serve it. You can avoid this default behavior by either starting with pre-existing key material or aborting the startup and waiting for manual intervention. 17.2.2.3. HTTP versus HTTPS Traffic to the Tang server can be encrypted (HTTPS) or plaintext (HTTP). There are no significant security advantages of encrypting this traffic, and leaving it decrypted removes any complexity or failure conditions related to Transport Layer Security (TLS) certificate checking in the node running a Clevis client. While it is possible to perform passive monitoring of unencrypted traffic between the node's Clevis client and the Tang server, the ability to use this traffic to determine the key material is at best a future theoretical concern. Any such traffic analysis would require large quantities of captured data. Key rotation would immediately invalidate it. Finally, any threat actor able to perform passive monitoring has already obtained the necessary network access to perform manual connections to the Tang server and can perform the simpler manual decryption of captured Clevis headers. 
However, because other network policies in place at the installation site might require traffic encryption regardless of application, consider leaving this decision to the cluster administrator. Additional resources Configuring automated unlocking of encrypted volumes using policy-based decryption in the RHEL 8 Security hardening document Official Tang server container Encrypting and mirroring disks during installation 17.3. Tang server encryption key management The cryptographic mechanism to recreate the encryption key is based on the blinded key stored on the node and the private key of the involved Tang servers. To protect against the possibility of an attacker who has obtained both the Tang server private key and the node's encrypted disk, periodic rekeying is advisable. You must perform the rekeying operation for every node before you can delete the old key from the Tang server. The following sections provide procedures for rekeying and deleting old keys. 17.3.1. Backing up keys for a Tang server The Tang server uses /usr/libexec/tangd-keygen to generate new keys and stores them in the /var/db/tang directory by default. To recover the Tang server in the event of a failure, back up this directory. The keys are sensitive and because they are able to perform the boot disk decryption of all hosts that have used them, the keys must be protected accordingly. Procedure Copy the backup key from the /var/db/tang directory to the temp directory from which you can restore the key. 17.3.2. Recovering keys for a Tang server You can recover the keys for a Tang server by accessing the keys from a backup. Procedure Restore the key from your backup folder to the /var/db/tang/ directory. When the Tang server starts up, it advertises and uses these restored keys. 17.3.3. Rekeying Tang servers This procedure uses a set of three Tang servers, each with unique keys, as an example. Using redundant Tang servers reduces the chances of nodes failing to boot automatically. Rekeying a Tang server, and all associated NBDE-encrypted nodes, is a three-step procedure. Prerequisites A working Network-Bound Disk Encryption (NBDE) installation on one or more nodes. Procedure Generate a new Tang server key. Rekey all NBDE-encrypted nodes so they use the new key. Delete the old Tang server key. Note Deleting the old key before all NBDE-encrypted nodes have completed their rekeying causes those nodes to become overly dependent on any other configured Tang servers. Figure 17.2. Example workflow for rekeying a Tang server 17.3.3.1. Generating a new Tang server key Prerequisites A root shell on the Linux machine running the Tang server. To facilitate verification of the Tang server key rotation, encrypt a small test file with the old key: # echo plaintext | clevis encrypt tang '{"url":"http://localhost:7500"}' -y >/tmp/encrypted.oldkey Verify that the encryption succeeded and the file can be decrypted to produce the same string plaintext : # clevis decrypt </tmp/encrypted.oldkey Procedure Locate and access the directory that stores the Tang server key. This is usually the /var/db/tang directory. 
Check the currently advertised key thumbprint: # tang-show-keys 7500 Example output 36AHjNH3NZDSnlONLz1-V4ie6t8 Enter the Tang server key directory: # cd /var/db/tang/ List the current Tang server keys: # ls -A1 Example output 36AHjNH3NZDSnlONLz1-V4ie6t8.jwk gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk During normal Tang server operations, there are two .jwk files in this directory: one for signing and verification, and another for key derivation. Disable advertisement of the old keys: # for key in *.jwk; do \ mv -- "USDkey" ".USDkey"; \ done New clients setting up Network-Bound Disk Encryption (NBDE) or requesting keys will no longer see the old keys. Existing clients can still access and use the old keys until they are deleted. The Tang server reads but does not advertise keys stored in UNIX hidden files, which start with the . character. Generate a new key: # /usr/libexec/tangd-keygen /var/db/tang List the current Tang server keys to verify the old keys are no longer advertised, as they are now hidden files, and new keys are present: # ls -A1 Example output .36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk Tang automatically advertises the new keys. Note More recent Tang server installations include a helper script, /usr/libexec/tangd-rotate-keys, that takes care of disabling advertisement and generating the new keys simultaneously. If you are running multiple Tang servers behind a load balancer that share the same key material, ensure the changes made here are properly synchronized across the entire set of servers before proceeding. Verification Verify that the Tang server is advertising the new key, and not advertising the old key: # tang-show-keys 7500 Example output WOjQYkyK7DxY_T5pMncMO5w0f6E Verify that the old key, while not advertised, is still available to decryption requests: # clevis decrypt </tmp/encrypted.oldkey 17.3.3.2. Rekeying all NBDE nodes You can rekey all of the nodes on a remote cluster by using a DaemonSet object without incurring any downtime to the remote cluster. Note If a node loses power during the rekeying, it might become unbootable, and must be redeployed via Red Hat Advanced Cluster Management (RHACM) or a GitOps pipeline. Prerequisites cluster-admin access to all clusters with Network-Bound Disk Encryption (NBDE) nodes. All Tang servers must be accessible to every NBDE node undergoing rekeying, even if the keys of a Tang server have not changed. Obtain the Tang server URL and key thumbprint for every Tang server. Procedure Create a DaemonSet object based on the following template. This template sets up three redundant Tang servers, but can be easily adapted to other situations. 
Change the Tang server URLs and thumbprints in the NEW_TANG_PIN environment to suit your environment: apiVersion: apps/v1 kind: DaemonSet metadata: name: tang-rekey namespace: openshift-machine-config-operator spec: selector: matchLabels: name: tang-rekey template: metadata: labels: name: tang-rekey spec: containers: - name: tang-rekey image: registry.access.redhat.com/ubi9/ubi-minimal:latest imagePullPolicy: IfNotPresent command: - "/sbin/chroot" - "/host" - "/bin/bash" - "-ec" args: - | rm -f /tmp/rekey-complete || true echo "Current tang pin:" clevis-luks-list -d USDROOT_DEV -s 1 echo "Applying new tang pin: USDNEW_TANG_PIN" clevis-luks-edit -f -d USDROOT_DEV -s 1 -c "USDNEW_TANG_PIN" echo "Pin applied successfully" touch /tmp/rekey-complete sleep infinity readinessProbe: exec: command: - cat - /host/tmp/rekey-complete initialDelaySeconds: 30 periodSeconds: 10 env: - name: ROOT_DEV value: /dev/disk/by-partlabel/root - name: NEW_TANG_PIN value: >- {"t":1,"pins":{"tang":[ {"url":"http://tangserver01:7500","thp":"WOjQYkyK7DxY_T5pMncMO5w0f6E"}, {"url":"http://tangserver02:7500","thp":"I5Ynh2JefoAO3tNH9TgI4obIaXI"}, {"url":"http://tangserver03:7500","thp":"38qWZVeDKzCPG9pHLqKzs6k1ons"} ]}} volumeMounts: - name: hostroot mountPath: /host securityContext: privileged: true volumes: - name: hostroot hostPath: path: / nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always serviceAccount: machine-config-daemon serviceAccountName: machine-config-daemon In this case, even though you are rekeying tangserver01 , you must specify not only the new thumbprint for tangserver01 , but also the current thumbprints for all other Tang servers. Failure to specify all thumbprints for a rekeying operation opens up the opportunity for a man-in-the-middle attack. To distribute the daemon set to every cluster that must be rekeyed, run the following command: USD oc apply -f tang-rekey.yaml However, to run at scale, wrap the daemon set in an ACM policy. This ACM configuration must contain one policy to deploy the daemon set, a second policy to check that all the daemon set pods are READY, and a placement rule to apply it to the appropriate set of clusters. Note After validating that the daemon set has successfully rekeyed all servers, delete the daemon set. If you do not delete the daemon set, it must be deleted before the rekeying operation. Verification After you distribute the daemon set, monitor the daemon sets to ensure that the rekeying has completed successfully. The script in the example daemon set terminates with an error if the rekeying failed, and remains in the CURRENT state if successful. There is also a readiness probe that marks the pod as READY when the rekeying has completed successfully. This is an example of the output listing for the daemon set before the rekeying has completed: USD oc get -n openshift-machine-config-operator ds tang-rekey Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 0 1 0 kubernetes.io/os=linux 11s This is an example of the output listing for the daemon set after the rekeying has completed successfully: USD oc get -n openshift-machine-config-operator ds tang-rekey Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 1 1 1 kubernetes.io/os=linux 13h Rekeying usually takes a few minutes to complete. 
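If you prefer not to re-run the oc get command manually, a small shell loop can poll the daemon set until every pod reports READY. This is a minimal sketch: the daemon set name and namespace match the example above, desiredNumberScheduled and numberReady are standard DaemonSet status fields, and the 30-second polling interval is an arbitrary choice:

$ while true; do
    # Read the desired and ready pod counts from the DaemonSet status
    desired=$(oc get -n openshift-machine-config-operator ds tang-rekey -o jsonpath='{.status.desiredNumberScheduled}')
    ready=$(oc get -n openshift-machine-config-operator ds tang-rekey -o jsonpath='{.status.numberReady}')
    echo "READY: ${ready}/${desired}"
    # Stop polling once every scheduled pod has passed its readiness probe
    if [ -n "${desired}" ] && [ "${ready}" = "${desired}" ]; then break; fi
    sleep 30
  done

Because the readiness probe in the template succeeds only after /tmp/rekey-complete exists on the host, the loop exiting means that every node has applied the new pin.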
Note If you use ACM policies to distribute the daemon sets to multiple clusters, you must include a compliance policy that checks every daemon set's READY count is equal to the DESIRED count. In this way, compliance to such a policy demonstrates that all daemon set pods are READY and the rekeying has completed successfully. You could also use an ACM search to query all of the daemon sets' states. 17.3.3.3. Troubleshooting temporary rekeying errors for Tang servers To determine if the error condition from rekeying the Tang servers is temporary, perform the following procedure. Temporary error conditions might include: Temporary network outages Tang server maintenance Generally, when these types of temporary error conditions occur, you can wait until the daemon set succeeds in resolving the error or you can delete the daemon set and not try again until the temporary error condition has been resolved. Procedure Restart the pod that performs the rekeying operation using the normal Kubernetes pod restart policy. If any of the associated Tang servers are unavailable, try rekeying until all the servers are back online. 17.3.3.4. Troubleshooting permanent rekeying errors for Tang servers If, after rekeying the Tang servers, the READY count does not equal the DESIRED count after an extended period of time, it might indicate a permanent failure condition. In this case, the following conditions might apply: A typographical error in the Tang server URL or thumbprint in the NEW_TANG_PIN definition. The Tang server is decommissioned or the keys are permanently lost. Prerequisites The commands shown in this procedure can be run on the Tang server or on any Linux system that has network access to the Tang server. Procedure Validate the Tang server configuration by performing a simple encrypt and decrypt operation on each Tang server's configuration as defined in the daemon set. This is an example of an encryption and decryption attempt with a bad thumbprint: USD echo "okay" | clevis encrypt tang \ '{"url":"http://tangserver02:7500","thp":"badthumbprint"}' | \ clevis decrypt Example output Unable to fetch advertisement: 'http://tangserver02:7500/adv/badthumbprint'! This is an example of an encryption and decryption attempt with a good thumbprint: USD echo "okay" | clevis encrypt tang \ '{"url":"http://tangserver03:7500","thp":"goodthumbprint"}' | \ clevis decrypt Example output okay After you identify the root cause, remedy the underlying situation: Delete the non-working daemon set. Edit the daemon set definition to fix the underlying issue. This might include any of the following actions: Edit a Tang server entry to correct the URL and thumbprint. Remove a Tang server that is no longer in service. Add a new Tang server that is a replacement for a decommissioned server. Distribute the updated daemon set again. Note When replacing, removing, or adding a Tang server from a configuration, the rekeying operation will succeed as long as at least one original server is still functional, including the server currently being rekeyed. If none of the original Tang servers are functional or can be recovered, recovery of the system is impossible and you must redeploy the affected nodes. Verification Check the logs from each pod in the daemon set to determine whether the rekeying completed successfully. If the rekeying is not successful, the logs might indicate the failure condition. 
Locate the name of the container that was created by the daemon set: USD oc get pods -A | grep tang-rekey Example output openshift-machine-config-operator tang-rekey-7ks6h 1/1 Running 20 (8m39s ago) 89m Print the logs from the container. The following log is from a completed successful rekeying operation: USD oc logs tang-rekey-7ks6h Example output Current tang pin: 1: sss '{"t":1,"pins":{"tang":[{"url":"http://10.46.55.192:7500"},{"url":"http://10.46.55.192:7501"},{"url":"http://10.46.55.192:7502"}]}}' Applying new tang pin: {"t":1,"pins":{"tang":[ {"url":"http://tangserver01:7500","thp":"WOjQYkyK7DxY_T5pMncMO5w0f6E"}, {"url":"http://tangserver02:7500","thp":"I5Ynh2JefoAO3tNH9TgI4obIaXI"}, {"url":"http://tangserver03:7500","thp":"38qWZVeDKzCPG9pHLqKzs6k1ons"} ]}} Updating binding... Binding edited successfully Pin applied successfully 17.3.4. Deleting old Tang server keys Prerequisites A root shell on the Linux machine running the Tang server. Procedure Locate and access the directory where the Tang server key is stored. This is usually the /var/db/tang directory: # cd /var/db/tang/ List the current Tang server keys, showing the advertised and unadvertised keys: # ls -A1 Example output .36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk Delete the old keys: # rm .*.jwk List the current Tang server keys to verify the unadvertised keys are no longer present: # ls -A1 Example output Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk Verification At this point, the server still advertises the new keys, but an attempt to decrypt based on the old key will fail. Query the Tang server for the current advertised key thumbprints: # tang-show-keys 7500 Example output WOjQYkyK7DxY_T5pMncMO5w0f6E Decrypt the test file created earlier to verify decryption against the old keys fails: # clevis decrypt </tmp/encrypted.oldkey Example output Error communicating with the server! If you are running multiple Tang servers behind a load balancer that share the same key material, ensure the changes made are properly synchronized across the entire set of servers before proceeding. 17.4. Disaster recovery considerations This section describes several potential disaster situations and the procedures to respond to each of them. Additional situations will be added here as they are discovered or presumed likely to be possible. 17.4.1. Loss of a client machine The loss of a cluster node that uses the Tang server to decrypt its disk partition is not a disaster. Whether the machine was stolen, suffered hardware failure, or another loss scenario is not important: the disks are encrypted and considered unrecoverable. However, in the event of theft, a precautionary rotation of the Tang server's keys and rekeying of all remaining nodes would be prudent to ensure the disks remain unrecoverable even in the event the thieves subsequently gain access to the Tang servers. To recover from this situation, either reinstall or replace the node. 17.4.2. Planning for a loss of client network connectivity The loss of network connectivity to an individual node will cause it to become unable to boot in an unattended fashion. 
If you are planning work that might cause a loss of network connectivity, you can reveal the passphrase for an onsite technician to use manually, and then rotate the keys afterwards to invalidate it: Procedure Before the network becomes unavailable, show the password used in the first slot -s 1 of device /dev/vda2 with this command: USD sudo clevis luks pass -d /dev/vda2 -s 1 Invalidate that value and regenerate a new random boot-time passphrase with this command: USD sudo clevis luks regen -d /dev/vda2 -s 1 17.4.3. Unexpected loss of network connectivity If the network disruption is unexpected and a node reboots, consider the following scenarios: If any nodes are still online, ensure that they do not reboot until network connectivity is restored. This is not applicable for single-node clusters. The node will remain offline until such time that either network connectivity is restored, or a pre-established passphrase is entered manually at the console. In exceptional circumstances, network administrators might be able to reconfigure network segments to reestablish access, but this is counter to the intent of NBDE, which is that lack of network access means lack of ability to boot. The lack of network access at the node can reasonably be expected to impact that node's ability to function as well as its ability to boot. Even if the node were to boot via manual intervention, the lack of network access would make it effectively useless. 17.4.4. Recovering network connectivity manually A somewhat complex and manually intensive process is also available to the onsite technician for network recovery. Procedure The onsite technician extracts the Clevis header from the hard disks. Depending on BIOS lockdown, this might involve removing the disks and installing them in a lab machine. The onsite technician transmits the Clevis headers to a colleague with legitimate access to the Tang network who then performs the decryption. Due to the necessity of limited access to the Tang network, the technician should not be able to access that network via VPN or other remote connectivity. Similarly, the technician cannot patch the remote server through to this network in order to decrypt the disks automatically. The technician reinstalls the disk and manually enters the plain text passphrase provided by their colleague. The machine successfully starts even without direct access to the Tang servers. Note that the transmission of the key material from the install site to another site with network access must be done carefully. When network connectivity is restored, the technician rotates the encryption keys. 17.4.5. Emergency recovery of network connectivity If you are unable to recover network connectivity manually, consider the following steps. Be aware that these steps are discouraged if other methods to recover network connectivity are available. This method must only be performed by a highly trusted technician. Taking the Tang server's key material to the remote site is considered to be a breach of the key material and all servers must be rekeyed and re-encrypted. This method must be used in extreme cases only, or as a proof of concept recovery method to demonstrate its viability. Equally extreme, but theoretically possible, is to power the server in question with an Uninterruptible Power Supply (UPS), transport the server to a location with network connectivity to boot and decrypt the disks, and then restore the server at the original location on battery power to continue operation. 
If you want to use a backup manual passphrase, you must create it before the failure situation occurs. Just as attack scenarios become more complex with TPM and Tang compared to a stand-alone Tang installation, so emergency disaster recovery processes are also made more complex if leveraging the same method. 17.4.6. Loss of a network segment The loss of a network segment, making a Tang server temporarily unavailable, has the following consequences: OpenShift Container Platform nodes continue to boot as normal, provided other servers are available. New nodes cannot establish their encryption keys until the network segment is restored. In this case, ensure connectivity to remote geographic locations for the purposes of high availability and redundancy. This is because when you are installing a new node or rekeying an existing node, all of the Tang servers you are referencing in that operation must be available. A hybrid model for a vastly diverse network, such as five geographic regions in which each client is connected to the closest three clients is worth investigating. In this scenario, new clients are able to establish their encryption keys with the subset of servers that are reachable. For example, in the set of tang1 , tang2 and tang3 servers, if tang2 becomes unreachable clients can still establish their encryption keys with tang1 and tang3 , and at a later time re-establish with the full set. This can involve either a manual intervention or a more complex automation to be available. 17.4.7. Loss of a Tang server The loss of an individual Tang server within a load balanced set of servers with identical key material is completely transparent to the clients. The temporary failure of all Tang servers associated with the same URL, that is, the entire load balanced set, can be considered the same as the loss of a network segment. Existing clients have the ability to decrypt their disk partitions so long as another preconfigured Tang server is available. New clients cannot enroll until at least one of these servers comes back online. You can mitigate the physical loss of a Tang server by either reinstalling the server or restoring the server from backups. Ensure that the backup and restore processes of the key material is adequately protected from unauthorized access. 17.4.8. Rekeying compromised key material If key material is potentially exposed to unauthorized third parties, such as through the physical theft of a Tang server or associated data, immediately rotate the keys. Procedure Rekey any Tang server holding the affected material. Rekey all clients using the Tang server. Destroy the original key material. Scrutinize any incidents that result in unintended exposure of the master encryption key. If possible, take compromised nodes offline and re-encrypt their disks. Tip Reformatting and reinstalling on the same physical hardware, although slow, is easy to automate and test. | [
"echo plaintext | clevis encrypt tang '{\"url\":\"http://localhost:7500\"}' -y >/tmp/encrypted.oldkey",
"clevis decrypt </tmp/encrypted.oldkey",
"tang-show-keys 7500",
"36AHjNH3NZDSnlONLz1-V4ie6t8",
"cd /var/db/tang/",
"ls -A1",
"36AHjNH3NZDSnlONLz1-V4ie6t8.jwk gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk",
"for key in *.jwk; do mv -- \"USDkey\" \".USDkey\"; done",
"/usr/libexec/tangd-keygen /var/db/tang",
"ls -A1",
".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"tang-show-keys 7500",
"WOjQYkyK7DxY_T5pMncMO5w0f6E",
"clevis decrypt </tmp/encrypted.oldkey",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: tang-rekey namespace: openshift-machine-config-operator spec: selector: matchLabels: name: tang-rekey template: metadata: labels: name: tang-rekey spec: containers: - name: tang-rekey image: registry.access.redhat.com/ubi9/ubi-minimal:latest imagePullPolicy: IfNotPresent command: - \"/sbin/chroot\" - \"/host\" - \"/bin/bash\" - \"-ec\" args: - | rm -f /tmp/rekey-complete || true echo \"Current tang pin:\" clevis-luks-list -d USDROOT_DEV -s 1 echo \"Applying new tang pin: USDNEW_TANG_PIN\" clevis-luks-edit -f -d USDROOT_DEV -s 1 -c \"USDNEW_TANG_PIN\" echo \"Pin applied successfully\" touch /tmp/rekey-complete sleep infinity readinessProbe: exec: command: - cat - /host/tmp/rekey-complete initialDelaySeconds: 30 periodSeconds: 10 env: - name: ROOT_DEV value: /dev/disk/by-partlabel/root - name: NEW_TANG_PIN value: >- {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} volumeMounts: - name: hostroot mountPath: /host securityContext: privileged: true volumes: - name: hostroot hostPath: path: / nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always serviceAccount: machine-config-daemon serviceAccountName: machine-config-daemon",
"oc apply -f tang-rekey.yaml",
"oc get -n openshift-machine-config-operator ds tang-rekey",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 0 1 0 kubernetes.io/os=linux 11s",
"oc get -n openshift-machine-config-operator ds tang-rekey",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 1 1 1 kubernetes.io/os=linux 13h",
"echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver02:7500\",\"thp\":\"badthumbprint\"}' | clevis decrypt",
"Unable to fetch advertisement: 'http://tangserver02:7500/adv/badthumbprint'!",
"echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver03:7500\",\"thp\":\"goodthumbprint\"}' | clevis decrypt",
"okay",
"oc get pods -A | grep tang-rekey",
"openshift-machine-config-operator tang-rekey-7ks6h 1/1 Running 20 (8m39s ago) 89m",
"oc logs tang-rekey-7ks6h",
"Current tang pin: 1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://10.46.55.192:7500\"},{\"url\":\"http://10.46.55.192:7501\"},{\"url\":\"http://10.46.55.192:7502\"}]}}' Applying new tang pin: {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} Updating binding Binding edited successfully Pin applied successfully",
"cd /var/db/tang/",
"ls -A1",
".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"rm .*.jwk",
"ls -A1",
"Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"tang-show-keys 7500",
"WOjQYkyK7DxY_T5pMncMO5w0f6E",
"clevis decrypt </tmp/encryptValidation",
"Error communicating with the server!",
"sudo clevis luks pass -d /dev/vda2 -s 1",
"sudo clevis luks regen -d /dev/vda2 -s 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_and_compliance/network-bound-disk-encryption-nbde |
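The server-side part of an emergency key rotation, as described in Rekeying compromised key material, uses the same commands shown earlier in this chapter. The following is a minimal sketch that strings them together; it assumes the default /var/db/tang key directory and the port 7500 used in the earlier examples, and it must be run as root on each affected Tang server:

# cd /var/db/tang/
# for key in *.jwk; do mv -- "$key" ".$key"; done    # stop advertising the compromised keys
# /usr/libexec/tangd-keygen /var/db/tang              # generate replacement key material
# tang-show-keys 7500                                 # confirm that only the new thumbprints are advertised

Destroy the hidden old keys with rm .*.jwk only after every client has been rekeyed against the new thumbprints, as described in Deleting old Tang server keys.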
14.8. Samba Distribution Programs | 14.8. Samba Distribution Programs 14.8.1. findsmb findsmb <subnet_broadcast_address> The findsmb program is a Perl script which reports information about SMB-aware systems on a specific subnet. If no subnet is specified the local subnet is used. Items displayed include IP address, NetBIOS name, workgroup or domain name, operating system, and version. The following example shows the output of executing findsmb as any valid user on a system: | [
"~]USD findsmb IP ADDR NETBIOS NAME WORKGROUP/OS/VERSION ------------------------------------------------------------------ 10.1.59.25 VERVE [MYGROUP] [Unix] [Samba 3.0.0-15] 10.1.59.26 STATION22 [MYGROUP] [Unix] [Samba 3.0.2-7.FC1] 10.1.56.45 TREK +[WORKGROUP] [Windows 5.0] [Windows 2000 LAN Manager] 10.1.57.94 PIXEL [MYGROUP] [Unix] [Samba 3.0.0-15] 10.1.57.137 MOBILE001 [WORKGROUP] [Windows 5.0] [Windows 2000 LAN Manager] 10.1.57.141 JAWS +[KWIKIMART] [Unix] [Samba 2.2.7a-security-rollup-fix] 10.1.56.159 FRED +[MYGROUP] [Unix] [Samba 3.0.0-14.3E] 10.1.59.192 LEGION *[MYGROUP] [Unix] [Samba 2.2.7-security-rollup-fix] 10.1.56.205 NANCYN +[MYGROUP] [Unix] [Samba 2.2.7a-security-rollup-fix]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-samba-programs |
Chapter 3. Updating Red Hat build of OpenJDK container images | Chapter 3. Updating Red Hat build of OpenJDK container images To ensure that a Red Hat build of OpenJDK container with Java applications includes the latest security updates, rebuild the container. Procedure Pull the base Red Hat build of OpenJDK image. Deploy the Red Hat build of OpenJDK application. For more information, see Deploying Red Hat build of OpenJDK applications in containers . The Red Hat build of OpenJDK container with the Red Hat build of OpenJDK application is updated. Additional resources For more information, see Red Hat OpenJDK Container images . Revised on 2024-05-03 15:34:48 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/packaging_red_hat_build_of_openjdk_17_applications_in_containers/updating-openjdk-container-images
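As a rough sketch of this procedure, assuming the application image is built from a Dockerfile whose FROM line references the ubi9/openjdk-17 base image (the image name, tag, and application name below are illustrative and not taken from this document), the rebuild could look like this:

podman pull registry.access.redhat.com/ubi9/openjdk-17:latest    # refresh the base image with the latest security updates
podman build -t my-openjdk-app:latest .                          # rebuild the application image on top of the refreshed base
podman run -d --name my-openjdk-app my-openjdk-app:latest        # redeploy using the rebuilt image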
Part I. Set Up a Cache Manager | Part I. Set Up a Cache Manager | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/part-Set_Up_a_Cache_Manager |
Chapter 6. Getting Started with OptaPlanner and Quarkus | Chapter 6. Getting Started with OptaPlanner and Quarkus You can use the https://code.quarkus.redhat.com website to generate a Red Hat build of OptaPlanner Quarkus Maven project and automatically add and configure the extensions that you want to use in your application. You can then download the Quarkus Maven repository or use the online Maven repository with your project. 6.1. Apache Maven and Red Hat build of Quarkus Apache Maven is a distributed build automation tool used in Java application development to create, manage, and build software projects. Maven uses standard configuration files called Project Object Model (POM) files to define projects and manage the build process. POM files describe the module and component dependencies, build order, and targets for the resulting project packaging and output using an XML file. This ensures that the project is built in a correct and uniform manner. Maven repositories A Maven repository stores Java libraries, plug-ins, and other build artifacts. The default public repository is the Maven 2 Central Repository, but repositories can be private and internal within a company to share common artifacts among development teams. Repositories are also available from third parties. You can use the online Maven repository with your Quarkus projects or you can download the Red Hat build of Quarkus Maven repository. Maven plug-ins Maven plug-ins are defined parts of a POM file that achieve one or more goals. Quarkus applications use the following Maven plug-ins: Quarkus Maven plug-in ( quarkus-maven-plugin ): Enables Maven to create Quarkus projects, supports the generation of uber-JAR files, and provides a development mode. Maven Surefire plug-in ( maven-surefire-plugin ): Used during the test phase of the build lifecycle to execute unit tests on your application. The plug-in generates text and XML files that contain the test reports. 6.1.1. Configuring the Maven settings.xml file for the online repository You can use the online Maven repository with your Maven project by configuring your user settings.xml file. This is the recommended approach. Maven settings used with a repository manager or repository on a shared server provide better control and manageability of projects. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Procedure Open the Maven ~/.m2/settings.xml file in a text editor or integrated development environment (IDE). Note If there is not a settings.xml file in the ~/.m2/ directory, copy the settings.xml file from the USDMAVEN_HOME/.m2/conf/ directory into the ~/.m2/ directory. Add the following lines to the <profiles> element of the settings.xml file: <!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. 
<activeProfile>red-hat-enterprise-maven-repository</activeProfile> 6.1.2. Downloading and configuring the Quarkus Maven repository If you do not want to use the online Maven repository, you can download and configure the Quarkus Maven repository to create a Quarkus application with Maven. The Quarkus Maven repository contains many of the requirements that Java developers typically use to build their applications. This procedure describes how to edit the settings.xml file to configure the Quarkus Maven repository. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Procedure Download the Red Hat build of Quarkus Maven repository ZIP file from the Software Downloads page of the Red Hat Customer Portal (login required). Expand the downloaded archive. Change directory to the ~/.m2/ directory and open the Maven settings.xml file in a text editor or integrated development environment (IDE). Add the following lines to the <profiles> element of the settings.xml file, where QUARKUS_MAVEN_REPOSITORY is the path of the Quarkus Maven repository that you downloaded. The format of QUARKUS_MAVEN_REPOSITORY must be file://USDPATH , for example file:///home/userX/rh-quarkus-2.13.GA-maven-repository/maven-repository . <!-- Configure the Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> Important If your Maven repository contains outdated artifacts, you might encounter one of the following Maven error messages when you build or deploy your project, where ARTIFACT_NAME is the name of a missing artifact and PROJECT_NAME is the name of the project you are trying to build: Missing artifact PROJECT_NAME [ERROR] Failed to execute goal on project ARTIFACT_NAME ; Could not resolve dependencies for PROJECT_NAME To resolve the issue, delete the cached version of your local repository located in the ~/.m2/repository directory to force a download of the latest Maven artifacts. 6.2. Creating an OptaPlanner Red Hat build of Quarkus Maven project using the Maven plug-in You can get up and running with a Red Hat build of OptaPlanner and Quarkus application using Apache Maven and the Quarkus Maven plug-in. Prerequisites OpenJDK 11 or later is installed. Red Hat build of Open JDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. Procedure In a command terminal, enter the following command to verify that Maven is using JDK 11 and that the Maven version is 3.6 or higher: If the preceding command does not return JDK 11, add the path to JDK 11 to the PATH environment variable and enter the preceding command again. 
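The command referred to above is mvn --version . The numbers in the following output are illustrative only; what matters is that the reported Maven version is 3.6 or higher and that the Java version line points to a JDK 11 installation:

mvn --version

Example output

Apache Maven 3.6.3
Maven home: /usr/share/maven
Java version: 11.0.20, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-11-openjdk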
To generate a Quarkus OptaPlanner quickstart project, enter the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create \ -DprojectGroupId=com.example \ -DprojectArtifactId=optaplanner-quickstart \ -Dextensions="resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson" \ -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 \ -DnoExamples This command creates the following elements in the ./optaplanner-quickstart directory: The Maven structure Example Dockerfile file in src/main/docker The application configuration file Table 6.1. Properties used in the mvn io.quarkus:quarkus-maven-plugin:2.13.Final-redhat-00006:create command Property Description projectGroupId The group ID of the project. projectArtifactId The artifact ID of the project. extensions A comma-separated list of Quarkus extensions to use with this project. For a full list of Quarkus extensions, enter mvn quarkus:list-extensions on the command line. noExamples Creates a project with the project structure but without tests or classes. The values of the projectGroupId and the projectArtifactId properties are used to generate the project version. The default project version is 1.0.0-SNAPSHOT . To view your OptaPlanner project, change directory to the OptaPlanner Quickstarts directory: Review the pom.xml file. The content should be similar to the following example: 6.3. Creating a Red Hat build of Quarkus Maven project using code.quarkus.redhat.com You can use the code.quarkus.redhat.com website to generate a Red Hat build of OptaPlanner Quarkus Maven project and automatically add and configure the extensions that you want to use in your application. In addition, code.quarkus.redhat.com automatically manages the configuration parameters required to compile your project into a native executable. This section walks you through the process of generating an OptaPlanner Maven project and includes the following topics: Specifying basic details about your application. Choosing the extensions that you want to include in your project. Generating a downloadable archive with your project files. Using the custom commands for compiling and starting your application. Prerequisites You have a web browser. Procedure Open https://code.quarkus.redhat.com in your web browser: Specify details about your project: Enter a group name for your project. The format of the name follows the Java package naming convention, for example, com.example . Enter a name that you want to use for Maven artifacts generated from your project, for example code-with-quarkus . Select Build Tool > Maven to specify that you want to create a Maven project. The build tool that you choose determines the following items: The directory structure of your generated project The format of configuration files used in your generated project The custom build script and command for compiling and starting your application that code.quarkus.redhat.com displays for you after you generate your project Note Red Hat provides support for using code.quarkus.redhat.com to create OptaPlanner Maven projects only. Generating Gradle projects is not supported by Red Hat. Enter a version to be used in artifacts generated from your project. The default value of this field is 1.0.0-SNAPSHOT . Using semantic versioning is recommended, but you can use a different type of versioning if you prefer. Enter the package name of artifacts that the build tool generates when you package your project. 
According to the Java package naming conventions the package name should match the group name that you use for your project, but you can specify a different name. Note The code.quarkus.redhat.com website automatically uses the latest release of OptaPlanner. You can manually change the BOM version in the pom.xml file after you generate your project. Select the following extensions to include as dependencies: RESTEasy JAX-RS (quarkus-resteasy) RESTEasy Jackson (quarkus-resteasy-jackson) OptaPlanner AI constraint solver(optaplanner-quarkus) OptaPlanner Jackson (optaplanner-quarkus-jackson) Red Hat provides different levels of support for individual extensions on the list, which are indicated by labels to the name of each extension: SUPPORTED extensions are fully supported by Red Hat for use in enterprise applications in production environments. TECH-PREVIEW extensions are subject to limited support by Red Hat in production environments under the Technology Preview Features Support Scope . DEV-SUPPORT extensions are not supported by Red Hat for use in production environments, but the core functionalities that they provide are supported by Red Hat developers for use in developing new applications. DEPRECATED extension are planned to be replaced with a newer technology or implementation that provides the same functionality. Unlabeled extensions are not supported by Red Hat for use in production environments. Select Generate your application to confirm your choices and display the overlay screen with the download link for the archive that contains your generated project. The overlay screen also shows the custom command that you can use to compile and start your application. Select Download the ZIP to save the archive with the generated project files to your system. Extract the contents of the archive. Navigate to the directory that contains your extracted project files: cd <directory_name> Compile and start your application in development mode: ./mvnw compile quarkus:dev 6.4. Creating a Red Hat build of Quarkus Maven project using the Quarkus CLI You can use the Quarkus command line interface (CLI) to create a Quarkus OptaPlanner project. Prerequisites You have installed the Quarkus CLI. For information, see Building Quarkus Apps with Quarkus Command Line Interface . Procedure Create a Quarkus application: To view the available extensions, enter the following command: This command returns the following extensions: Enter the following command to add extensions to the project's pom.xml file: Open the pom.xml file in a text editor. The contents of the file should look similar to the following example: | [
"<!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>",
"<activeProfile>red-hat-enterprise-maven-repository</activeProfile>",
"<!-- Configure the Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>",
"<activeProfile>red-hat-enterprise-maven-repository</activeProfile>",
"mvn --version",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create -DprojectGroupId=com.example -DprojectArtifactId=optaplanner-quickstart -Dextensions=\"resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson\" -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 -DnoExamples",
"cd optaplanner-quickstart",
"<dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-optaplanner-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> </dependencies>",
"cd <directory_name>",
"./mvnw compile quarkus:dev",
"quarkus create app -P io.quarkus:quarkus-bom:2.13.Final-redhat-00006",
"quarkus ext -i",
"optaplanner-quarkus optaplanner-quarkus-benchmark optaplanner-quarkus-jackson optaplanner-quarkus-jsonb",
"quarkus ext add resteasy-jackson quarkus ext add optaplanner-quarkus quarkus ext add optaplanner-quarkus-jackson",
"<?xml version=\"1.0\"?> <project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <modelVersion>4.0.0</modelVersion> <groupId>org.acme</groupId> <artifactId>code-with-quarkus-optaplanner</artifactId> <version>1.0.0-SNAPSHOT</version> <properties> <compiler-plugin.version>3.8.1</compiler-plugin.version> <maven.compiler.parameters>true</maven.compiler.parameters> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id> <quarkus.platform.version>2.13.Final-redhat-00006</quarkus.platform.version> <surefire-plugin.version>3.0.0-M5</surefire-plugin.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>optaplanner-quarkus</artifactId> <version>2.2.2.Final</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-arc</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>USD{compiler-plugin.version}</version> <configuration> <parameters>USD{maven.compiler.parameters}</parameters> </configuration> </plugin> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build> <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <build> <plugins> <plugin> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> 
</goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin> </plugins> </build> <properties> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles> </project>"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/optaplanner-quarkus-con_getting-started-optaplanner |
Chapter 13. Servers and Services | Chapter 13. Servers and Services The ErrorPolicy directive is now validated The ErrorPolicy configuration directive was not validated on startup, and an unintended default error policy could be used without a warning. The directive is now validated on startup and reset to the default if the configured value is incorrect. The intended policy is used, or a warning message is logged. CUPS now disables SSLv3 encryption by default Previously, it was not possible to disable SSLv3 encryption in the CUPS scheduler, which left it vulnerable to attacks against SSLv3. To solve this issue, the cupsd.conf SSLOptions keyword has been extended to include two new options, AllowRC4 and AllowSSL3 , each of which enables the named feature in cupsd . The new options are also supported in the /etc/cups/client.conf file. The default is now to disable both RC4 and SSL3 for cupsd . cups now allows underscore in printer names The cups service now allows users to include the underscore character (_) in local printer names. Unneeded dependency removed from the tftp-server package Previously, an additional package was installed by default when installing the tftp-server package. With this update, the superfluous package dependency has been removed, and the unneeded package is no longer installed by default when installing tftp-server . The deprecated /etc/sysconfig/conman file has been removed Before introducing the systemd manager, various limits for services could be configured in the /etc/sysconfig/conman file. After migrating to systemd , /etc/sysconfig/conman is no longer used and therefore it was removed. To set limits and other daemon parameters, such as LimitCPU=, LimitDATA=, or LimitCORE=, edit the conman.service file. For more information, see the systemd.exec(5) manual page. In addition, a new variable LimitNOFILE=10000 has been added to the systemd.service file. This variable is commented out by default. Note that after making any changes to the systemd configuration, the systemctl daemon-reload command must be executed for changes to take effect. mod_nss rebase to version 1.0.11 The mod_nss packages have been upgraded to upstream version 1.0.11, which provides a number of bug fixes and enhancements over the previous version. Notably, mod_nss can now enable TLSv1.2, and SSLv2 has been completely removed. Also, support for the ciphers generally considered to be most secure has been added. The vsftpd daemon now supports DHE and ECDHE cipher suites The vsftpd daemon now supports cipher suites based on the Diffie-Hellman Exchange (DHE) and Elliptic Curve Diffie-Hellman Exchange (ECDHE) key-exchange protocols. Permissions can now be set for files uploaded with sftp Inconsistent user environments and strict umask settings could result in inaccessible files when uploading using the sftp utility. With this update, the administrator is able to force exact permissions for files uploaded using sftp , thus avoiding the described issue. LDAP queries used by ssh-ldap-helper can now be adjusted Not all LDAP servers use a default schema as expected by the ssh-ldap-helper tool. This update makes it possible for the administrator to adjust the LDAP query used by ssh-ldap-helper to get public keys from servers using a different schema. The default functionality remains unchanged. A new createolddir directive in the logrotate utility A new logrotate createolddir directive has been added to enable automatic creation of the olddir directory. For more information, see the logrotate(8) manual page. 
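A minimal sketch of a logrotate configuration that uses the new directive is shown below; the log path, rotation schedule, directory mode, and ownership are illustrative values only:

# cat > /etc/logrotate.d/example <<'EOF'
/var/log/example/*.log {
    weekly
    rotate 4
    olddir /var/log/example/archive
    createolddir 0755 root root
    missingok
}
EOF

With createolddir in place, logrotate creates the olddir directory with the given mode and ownership if it does not already exist, instead of failing the rotation.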
Error messages from /etc/cron.daily/logrotate are no longer redirected to /dev/null Error messages generated by the daily cronjob of logrotate are now sent to the root user instead of being silently discarded. In addition, the /etc/cron.daily/logrotate script is marked as a configuration file in RPM. SEED and IDEA based algorithms restricted in mod_ssl The set of cipher suites enabled by default in the mod_ssl module of the Apache HTTP Server has been restricted to improve security. SEED and IDEA based encryption algorithms are no longer enabled in the default configuration of mod_ssl . Apache HTTP Server now supports UPN Names stored in the subject alternative name portion of SSL/TLS client certificates, such as the Microsoft User Principal Name, can now be used from the SSLUserName directive and are now available in mod_ssl environment variables. Users can now authenticate with their Common Access Card (CAC) or certificate with a UPN in it, and have their UPN used as authenticated user information, consumed both by the access control in Apache and by applications using the REMOTE_USER environment variable or a similar mechanism. As a result, users can now set SSLUserName SSL_CLIENT_SAN_OTHER_msUPN_0 for authentication using UPN. The mod_dav lock database is now enabled by default in the mod_dav_fs module The mod_dav lock database is now enabled by default if the Apache HTTP mod_dav_fs module is loaded. The default location ServerRoot/davlockdb can be overridden using the DAVLockDB configuration directive. mod_proxy_wstunnel now supports WebSockets The Apache HTTP mod_proxy_wstunnel module is now enabled by default and it includes support for SSL connections in the wss:// scheme. Additionally, it is possible to use the ws:// scheme in the mod_rewrite directives. This allows for using WebSockets as a target to mod_rewrite and enabling WebSockets in the proxy module. A Tuned profile optimized for Oracle database servers has been included A new oracle Tuned profile, which is specifically optimized for the Oracle database load, is now available. The new profile is delivered in the tuned-profiles-oracle subpackage, so that other related profiles can be added in the future. The oracle profile is based on the enterprise-storage profile, but modifies kernel parameters based on Oracle database requirements and turns transparent huge pages off. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/servers_and_services
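To switch a database host to the new profile, install the subpackage and activate the profile with tuned-adm ; a short sketch using the package and profile names described above:

# yum install tuned-profiles-oracle    # install the profile subpackage
# tuned-adm profile oracle             # activate the Oracle-optimized profile
# tuned-adm active                     # verify which profile is currently active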
Chapter 311. SNMP Component | Chapter 311. SNMP Component Available as of Camel version 2.1 The snmp: component gives you the ability to poll SNMP capable devices or receiving traps Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-snmp</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 311.1. URI format The component supports polling OID values from an SNMP enabled device and receiving traps. You can append query options to the URI in the following format, ?option=value&option=value&... 311.2. Snmp Producer Available from 2.18 release It can also be used to request information using GET method. The response body type is org.apache.camel.component.snmp.SnmpMessage 311.3. Options The SNMP component has no options. The SNMP endpoint is configured using URI syntax: with the following path and query parameters: 311.3.1. Path Parameters (2 parameters): Name Description Default Type host Required Hostname of the SNMP enabled device String port Required Port number of the SNMP enabled device Integer 311.3.2. Query Parameters (35 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean delay (consumer) Sets update rate in seconds 60000 long oids (consumer) Defines which values you are interested in. Please have a look at the Wikipedia to get a better understanding. You may provide a single OID or a coma separated list of OIDs. Example: oids=1.3.6.1.2.1.1.3.0,1.3.6.1.2.1.25.3.2.1.5.1,1.3.6.1.2.1.25.3.5.1.1.1,1.3.6.1.2.1.43.5.1.1.11.1 String protocol (consumer) Here you can select which protocol to use. You can use either udp or tcp. udp String retries (consumer) Defines how often a retry is made before canceling the request. 2 int sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean snmpCommunity (consumer) Sets the community octet string for the snmp request. public String snmpContextEngineId (consumer) Sets the context engine ID field of the scoped PDU. String snmpContextName (consumer) Sets the context name field of this scoped PDU. String snmpVersion (consumer) Sets the snmp version for the request. The value 0 means SNMPv1, 1 means SNMPv2c, and the value 3 means SNMPv3 0 int timeout (consumer) Sets the timeout value for the request in millis. 1500 int treeList (consumer) Sets the flag whether the scoped PDU will be displayed as the list if it has child elements in its tree false boolean type (consumer) Which operation to perform such as poll, trap, etc. SnmpActionType exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. 
ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean authenticationPassphrase (security) The authentication passphrase. If not null, authenticationProtocol must also be not null. RFC3414 11.2 requires passphrases to have a minimum length of 8 bytes. If the length of authenticationPassphrase is less than 8 bytes an IllegalArgumentException is thrown. String authenticationProtocol (security) Authentication protocol to use if security level is set to enable authentication The possible values are: MD5, SHA1 String privacyPassphrase (security) The privacy passphrase. If not null, privacyProtocol must also be not null. RFC3414 11.2 requires passphrases to have a minimum length of 8 bytes. If the length of authenticationPassphrase is less than 8 bytes an IllegalArgumentException is thrown. String privacyProtocol (security) The privacy protocol ID to be associated with this user. If set to null, this user only supports unencrypted messages. String securityLevel (security) Sets the security level for this target. 
The supplied security level must be supported by the security model dependent information associated with the security name set for this target. The value 1 means: No authentication and no encryption. Anyone can create and read messages with this security level. The value 2 means: Authentication and no encryption. Only the one with the right authentication key can create messages with this security level, but anyone can read the contents of the message. The value 3 means: Authentication and encryption. Only the one with the right authentication key can create messages with this security level, and only the one with the right encryption/decryption key can read the contents of the message. 3 int securityName (security) Sets the security name to be used with this target. String 311.4. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.snmp.enabled Enable snmp component true Boolean camel.component.snmp.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 311.5. The result of a poll Given the situation that I poll for the following OIDs: OIDs The result will be the following: Result of toString conversion <?xml version="1.0" encoding="UTF-8"?> <snmp> <entry> <oid>1.3.6.1.2.1.1.3.0</oid> <value>6 days, 21:14:28.00</value> </entry> <entry> <oid>1.3.6.1.2.1.25.3.2.1.5.1</oid> <value>2</value> </entry> <entry> <oid>1.3.6.1.2.1.25.3.5.1.1.1</oid> <value>3</value> </entry> <entry> <oid>1.3.6.1.2.1.43.5.1.1.11.1</oid> <value>6</value> </entry> <entry> <oid>1.3.6.1.2.1.1.1.0</oid> <value>My Very Special Printer Of Brand Unknown</value> </entry> </snmp> As you may have recognized, there is one more result than requested... .1.3.6.1.2.1.1.1.0. This one is filled in by the device automatically in this special case. So it may absolutely happen that you receive more than you requested... be prepared. OID starting with dot representation As you may notice, default snmpVersion is 0 which means version1 in the endpoint if it is not set explicitly. Make sure you explicitly set snmpVersion when you need a value other than the default, for example in cases where you are able to query SNMP tables with different versions. Other possible values are version2c and version3 . 311.6. Examples Polling a remote device: Setting up a trap receiver ( Note that no OID info is needed here! ): From Camel 2.10.0 , you can get the community of SNMP TRAP with message header 'securityName', peer address of the SNMP TRAP with message header 'peerAddress'. Routing example in Java: (converts the SNMP PDU to XML String) from("snmp:192.168.178.23:161?protocol=udp&type=POLL&oids=1.3.6.1.2.1.1.5.0"). convertBodyTo(String.class). to("activemq:snmp.states"); 311.7. See Also Configuring Camel Component Endpoint Getting Started | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-snmp</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"snmp://hostname[:port][?Options]",
"snmp:host:port",
"1.3.6.1.2.1.1.3.0 1.3.6.1.2.1.25.3.2.1.5.1 1.3.6.1.2.1.25.3.5.1.1.1 1.3.6.1.2.1.43.5.1.1.11.1",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <snmp> <entry> <oid>1.3.6.1.2.1.1.3.0</oid> <value>6 days, 21:14:28.00</value> </entry> <entry> <oid>1.3.6.1.2.1.25.3.2.1.5.1</oid> <value>2</value> </entry> <entry> <oid>1.3.6.1.2.1.25.3.5.1.1.1</oid> <value>3</value> </entry> <entry> <oid>1.3.6.1.2.1.43.5.1.1.11.1</oid> <value>6</value> </entry> <entry> <oid>1.3.6.1.2.1.1.1.0</oid> <value>My Very Special Printer Of Brand Unknown</value> </entry> </snmp>",
".1.3.6.1.4.1.6527.3.1.2.21.2.1.50",
"snmp:192.168.178.23:161?protocol=udp&type=POLL&oids=1.3.6.1.2.1.1.5.0",
"snmp:127.0.0.1:162?protocol=udp&type=TRAP",
"from(\"snmp:192.168.178.23:161?protocol=udp&type=POLL&oids=1.3.6.1.2.1.1.5.0\"). convertBodyTo(String.class). to(\"activemq:snmp.states\");"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/snmp-component |
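Before wiring OIDs into a Camel route, it can help to confirm that the target device actually answers them. A minimal sketch using the Net-SNMP command-line tools (an assumption here is that net-snmp-utils is installed and the device accepts the public community string):
# Query the uptime OID used in the polling example above (SNMP v1, community "public")
snmpget -v1 -c public 192.168.178.23 1.3.6.1.2.1.1.3.0
# Walk the system subtree to see which OIDs the device exposes
snmpwalk -v2c -c public 192.168.178.23 1.3.6.1.2.1.1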
function::cpu | function::cpu Name function::cpu - Returns the current cpu number. Synopsis Arguments None General Syntax cpu: long Description This function returns the current cpu number. | [
"function cpu:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-cpu |
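A short illustrative SystemTap one-liner that uses cpu(); the probe point is chosen only for demonstration and assumes the syscall tapset is available (run as root, stop with Ctrl+C):
# Print the processor that handles each open() system call, together with the process name
stap -e 'probe syscall.open { printf("cpu %d: %s\n", cpu(), execname()) }'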
Chapter 8. Building a customized boot menu | Chapter 8. Building a customized boot menu You can build a boot menu containing specific entries or change the order of the entries. For such a task, you can use GRUB, grubby , and Boot Loader Specification ( BLS ) files. The following sections provide information about using GRUB and grubby to do basic customization of the boot menu. 8.1. The GRUB configuration file Learn about the boot loader configuration file that is /boot/grub2/grub.cfg on BIOS-based machines and /boot/efi/EFI/redhat/grub.cfg on UEFI-based machines. GRUB scripts search the user's computer and build a boot menu based on what operating systems the scripts find. To reflect the latest system boot options, the boot menu is rebuilt automatically when the kernel is updated or a new kernel is added. GRUB uses a series of scripts, located in the /etc/grub.d/ directory, to build the menu. The scripts include the following files: 00_header , which loads GRUB settings from the /etc/default/grub file. 01_users , which reads the root password from the user.cfg file. 10_linux , which locates kernels in the default partition of Red Hat Enterprise Linux. 30_os-prober , which builds entries for operating systems found on other partitions. 40_custom , a template used to create additional menu entries. GRUB reads scripts from the /etc/grub.d/ directory in alphabetical order and therefore you can rename them to change the boot order of specific menu entries. 8.2. Hiding the list of bootable kernels You can prevent GRUB from displaying the list of bootable kernels when the system starts up. Procedure Set the GRUB_TIMEOUT_STYLE option in the /etc/default/grub file as follows: Rebuild the grub.cfg file for the changes to take effect. On BIOS-based machines, enter: On UEFI-based machines, enter: Press the Esc key to display the list of bootable kernels when booting. Important Do not set GRUB_TIMEOUT to 0 in the /etc/default/grub file to hide the list of bootable kernels. With such a setting, the system always boots immediately on the default menu entry, and if the default kernel fails to boot, it is not possible to boot any kernel. 8.3. Changing the default boot entry with the GRUB configuration file You can specify the default kernel package type, and change the default boot entry. Procedure Specify which operating system or kernel must be loaded by default by passing its index to the grub2-set-default command, for example: GRUB supports using a numeric value as the key for the saved_entry directive in /boot/grub2/grubenv to change the default order in which the operating systems are loaded. Note Index counting starts with zero. Therefore, GRUB loads the second entry. With the installed kernel, the index value will be overwritten. Note You can also use grubby to find indices for kernels. For more information, see Viewing the GRUB Menu Entry for a Kernel . Optional: Force the system to always use a particular menu entry: List the available menu entries: Use the menu entry name or the number of the position of a menu entry in the list as the key to the GRUB_DEFAULT directive in the /etc/default/grub file. For example: Rebuild the grub.cfg file for the changes to take effect. On BIOS-based machines, enter: On UEFI-based machines: | [
"GRUB_TIMEOUT_STYLE=hidden",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg",
"grubby --set-default-index=1 The default is /boot/loader/entries/d5151aa93c444ac89e78347a1504d6c6-4.18.0-348.el8.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-348.el8.x86_64",
"grubby --info=ALL",
"GRUB_DEFAULT=example-gnu-linux",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/assembly_building-a-customized-boot-menu_managing-monitoring-and-updating-the-kernel |
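After changing the default entry, the saved value can be checked without rebooting; a brief sketch:
# Show the boot loader environment, including the saved_entry value used by GRUB
grub2-editenv list
# Show the index and path of the kernel that grubby considers the default
grubby --default-index
grubby --default-kernel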
function::task_cpu | function::task_cpu Name function::task_cpu - The scheduled cpu of the task Synopsis Arguments task task_struct pointer Description This function returns the scheduled cpu for the given task. | [
"task_cpu:long(task:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-cpu |
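An illustrative one-liner that reports the CPU of the current task once per second; the timer probe is used only for demonstration (run as root, stop with Ctrl+C):
# task_current() returns the current task_struct pointer; task_cpu() maps it to a CPU number
stap -e 'probe timer.s(1) { printf("%s is on cpu %d\n", execname(), task_cpu(task_current())) }'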
Managing high availability services | Managing high availability services Red Hat OpenStack Platform 17.1 Plan, deploy, and manage high availability in Red Hat OpenStack Platform OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_high_availability_services/index |
Chapter 1. Access control | Chapter 1. Access control Access control might need to manually be created and managed. You must configure authentication service requirements for Red Hat Advanced Cluster Management for Kubernetes to onboard workloads to Identity and Access Management (IAM). For more information, see Understanding authentication in the OpenShift Container Platform documentation. Role-based access control and authentication identifies the user associated roles and cluster credentials. See the following documentation for information about access and credentials. Required access: Cluster administrator Role-based access control Implementing role-based access control Bringing your own observability Certificate Authority (CA) certificates 1.1. Role-based access control Red Hat Advanced Cluster Management for Kubernetes supports role-based access control (RBAC). Your role determines the actions that you can perform. RBAC is based on the authorization mechanisms in Kubernetes, similar to Red Hat OpenShift Container Platform. For more information about RBAC, see the OpenShift RBAC overview in the OpenShift Container Platform documentation . Note: Action buttons are disabled from the console if the user-role access is impermissible. 1.1.1. Overview of roles Some product resources are cluster-wide and some are namespace-scoped. You must apply cluster role bindings and namespace role bindings to your users for consistent access controls. View the table list of the following role definitions that are supported in Red Hat Advanced Cluster Management for Kubernetes: Table 1.1. Role definition table Role Definition cluster-admin This is an OpenShift Container Platform default role. A user with cluster binding to the cluster-admin role is an OpenShift Container Platform super user, who has all access. open-cluster-management:cluster-manager-admin A user with cluster binding to the open-cluster-management:cluster-manager-admin role is a Red Hat Advanced Cluster Management for Kubernetes super user, who has all access. This role allows the user to create a ManagedCluster resource. open-cluster-management:admin:<managed_cluster_name> A user with cluster binding to the open-cluster-management:admin:<managed_cluster_name> role has administrator access to the ManagedCluster resource named, <managed_cluster_name> . When a user has a managed cluster, this role is automatically created. open-cluster-management:view:<managed_cluster_name> A user with cluster binding to the open-cluster-management:view:<managed_cluster_name> role has view access to the ManagedCluster resource named, <managed_cluster_name> . open-cluster-management:managedclusterset:admin:<managed_clusterset_name> A user with cluster binding to the open-cluster-management:managedclusterset:admin:<managed_clusterset_name> role has administrator access to ManagedCluster resource named <managed_clusterset_name> . The user also has administrator access to managedcluster.cluster.open-cluster-management.io , clusterclaim.hive.openshift.io , clusterdeployment.hive.openshift.io , and clusterpool.hive.openshift.io resources, which has the managed cluster set label: cluster.open-cluster-management.io/clusterset=<managed_clusterset_name> . A role binding is automatically generated when you are using a cluster set. See Creating a ManagedClusterSet to learn how to manage the resource. 
open-cluster-management:managedclusterset:view:<managed_clusterset_name> A user with cluster binding to the open-cluster-management:managedclusterset:view:<managed_clusterset_name> role has view access to the ManagedCluster resource named, <managed_clusterset_name> . The user also has view access to managedcluster.cluster.open-cluster-management.io , clusterclaim.hive.openshift.io , clusterdeployment.hive.openshift.io , and clusterpool.hive.openshift.io resources, which have the managed cluster set label: cluster.open-cluster-management.io/clusterset=<managed_clusterset_name> . For more details on how to manage managed cluster set resources, see Creating a ManagedClusterSet . open-cluster-management:subscription-admin A user with the open-cluster-management:subscription-admin role can create Git subscriptions that deploy resources to multiple namespaces. The resources are specified in Kubernetes resource YAML files in the subscribed Git repository. Note: When a non-subscription-admin user creates a subscription, all resources are deployed into the subscription namespace regardless of specified namespaces in the resources. For more information, see the Application lifecycle RBAC section. admin, edit, view Admin, edit, and view are OpenShift Container Platform default roles. A user with a namespace-scoped binding to these roles has access to open-cluster-management resources in a specific namespace, while cluster-wide binding to the same roles gives access to all of the open-cluster-management resources cluster-wide. open-cluster-management:managedclusterset:bind:<managed_clusterset_name> A user with the open-cluster-management:managedclusterset:bind:<managed_clusterset_name> role has view access to the managed cluster resource called <managed_clusterset_name> . The user can bind <managed_clusterset_name> to a namespace. The user also has view access to managedcluster.cluster.open-cluster-management.io , clusterclaim.hive.openshift.io , clusterdeployment.hive.openshift.io , and clusterpool.hive.openshift.io resources, which have the following managed cluster set label: cluster.open-cluster-management.io/clusterset=<managed_clusterset_name> . See Creating a ManagedClusterSet to learn how to manage the resource. Important: Any user can create projects from OpenShift Container Platform, which gives administrator role permissions for the namespace. If a user does not have role access to a cluster, the cluster name is not displayed. The cluster name might be displayed with the following symbol: - . See Implementing role-based access control for more details. 1.2. Implementing role-based access control Red Hat Advanced Cluster Management for Kubernetes RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions. The multicluster engine operator is a prerequisite and provides the cluster lifecycle function of Red Hat Advanced Cluster Management. To manage RBAC for clusters with the multicluster engine operator, use the RBAC guidance from the cluster lifecycle multicluster engine for Kubernetes operator Role-based access control documentation. View the following sections for more information on RBAC for specific lifecycles for Red Hat Advanced Cluster Management: Application lifecycle RBAC Console and API RBAC table for application lifecycle Governance lifecycle RBAC Console and API RBAC table for governance lifecycle Observability RBAC Console and API RBAC table for observability lifecycle 1.2.1.
Application lifecycle RBAC When you create an application, the subscription namespace is created and the configuration map is created in the subscription namespace. You must also have access to the channel namespace. When you want to apply a subscription, you must be a subscription administrator. For more information on managing applications, see Creating an allow and deny list as subscription administrator . View the following application lifecycle RBAC operations: Create and administer applications on all managed clusters with a user named username . You must create a cluster role binding and bind it to username . Run the following command: This role is a super user, which has access to all resources and actions. You can create the namespace for the application and all application resources in the namespace with this role. Create applications that deploy resources to multiple namespaces. You must create a cluster role binding to the open-cluster-management:subscription-admin cluster role, and bind it to a user named username . Run the following command: Create and administer applications in the cluster-name managed cluster, with the username user. You must create a cluster role binding to the open-cluster-management:admin:<cluster-name> cluster role and bind it to username by entering the following command: This role has read and write access to all application resources on the managed cluster, cluster-name . Repeat this if access for other managed clusters is required. Create a namespace role binding to the application namespace using the admin role and bind it to username by entering the following command: This role has read and write access to all application resources in the application namespace. Repeat this if access for other applications is required or if the application deploys to multiple namespaces. You can create applications that deploy resources to multiple namespaces. Create a cluster role binding to the open-cluster-management:subscription-admin cluster role and bind it to username by entering the following command: To view an application on a managed cluster named cluster-name with the user named username , create a cluster role binding to the open-cluster-management:view:<cluster-name> cluster role and bind it to username . Enter the following command: This role has read access to all application resources on the managed cluster, cluster-name . Repeat this if access for other managed clusters is required. Create a namespace role binding to the application namespace using the view role and bind it to username . Enter the following command: This role has read access to all application resources in the application namespace. Repeat this if access for other applications is required. 1.2.1.1. Console and API RBAC table for application lifecycle View the following console and API RBAC tables for Application lifecycle: Table 1.2. Console RBAC table for application lifecycle Resource Admin Edit View Application create, read, update, delete create, read, update, delete read Channel create, read, update, delete create, read, update, delete read Subscription create, read, update, delete create, read, update, delete read Table 1.3.
API RBAC table for application lifecycle API Admin Edit View applications.app.k8s.io create, read, update, delete create, read, update, delete read channels.apps.open-cluster-management.io create, read, update, delete create, read, update, delete read deployables.apps.open-cluster-management.io create, read, update, delete create, read, update, delete read helmreleases.apps.open-cluster-management.io create, read, update, delete create, read, update, delete read placements.apps.open-cluster-management.io create, read, update, delete create, read, update, delete read placementrules.apps.open-cluster-management.io (Deprecated) create, read, update, delete create, read, update, delete read subscriptions.apps.open-cluster-management.io create, read, update, delete create, read, update, delete read configmaps create, read, update, delete create, read, update, delete read secrets create, read, update, delete create, read, update, delete read namespaces create, read, update, delete create, read, update, delete read 1.2.2. Governance lifecycle RBAC To perform governance lifecycle operations, you need access to the namespace where the policy is created, along with access to the managed cluster where the policy is applied. The managed cluster must also be part of a ManagedClusterSet that is bound to the namespace. To continue to learn about ManagedClusterSet , see ManagedClusterSets Introduction . After you select a namespace, such as rhacm-policies , with one or more bound ManagedClusterSets , and after you have access to create Placement objects in the namespace, view the following operations: To create a ClusterRole named rhacm-edit-policy with Policy , PlacementBinding , and PolicyAutomation edit access, run the following command: To create a policy in the rhacm-policies namespace, create a namespace RoleBinding , such as rhacm-edit-policy , to the rhacm-policies namespace using the ClusterRole created previously. Run the following command: To view policy status of a managed cluster, you need permission to view policies in the managed cluster namespace on the hub cluster. If you do not have view access, such as through the OpenShift view ClusterRole , create a ClusterRole , such as rhacm-view-policy , with view access to policies with the following command: To bind the new ClusterRole to the managed cluster namespace, run the following command to create a namespace RoleBinding : 1.2.2.1. Console and API RBAC table for governance lifecycle View the following console and API RBAC tables for governance lifecycle: Table 1.4. Console RBAC table for governance lifecycle Resource Admin Edit View Policies create, read, update, delete read, update read PlacementBindings create, read, update, delete read, update read Placements create, read, update, delete read, update read PlacementRules (deprecated) create, read, update, delete read, update read PolicyAutomations create, read, update, delete read, update read Table 1.5. API RBAC table for governance lifecycle API Admin Edit View policies.policy.open-cluster-management.io create, read, update, delete read, update read placementbindings.policy.open-cluster-management.io create, read, update, delete read, update read policyautomations.policy.open-cluster-management.io create, read, update, delete read, update read Continue to learn about securing your cluster, see Security overview . 1.2.3. Observability RBAC To view the observability metrics for a managed cluster, you must have view access to that managed cluster on the hub cluster. 
View the following list of observability features: Access managed cluster metrics. Users are denied access to managed cluster metrics, if they are not assigned to the view role for the managed cluster on the hub cluster. Run the following command to verify if a user has the authority to create a managedClusterView role in the managed cluster namespace: As a cluster administrator, create a managedClusterView role in the managed cluster namespace. Run the following command: Then apply and bind the role to a user by creating a role bind. Run the following command: Search for resources. To verify if a user has access to resource types, use the following command: Note: <resource-type> must be plural. To view observability data in Grafana, you must have a RoleBinding resource in the same namespace of the managed cluster. View the following RoleBinding example: kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: <replace-with-name-of-rolebinding> namespace: <replace-with-name-of-managedcluster-namespace> subjects: - kind: <replace with User|Group|ServiceAccount> apiGroup: rbac.authorization.k8s.io name: <replace with name of User|Group|ServiceAccount> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view See Role binding policy for more information. See Customizing observability to configure observability. 1.2.3.1. Console and API RBAC table for observability lifecycle To manage components of observability, view the following API RBAC table: Table 1.6. API RBAC table for observability API Admin Edit View multiclusterobservabilities.observability.open-cluster-management.io create, read, update, and delete read, update read searchcustomizations.search.open-cluster-management.io create, get, list, watch, update, delete, patch - - policyreports.wgpolicyk8s.io get, list, watch get, list, watch get, list, watch 1.3. Bringing your own observability Certificate Authority (CA) certificates When you install Red Hat Advanced Cluster Management for Kubernetes, only Certificate Authority (CA) certificates for observability are provided by default. If you do not want to use the default observability CA certificates generated by Red Hat Advanced Cluster Management, you can choose to bring your own observability CA certificates before you enable observability. 1.3.1. Generating CA certificates by using OpenSSL commands Observability requires two CA certificates, one for the server-side and the other is for the client-side. Generate your CA RSA private keys with the following commands: openssl genrsa -out serverCAKey.pem 2048 openssl genrsa -out clientCAKey.pem 2048 Generate the self-signed CA certificates using the private keys. Run the following commands: openssl req -x509 -sha256 -new -nodes -key serverCAKey.pem -days 1825 -out serverCACert.pem openssl req -x509 -sha256 -new -nodes -key clientCAKey.pem -days 1825 -out clientCACert.pem 1.3.2. Creating the secrets associated with your own observability CA certificates Complete the following steps to create the secrets: Create the observability-server-ca-certs secret by using your certificate and private key. Run the following command: oc -n open-cluster-management-observability create secret tls observability-server-ca-certs --cert ./serverCACert.pem --key ./serverCAKey.pem Create the observability-client-ca-certs secret by using your certificate and private key. 
Run the following command: oc -n open-cluster-management-observability create secret tls observability-client-ca-certs --cert ./clientCACert.pem --key ./clientCAKey.pem 1.3.3. Additional resources See Customizing route certification . See Customizing certificates for accessing the object store . | [
"create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:cluster-manager-admin --user=<username>",
"create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:subscription-admin --user=<username>",
"create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:admin:<cluster-name> --user=<username>",
"create rolebinding <role-binding-name> -n <application-namespace> --clusterrole=admin --user=<username>",
"create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:subscription-admin --user=<username>",
"create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:view:<cluster-name> --user=<username>",
"create rolebinding <role-binding-name> -n <application-namespace> --clusterrole=view --user=<username>",
"create clusterrole rhacm-edit-policy --resource=policies.policy.open-cluster-management.io,placementbindings.policy.open-cluster-management.io,policyautomations.policy.open-cluster-management.io,policysets.policy.open-cluster-management.io --verb=create,delete,get,list,patch,update,watch",
"create rolebinding rhacm-edit-policy -n rhacm-policies --clusterrole=rhacm-edit-policy --user=<username>",
"create clusterrole rhacm-view-policy --resource=policies.policy.open-cluster-management.io --verb=get,list,watch",
"create rolebinding rhacm-view-policy -n <cluster name> --clusterrole=rhacm-view-policy --user=<username>",
"auth can-i create ManagedClusterView -n <managedClusterName> --as=<user>",
"create role create-managedclusterview --verb=create --resource=managedclusterviews -n <managedClusterName>",
"create rolebinding user-create-managedclusterview-binding --role=create-managedclusterview --user=<user> -n <managedClusterName>",
"auth can-i list <resource-type> -n <namespace> --as=<rbac-user>",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: <replace-with-name-of-rolebinding> namespace: <replace-with-name-of-managedcluster-namespace> subjects: - kind: <replace with User|Group|ServiceAccount> apiGroup: rbac.authorization.k8s.io name: <replace with name of User|Group|ServiceAccount> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view",
"openssl genrsa -out serverCAKey.pem 2048 openssl genrsa -out clientCAKey.pem 2048",
"openssl req -x509 -sha256 -new -nodes -key serverCAKey.pem -days 1825 -out serverCACert.pem openssl req -x509 -sha256 -new -nodes -key clientCAKey.pem -days 1825 -out clientCACert.pem",
"-n open-cluster-management-observability create secret tls observability-server-ca-certs --cert ./serverCACert.pem --key ./serverCAKey.pem",
"-n open-cluster-management-observability create secret tls observability-client-ca-certs --cert ./clientCACert.pem --key ./clientCAKey.pem"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/access_control/access-control |
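After creating your own observability CA certificates and secrets as described above, a quick sanity check can be performed with openssl and oc; this sketch assumes the file and secret names used in the procedure:
# Inspect the subject and validity period of the self-signed server CA certificate
openssl x509 -in serverCACert.pem -noout -subject -dates
# Confirm that both secrets exist in the observability namespace
oc -n open-cluster-management-observability get secret observability-server-ca-certs observability-client-ca-certs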
7.11. Creating a Cloned Virtual Machine Based on a Template | 7.11. Creating a Cloned Virtual Machine Based on a Template Cloned virtual machines are based on templates and inherit the settings of the template. A cloned virtual machine does not depend on the template on which it was based after it has been created. This means the template can be deleted if no other dependencies exist. Note If you clone a virtual machine from a template, the name of the template on which that virtual machine was based is displayed in the General tab of the Edit Virtual Machine window for that virtual machine. If you change the name of that template, the name of the template in the General tab will also be updated. However, if you delete the template from the Manager, the original name of that template will be displayed instead. Cloning a Virtual Machine Based on a Template Click Compute Virtual Machines . Click New . Select the Cluster on which the virtual machine will run. Select a template from the Based on Template drop-down menu. Enter a Name , Description and any Comments . You can accept the default values inherited from the template in the rest of the fields, or change them if required. Click the Resource Allocation tab. Select the Clone radio button in the Storage Allocation area. Select the disk format from the Format drop-down list. This affects the speed of the clone operation and the amount of disk space the new virtual machine initially requires. QCOW2 (Default) Faster clone operation Optimized use of storage capacity Disk space allocated only as required Raw Slower clone operation Optimized virtual machine read and write operations All disk space requested in the template is allocated at the time of the clone operation Use the Target drop-down menu to select the storage domain on which the virtual machine's virtual disk will be stored. Click OK . Note Cloning a virtual machine may take some time. A new copy of the template's disk must be created. During this time, the virtual machine's status is first Image Locked , then Down . The virtual machine is created and displayed in the Virtual Machines tab. You can now assign users to it, and can begin using it when the clone operation is complete. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/Creating_a_cloned_virtual_machine_based_on_a_template |
4.4. Configuring the Samba Cluster Resources | 4.4. Configuring the Samba Cluster Resources This section provides the procedure for configuring the Samba cluster resources for this use case. The following procedure creates a snapshot of the cluster's cib file named samba.cib and adds the resources to that test file rather than configuring them directly on the running cluster. After the resources and constraints are configured, the procedure pushes the contents of samba.cib to the running cluster configuration file. On one node of the cluster, run the following procedure. Create a snapshot of the cib file, which is the cluster configuration file. Create the CTDB resource to be used by Samba. Create this resource as a cloned resource so that it will run on both cluster nodes. Create the cloned Samba server. Create the colocation and order constraints for the cluster resources. The startup order is Filesystem resource, CTDB resource, then Samba resource. Push the content of the cib snapshot to the cluster. Check the status of the cluster to verify that the resource is running. Note that in Red Hat Enterprise Linux 7.4 it can take a couple of minutes for CTDB to start Samba, export the shares, and stabilize. If you check the cluster status before this process has completed, you may see a message that the CTDB status call failed. Once this process has completed, you can clear this message from the display by running the pcs resource cleanup ctdb-clone command. Note If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration. This starts the service outside of the cluster's control and knowledge. If the configured resources are running again, run pcs resource cleanup resource to make the cluster aware of the updates. For information on the pcs resource debug-start command, see the Enabling, Disabling, and Banning Cluster Resources section in the High Availability Add-On Reference manual. | [
"pcs cluster cib samba.cib",
"pcs -f samba.cib resource create ctdb ocf:heartbeat:CTDB ctdb_recovery_lock=\"/mnt/gfs2share/ctdb/ctdb.lock\" ctdb_dbdir=/var/ctdb ctdb_socket=/tmp/ctdb.socket ctdb_logfile=/var/log/ctdb.log op monitor interval=10 timeout=30 op start timeout=90 op stop timeout=100 --clone",
"pcs -f samba.cib resource create samba systemd:smb --clone",
"pcs -f samba.cib constraint order fs-clone then ctdb-clone Adding fs-clone ctdb-clone (kind: Mandatory) (Options: first-action=start then-action=start) pcs -f samba.cib constraint order ctdb-clone then samba-clone Adding ctdb-clone samba-clone (kind: Mandatory) (Options: first-action=start then-action=start) pcs -f samba.cib constraint colocation add ctdb-clone with fs-clone pcs -f samba.cib constraint colocation add samba-clone with ctdb-clone",
"pcs cluster cib-push samba.cib CIB updated",
"pcs status Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 1.1.16-12.el7_4.2-94ff4df) - partition with quorum Last updated: Thu Oct 19 18:17:07 2017 Last change: Thu Oct 19 18:16:50 2017 by hacluster via crmd on z1.example.com 2 nodes configured 11 resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Clone Set: dlm-clone [dlm] Started: [ z1.example.com z2.example.com ] Clone Set: clvmd-clone [clvmd] Started: [ z1.example.com z2.example.com ] Clone Set: fs-clone [fs] Started: [ z1.example.com z2.example.com ] Clone Set: ctdb-clone [ctdb] Started: [ z1.example.com z2.example.com ] Clone Set: samba-clone [samba] Started: [ z1.example.com z2.example.com ]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-resourcegroupcreatesamba-HAAA |
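Once the clone sets report as Started, the exported share can be checked from a client; a minimal sketch, where the node name comes from this example configuration and anonymous listing is assumed to be permitted:
# List the shares exported by one of the cluster nodes (use -U <user> instead of -N for authenticated access)
smbclient -L //z1.example.com -N
# On a cluster node, check CTDB's own view of the cluster state
ctdb status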
24.6. Using Certificate Profiles and ACLs to Issue User Certificates with the IdM CAs | 24.6. Using Certificate Profiles and ACLs to Issue User Certificates with the IdM CAs Users can request certificates for themselves when permitted by the Certificate Authority access control lists (CA ACLs). The following procedures use certificate profiles and CA ACLs, which are described separately in Section 24.4, "Certificate Profiles" and Section 24.5, "Certificate Authority ACL Rules" . For more details about using certificate profiles and CA ACLs, see these sections. Issuing Certificates to Users from the Command Line Create or import a new custom certificate profile for handling requests for user certificates. For example: Add a new Certificate Authority (CA) ACL that will be used to permit requesting certificates for user entries. For example: Add the custom certificate profile to the CA ACL. Generate a certificate request for the user. For example, using OpenSSL: Run the ipa cert-request command to have the IdM CA issue a new certificate for the user. Optionally pass the --ca sub-CA_name option to the command to request the certificate from a sub-CA instead of the root CA ipa . To make sure the newly-issued certificate is assigned to the user, you can use the ipa user-show command: Issuing Certificates to Users in the Web UI Create or import a new custom certificate profile for handling requests for user certificates. Importing profiles is only possible from the command line, for example: For information about certificate profiles, see Section 24.4, "Certificate Profiles" . In the web UI, under the Authentication tab, open the CA ACLs section. Figure 24.11. CA ACL Rules Management in the Web UI Click Add at the top of the list of Certificate Authority (CA) ACLs to add a new CA ACL that permits requesting certificates for user entries. In the Add CA ACL window that opens, fill in the required information about the new CA ACL. Figure 24.12. Adding a New CA ACL Then, click Add and Edit to go directly to the CA ACL configuration page. In the CA ACL configuration page, scroll to the Profiles section and click Add at the top of the profiles list. Figure 24.13. Adding a Certificate Profile to the CA ACL Add the custom certificate profile to the CA ACL by selecting the profile and moving it to the Prospective column. Figure 24.14. Selecting a Certificate Profile Then, click Add . Scroll to the Permitted to have certificates issued section to associate the CA ACL with users or user groups. You can either add users or groups using the Add buttons, or select the Anyone option to associate the CA ACL with all users. Figure 24.15. Adding Users to the CA ACL In the Permitted to have certificates issued section, you can associate the CA ACL with one or more CAs. You can either add CAs using the Add button, or select the Any CA option to associate the CA ACL with all CAs. Figure 24.16. Adding CAs to the CA ACL At the top of the CA ACL configuration page, click Save to confirm the changes to the CA ACL. Request a new certificate for the user. Under the Identity tab and the Users subtab, choose the user for whom the certificate will be requested. Click on the user's user name to open the user entry configuration page. Click Actions at the top of the user configuration page, and then click New Certificate . Figure 24.17. Requesting a Certificate for a User Fill in the required information. Figure 24.18. Issuing a Certificate for a User Then, click Issue . 
After this, the newly issued certificate is visible in the user configuration page. | [
"ipa certprofile-import certificate_profile --file= certificate_profile.cfg --store=True",
"ipa caacl-add users_certificate_profile --usercat=all",
"ipa caacl-add-profile users_certificate_profile --certprofiles= certificate_profile",
"openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout private.key -out cert.csr -subj '/CN= user '",
"ipa cert-request cert.csr --principal= user --profile-id= certificate_profile",
"ipa user-show user User login: user Certificate: MIICfzCCAWcCAQA",
"ipa certprofile-import certificate_profile --file= certificate_profile.txt --store=True"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/issue-user-certificates |
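The resulting configuration and request can also be reviewed from the command line; a short sketch using the names from the example above:
# Display the CA ACL and the certificate profile attached to it
ipa caacl-show users_certificate_profile
# Check that the issued certificate is attached to the user entry
ipa user-show user --all | grep -i certificate
# Inspect the subject of the certificate signing request that was generated with OpenSSL
openssl req -in cert.csr -noout -subject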
Appendix A. S3 common request headers | Appendix A. S3 common request headers The following table lists the valid common request headers and their descriptions. Table A.1. Request Headers Request Header Description CONTENT_LENGTH Length of the request body. DATE Request time and date (in UTC). HOST The name of the host server. AUTHORIZATION Authorization token. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/developer_guide/s3-common-request-headers_dev |
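For illustration only, the common headers can be set explicitly on a request sent with curl to a Ceph Object Gateway endpoint; the host name, bucket, and credentials below are placeholders, and a real request needs a valid AWS-style signature in the Authorization header:
# Send a bucket listing request with the common request headers set explicitly
curl -v -X GET "http://rgw.example.com/mybucket/" \
  -H "Host: rgw.example.com" \
  -H "Date: $(date -u '+%a, %d %b %Y %T GMT')" \
  -H "Authorization: AWS ACCESS_KEY:SIGNATURE"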
7.128. libvirt | 7.128. libvirt 7.128.1. RHBA-2013:0664 - libvirt bug fix and enhancement update Updated libvirt packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remote management of virtualized systems. Bug Fixes BZ# 908836 The AMD family 15h processors CPU architecture consists of "modules", which are represented both as separate cores and separate threads. Management applications needed to choose between one of the approaches, and libvirt did not provide enough information to do this. Management applications were not able to represent the modules in an AMD family 15h processors core according to their needs. The capabilities XML output now contains more information about the processor topology, so that the management applications can extract the information they need. BZ# 913624 When auto-port and port were not specified, but the tlsPort attribute was set to "-1", the tlsPort parameter specified in the QEMU command line was set to "1" instead of a valid port. Consequently, QEMU failed, because it was unable to bind a socket on the port. This update replaces the current QEMU driver code for managing port reservations with the new virPortAllocator APIs, and QEMU is able to bind a socket on the port. BZ# 915344 Previously, libvirtd was unable to execute an s3/s4 operation for a Microsoft Windows guest which ran the guest agent service. Consequently, this resulted in a "domain s4 fail" error message, due to the domain being destroyed. With this update, the guest is destroyed successfully and the libvirtd service no longer crashes. BZ# 915347 When a VM was saved into a compressed file and decompression of that file failed while libvirt was trying to resume the VM, libvirt removed the VM from the list of running VMs, but did not remove the corresponding QEMU process. With this update, the QEMU process is killed in such cases. Moreover, non-fatal decompression errors are now ignored and a VM can be successfully resumed if such an error occurs. BZ# 915348 Python bindings for libvirt contained incorrect implementation of getDomain() and getConnect() methods in virDomainSnapshot class. Consequently, the Python client terminated unexpectedly with a segmentation fault. Python bindings now provide proper domain() and connect() accessors that fetch Python objects stored internally within virDomainSnapshot instance and crashes no longer occur. BZ# 915349 Previously, libvirt added a cache of storage file backing chains, rather than rediscovering the backing chain details on every operation. This cache was then used to decide which files to label for sVirt, but when libvirt switched over to use the cache, the code only populated when cgroups were in use. On setups that did not use cgroups, due to the lack of backing chain cache information, sVirt was unable to properly label backing chain files, which caused a regression observed by guests being prevented from running. Now, populating the cache was moved earlier, to be independent of cgroups, the cache results in more efficient sVirt operations, and now works whether or not cgroups are in effect. BZ# 915353 Occasionally, when users ran multiple virsh create/destroy loops, a race condition could have occurred and libvirtd terminated unexpectedly with a segmentation fault. 
False error messages claiming that the domain had already been destroyed were also returned to the caller. With this update, the outlined script is run and completes without libvirtd crashing. BZ# 915354 Previously, libvirt followed relative backing chains differently than QEMU. This resulted in missing sVirt permissions when libvirt could not follow the chain. With this update, relative backing files are now treated identically in libvirt and QEMU, and VDSM use of relative backing files functions properly. BZ#915363 Previously, libvirt reported raw QEMU errors when snapshots failed, and the error message provided was confusing. With this update, libvirt now gives a clear error message when QEMU is not capable of snapshots, which enables more informative handling of the situation. BZ#917063 Previously, libvirt was not tolerant of missing unpriv_sgio support in the running kernel even though it was not necessary. After upgrading the host system to Red Hat Enterprise Linux 6.4, users were unable to start domains using shareable block disk devices unless they rebooted the host into the new kernel. The check for unpriv_sgio support is only performed when it is really needed, and libvirt is now able to start all domains that do not strictly require unpriv_sgio support regardless of host kernel support for it. BZ#918754 When asked to create a logical volume with zero allocation, libvirt ran lvcreate to create a volume with no extents, which is not permitted. Creation of logical volumes with zero allocation failed and libvirt returned an error message that did not mention the real error. Now, rather than asking for no extents, libvirt tries to create the volume with a minimal number of extents. The code is also fixed to provide the real error message should the volume creation process fail. Logical volumes with zero allocation can now be successfully created using libvirt. BZ# 919504 Previously, when users started the guest with a sharable block CD-Rom, libvirtd failed unexpectedly due to accessing memory that was already freed. This update addresses the aforementioned issue, and libvirtd no longer crashes in the described scenario. BZ#922095 Various memory leaks in libvirtd were discovered when users ran Coverity and Valgrind leak detection tools. This update addresses these issues, and libvirtd no longer leaks memory in the described scenario. Enhancement BZ# 915352 This update adds support for ram_size settings to the QXL device. When using multiple heads in one PCI device, the device needed more RAM assigned. Now, the memory of the RAM bar size is set larger than the default size and libvirt can drive multi-head QXL. Users of libvirt are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. After installing the updated packages, libvirtd will be restarted automatically. 7.128.2. RHSA-2013:0276 - Moderate: libvirt bug fix, and enhancement update Updated libvirt packages that fix one security issue, multiple bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The libvirt packages provide the libvirt library which is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems.
In addition, libvirt provides tools for remote management of virtualized systems. Note The libvirt packages have been upgraded to upstream version 0.10.2, which provides a number of bug fixes and enhancements over the version, such as support for Open vSwitch , a new API for detailed CPU statistics, improved support of LXC method including the sVirt technology, improvements of the virsh edit command, improved APIs for listing various objects and support for pinning and tuning emulator threads. (BZ# 836934 ) Security Fixes CVE-2012-3411 It was discovered that libvirt made certain invalid assumptions about dnsmasq's command line options when setting up DNS masquerading for virtual machines, resulting in dnsmasq incorrectly processing network packets from network interfaces that were intended to be prohibited. This update includes the changes necessary to call dnsmasq with a new command line option, which was introduced to dnsmasq via RHSA-2013:0277. In order for libvirt to be able to make use of the new command line option (--bind-dynamic), updated dnsmasq packages need to be installed. Refer to RHSA-2013:0277 for additional information. Bug Fixes BZ#794523 The libvirt library was issuing the PAUSED event before the QEMU processor emulator really paused. Consequently, a domain could be reported as paused before it was actually paused, which could confuse a management application using the libvirt library. With this update, the PAUSED event is started after QEMU is stopped on a monitor and the management application is no longer confused by libvirt . BZ# 797279 , BZ# 808980 , BZ# 869557 The fixed limit for the maximum size of an RPC message that could be sent between the libvirtd daemon and a client, such as the virsh utility, was 65536 bytes. However, this limit was not always sufficient and messages that were longer than that could be dropped, leaving a client unable to fetch important data. With this update, the buffer for incoming messages has been made dynamic and both sides, a client and libvirtd , now allocate as much memory as is needed for a given message, thus allowing to send much bigger messages. BZ# 807996 Previously, repeatedly migrating a guest between two machines while using the tunnelled migration could cause the libvirtd daemon to lock up unexpectedly. The bug in the code for locking remote drivers has been fixed and repeated tunnelled migrations of domains now work as expected. BZ#814664 Previously, multiple libvirt API calls were needed to determine the full list of guests on a host controlled by the libvirt library. Consequently, a race condition could occur when a guest changed its state between two calls that were needed to enumerate started and stopped guests. This behavior caused the guest to disappear from both of the lists, because the time of enumeration was not considered to be a part of the lists. This update adds a new API function allowing to gather the guest list in one call while the driver is locked. This guarantees that no guest changes its state before the list is gathered so that guests no longer disappear in the described scenario. BZ# 818467 Previously, libvirt did not report many useful error messages that were returned by external programs such as QEMU and only reported a command failure. Consequently, certain problems, whose cause or resolution could be trivial to discover by looking at the error output, were difficult to diagnose. 
With this update, if any external command run by libvirt exits with a failure, its standard error output is added to the system log as a libvirt error. As a result, problems are now easier to diagnose, because better information is available. BZ#823716 Closing a file descriptor multiple times could, under certain circumstances, lead to a failure to execute the qemu-kvm binary. As a consequence, a guest failed to start. A patch has been applied to address this issue, so that the guest now starts successfully. BZ#825095 Prior to this update, libvirt used an unsuitable detection procedure to detect NUMA and processor topology of a system. Consequently, topology of some advanced multi-processor systems was detected incorrectly and management applications could not utilize the full potential of the system. Now, the detection has been improved and the topology is properly recognized even on modern systems. BZ# 825820 Previously, the libvirt library had hooks for calling a user-written script when a guest was started or stopped, but had no hook to call a script for each guest when the libvirtd daemon itself was restarted. Consequently, certain custom setups that required extra operations not directly provided by libvirt could fail when libvirtd was restarted. For example, packet forwarding rules installed to redirect incoming connections to a particular guest could be overridden by libvirt 's " refresh " of its own iptables packet forwarding rules, breaking the connection forwarding that had been set up. This update improves libvirt with a new " reconnect " hook; the QEMU hook script is called with a type of " reconnect " for every active guest each time libvirtd is restarted. Users can now write scripts to recognize the " reconnect " event, and for example reload the user-supplied iptables forwarding rules when this event occurs. As a result, incoming connections continue to be forwarded correctly, even when libvirtd is restarted. BZ# 828729 On certain NUMA architectures, libvirt failed to process and expose the NUMA topology, sometimes leading to performance degradation. With this update, libvirt can parse and expose the NUMA topology on such machines and makes the correct CPU placement, thus avoiding performance degradation. BZ#831877 The virsh undefine command supports deleting volumes associated with a domain. When using this command, the volumes are passed as additional arguments and if the user adds any trailing string after the basic command, the string is interpreted as a volume to be deleted. Previously, the volumes were checked after the guest was deleted, which could lead to user's errors. With this update, the check of the volume arguments is performed before the deleting process so that errors can be reported sensibly. As a result, the command with an incorrect argument fails before it attempts to delete a guest and the host system stays in a sane state. BZ# 832081 Due to several bugs in the implementation of keep-alive messages that are used for the detection of broken connections or non-functional peers, these connections and peers could be incorrectly considered broken or non-functional and thus the keep-alive messages were disabled by default in Red Hat Enterprise Linux 6.3. The implementation of the keep-alive messages has been fixed and this feature is now enabled by default. BZ# 834927 Previously, a reversed condition in a check which is used during registering callbacks prevented multiple callbacks from being registered. 
This update applies a patch to fix this condition and multiple callbacks can be registered successfully now. BZ# 836135 The SPICE server needs certain time at the end of the migration process to transfer an internal state to a destination guest. Previously, the libvirt library could kill the source QEMU and the SPICE server before the internal state was transmitted. This behavior caused the destination client to be unresponsive. With this update, libvirt waits until the end of SPICE migration. As a result, the SPICE server no longer becomes unresponsive in this situation. BZ#837659 When using the sanlock daemon for locking resources used by a domain, if such a resource was read-only, the locking attempt failed. Consequently, it was impossible to start a domain with a CD-ROM drive. This bug has been fixed and sanlock can now be properly used with read-only devices. BZ# 839661 Previously, the libvirt library did not support the S4 (Suspend-to-Disk) event on QEMU domains. Consequently, management applications could not register whether a guest was suspended to disk or powered off. With this update, support for S4 event has been added and management applications can now request receiving S4 events. BZ# 842208 Due to an installation of the vdsm daemon, the libvirt library was reconfigured and under certain conditions, libvirt was searching for a non-existing option when used outside of vdsm . Consequently, using the virsh utility on such a machine caused the system to terminate with a segmentation fault. The underlying source code has been modified to fix this bug and users can now use virsh on machines configured by vdsm as expected. BZ# 844266 Previously, a condition in a check, which is used for checking if modification of a domain XML in a saved file was successful or not, was inverted. Consequently, the virsh utility reported that this check failed even if it was successful and vice versa. This update applies a patch to fix this bug and success and failure of this check are reported correctly now. BZ# 844408 Disk hot plug is a two-part action: the qemuMonitorAddDrive() call is followed by the qemuMonitorAddDevice() call. When the first part succeeded but the second one failed, libvirt failed to roll back the first part and the device remained in use even though the disk hot plug failed. With this update, the rollback for the drive addition is properly performed in the described scenario and disk hot plug now works as expected. BZ# 845448 Previously the SIGINT signal was not blocked when the virDomainGetBlockJobInfo() function was performed. Consequently, an attempt to abort a process initialized by a command with the --wait option specified using the CTRL+C shortcut did not work properly. This update applies a patch to block SIGINT during virDomainGetBlockJobInfo() and aborting processes using the CTRL+C shortcut now works as expected. BZ# 845635 Previously, an unspecified error with a meaningless error code was returned when a guest agent became unresponsive. Consequently, management applications could not recognize why the guest agent hung; whether the guest agent was not configured or was unusable. This update introduces a new VIR_ERR_AGENT_UNRESPONSIVE error code and fixes the error message. As a result, management applications now can recognize why the guest agent hangs. BZ# 846639 Due to a bug in the libvirt code, two mutually exclusive cases could occur. 
In the first case, a guest operating system could fail to detect that it was being suspended because the suspend routine is handled by the hypervisor. In the second case, the cooperation of the guest operating system was required, for example during synchronization of the time after the resume routine. Consequently, it was possible to successfully call the suspend routine on a domain with the pmsuspended status and libvirt reported success for an operation that in fact failed. This update adds an additional check to prevent libvirt from suspending a domain with the pmsuspended status. BZ# 851397 Due to recent changes in port allocation, SPICE ports and SPICE TLS ports were the same. Consequently, QEMU domains started with both options configured to use the same port and SPICE TLS ports could not allocate one port twice. With this update, the port allocation has been fixed and the QEMU domains now work as expected in this situation. BZ# 853567 A virtual guest can have a network interface that is connected to an SR-IOV (Single Root I/O Virtualization) device's virtual function ( VF ) using the macvtap driver in passthrough mode, and from there is connected to an 802.1Qbh -capable switch. Previously, when shutting down the guest, libvirt erroneously set the SR-IOV device's physical function ( PF ) offline rather than setting the VF offline. Here is an example of the type of interface that could be affected: Consequently, if the PF was being used by the host for its own network connectivity, the host networking would be adversely affected, possibly completely disabled, whenever the guest was shut down, or when the guest's network device was detached. The underlying source code has been modified to fix this bug and the PF associated with the VF used by the macvtap driver now continues to work in the described scenario. BZ# 856247 Red Hat Enterprise Linux 6.3 implemented the block copy feature before the upstream version of QEMU. Since then, several improvements were made to the upstream version of this feature. Consequently, versions of the libvirt library were unable to fully manage the block copy feature in the current release of QEMU. With this update, the block copy feature has been updated to upstream versions of QEMU and libvirt . As a result, libvirt is able to manage all versions of the block copy feature. BZ# 856864 Previously, libvirt put the default USB controller into the XML configuration file during the live migration to Red Hat Enterprise Linux 6.1 hosts. These hosts did not support USB controllers in the XML file. Consequently, live migration to these hosts failed. This update prevents libvirt from including the default USB controller in the XML configuration file during live migration and live migration works properly in the described scenario. BZ# 856950 When a QEMU process is being destroyed by libvirt , a clean-up operation frees some internal structures and locks. However, since users can destroy QEMU processes at the same time, libvirt holds the QEMU driver mutex to protect the list of domains and their states, among other things. Previously, a function tried to lock up the QEMU driver mutex when it was already locked, creating a deadlock. The code has been modified to always check if the mutex is free before attempting to lock it up, thus fixing this bug. BZ# 858204 When the host_uuid option was present in the libvirtd.conf file, the augeas libvirt lens was unable to parse the file.
This bug has been fixed and the augeas libvirt lens now parses libvirtd.conf as expected in the described scenario. BZ#862515 Previously, handling of duplicate MAC addresses differed between live attach or detach, and persistent attach or detach of network devices. Consequently, the persistent attach-interface of a device with a MAC address that matches an existing device could fail, even though the live attach-interface of such a device succeed. This behavior was inconsistent, and sometimes led to an incorrect device being detached from the guest. With this update, libvirt has been modified to allow duplicate MAC addresses in all cases and to check a unique PCI address in order to distinguish between multiple devices with the same MAC address. BZ# 863115 Previously, libvirt called the qemu-kvm -help command every time it started a guest to learn what features were available for use in QEMU. On a machine with a number of guests, this behavior caused noticeable delays in starting all of the guests. This update modifies libvirt to store information cache about QEMU until the QEMU time stamp is changed. As a result, libvirt is faster when starting a machine with various guests. BZ# 865670 Previously, the ESX 5.1 server was not fully tested. Consequently, connecting to ESX 5.1 caused a warning to be returned. The ESX 5.1 server has been properly tested and connecting to this server now works as expected. BZ# 866369 Under certain circumstances, the iohelper process failed to write data to disk while saving a domain and kernel did not report an out-of-space error ( ENOSPC ). With this update, libvirt calls the fdatasync() function in the described scenario to force the data to be written to disk or catch a write error. As a result, if a write error occurs, it is now properly caught and reported. BZ# 866388 Certain operations in libvirt can be done only when a domain is paused to prevent data corruption. However, if a resuming operation failed, the management application was not notified since no event was sent. This update introduces the VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR event and management applications can now keep closer track of domain states and act accordingly. BZ# 866999 When libvirt could not find a suitable CPU model for a host CPU, it failed to provide the CPU topology in host capabilities even though the topology was detected correctly. Consequently, applications that work with the host CPU topology but not with the CPU model could not see the topology in host capabilities. With this update, the host capabilities XML description contains the host CPU topology even if the host CPU model is unknown. BZ# 869096 Previously, libvirt supported the emulatorpin option to set the CPU affinity for a QEMU domain process. However, this behavior overrode the CPU affinity set by the vcpu placement="auto" setting when creating a cgroup hierarchy for the domain process. This CPU affinity is set with the advisory nodeset from the numad daemon. With this update, libvirt does not allow emulatorpin option to change the CPU affinity of a domain process if the vcpu placement setting is set to auto . As a result, the numad daemon is supported as expected. BZ# 873792 The libvirt library allows users to cancel an ongoing migration. Previously, if an attempt to cancel the migration was made in the migration preparation phase, QEMU missed the request and the migration was not canceled. 
With this update, the virDomainAbortJob() function sets a flag when a cancel request is made and this flag is checked before the main phase of the migration starts. As a result, a migration can now be properly canceled even in the preparation phase. BZ# 874050 Certain AMD processors contain modules which are reported by the kernel as both threads and cores. Previously, the libvirt processor topology detection code was not able to detect these modules. Consequently, libvirt reported double the actual number of processors. This bug has been fixed by reporting a topology that adds up to the total number of processors reported in the system. However, the actual topology has to be checked in the output of the virCapabilities() function. Additionally, documentation for the fallback output has been provided. Note Note that users should be instructed to use the capability output for topology detection purposes due to performance reasons. The NUMA topology has an important performance impact, but the physical topology can differ from it. BZ# 879780 Due to changes in the virStorageBackendLogicalCreateVol() function, the setting of the volume type was removed. Consequently, logical volumes were treated as files without any format and libvirt was unable to clone them. This update provides a patch to set the volume type and libvirt clones logical volumes as expected. BZ# 880919 When a saved file could not be opened, the virFileWrapperFdCatchError() function was called with a NULL argument. Consequently, the libvirtd daemon terminated unexpectedly due to a NULL pointer dereference. With this update, the virFileWrapperFdCatchError() function is called only when the file is open and instead of crashing, the daemon now reports an error. BZ# 884650 Whenever the virDomainGetXMLDesc() function was executed on an unresponsive domain, the call also became unresponsive. With this update, QEMU sends the BALLOON_CHANGE event when memory usage on a domain changes so that virDomainGetXMLDesc() no longer has to query an unresponsive domain. As a result, virDomainGetXMLDesc() calls no longer hang in the described scenario. Enhancements BZ#638512 This update adds support for external live snapshots of disks and RAM. BZ#693884 Previously, libvirt could apply packet filters, among others the anti-spoofing filter, to guest network connections using the nwfilter subsystem. However, these filter rules required manually entering the IP address of a guest into the guest configuration. This process was not effective when guests acquired their IP addresses via the DHCP protocol; the network needed a manually added static host entry for each guest and the guest's network interface definition needed that same IP address to be added to its filters. This enhancement improves libvirt to automatically learn IP and MAC addresses used by a guest network connection by monitoring the connection's DHCP and ARP traffic in order to set up host-based, guest-specific packet filtering rules that block traffic with incorrect IP or MAC addresses from the guests. With this new feature, nwfilter packet filters can be written to use automatically detected IP and MAC addresses, which simplifies the process of provisioning a guest. BZ# 724893 When the guest CPU definition is not supported due to the user's special configuration, an error message is returned. This enhancement improves this error message to contain flags that indicate precisely which options of the user's configuration are not supported.
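As a rough illustration of the nwfilter IP-learning enhancement described above (BZ#693884), a guest interface definition could reference the clean-traffic filter and let the IP address be learned from DHCP traffic instead of being entered manually. This is a minimal sketch; the bridge name br0 is a placeholder, and the available filter parameters should be verified against the nwfilter documentation shipped with the installed libvirt version:

    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
      <!-- clean-traffic bundles the anti-spoofing rules; CTRL_IP_LEARNING='dhcp' enables DHCP snooping -->
      <filterref filter='clean-traffic'>
        <parameter name='CTRL_IP_LEARNING' value='dhcp'/>
      </filterref>
    </interface>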
BZ# 771424 The Resident Set Size ( RSS ) limits control how much RAM a process can use. If a process leaks memory, the limits do not let the process influence other processes within the system. With this update, the RSS limits of a QEMU process are set by default according to how much RAM and video RAM is configured for the domain. BZ#772088 Previously, the libvirt library could create block snapshots, but could not clean them up. For a long-running guest, creating a large number of snapshots led to performance issues as the QEMU process emulator had to traverse longer chains of backing images. This enhancement improves the libvirt library to control the feature of the QEMU process emulator which is responsible for committing the changes in a snapshot image back into the backing file, and the backing chain is now kept at a more manageable length. BZ# 772290 Previously, the automatically allocated ports for the SPICE and VNC protocols started on the port number 5900. With this update, the starting port for SPICE and VNC is configurable by users. BZ# 789327 A QEMU guest could be suspended or resumed, and CD-ROM or floppy media could be ejected or inserted, directly inside the guest instead of through the libvirt API. This enhancement improves the libvirt library to support three new events of the QEMU Monitor Protocol ( QMP ): the SUSPEND , WAKEUP , and DEVICE_TRAY_MOVED events. These events let a management application know that the guest status or the tray status has been changed: when the SUSPEND event is emitted, the domain status is changed to pmsuspended ; when the WAKEUP event is emitted, the domain status is changed to running ; when the DEVICE_TRAY_MOVED event is emitted for a disk device, the current tray status for the disk is reflected to the libvirt XML file, so that management applications do not start the guest with the medium inserted while the medium has been previously ejected inside the guest. BZ#804749 The QEMU process emulator now supports TSC-Deadline timer mode for guests that are running on the Intel 64 architecture. This enhancement improves the libvirt library with this feature's flag to stay synchronized with QEMU. BZ# 805071 Previously, it was impossible to move a guest's network connection to a different network without stopping the guest. In order to change the connection, the network needed to be completely detached from the guest and then re-attached after changing the configuration to specify the new connection. With this update, it is now possible to change a guest's interface definition to specify a different type of interface, and to change the network or bridge name or both, all without stopping or pausing the guest or detaching its network device. From the point of view of the guest, the network remains available during the entire transition; if the move requires a new IP address, that can be handled by changing the configuration on the guest, or by requesting that it renew its DHCP lease. BZ# 805243 When connecting to the libvirt library, a certain form of authentication could be required and if so, interactive prompts were presented to the user. However, in certain cases, the interactive prompts cannot be used, for example when automating background processes. This enhancement improves libvirt to use the auth.conf file located in the $HOME/.libvirt/ directory to supply authentication credentials for connections. As a result, these credentials are pre-populated, thus avoiding the interactive prompts.
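As a sketch of the auth.conf mechanism described above (BZ#805243), a file placed at $HOME/.libvirt/auth.conf could look roughly like the following; the host name, user name, and password are placeholders, and the exact set of supported keys should be checked against the libvirt authentication documentation for the installed version:

    # Define a reusable set of credentials
    [credentials-defgrp]
    username=admin
    password=examplepassword

    # Use these credentials for connections to the libvirt daemon on host1.example.com
    [auth-libvirt-host1.example.com]
    credentials=defgrp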
BZ#805654 This enhancement improves libvirt to support connection of virtual guest network devices to Open vSwitch bridges, which provides a more fully-featured replacement for the standard Linux Host Bridge. Among other features, Open vSwitch bridges allow setting more connections to a single bridge, transparent VLAN tagging, and better management using the Open Flow standard. As a result, libvirt is now able to use an already existing Open vSwitch bridge, either directly in the interface definition of a guest, or as a bridge in a libvirt network. Management of the bridge must be handled outside the scope of libvirt , but guest network devices can be attached and detached, and VLAN tags and interface IDs can be assigned on a per-port basis. BZ# 818996 Certain users prefer to run minimal configurations for server systems and do not need graphical or USB support. This enhancement provides a new feature that allows users to disable USB and graphic controllers in guest machines. BZ# 820808 , BZ#826325 With this enhancement, the virsh dump command is now supported for domains with passthrough devices. As a result, these domains can be dumped with an additional --memory-only option. BZ#822064 The libvirt library has already supported pinning and limiting QEMU threads associated with virtual CPUs , but other threads, such as the I/O thread, could not be pinned and limited separately. This enhancement improves libvirt to support pinning and limiting of both CPU threads and other emulator threads separately. BZ#822589 This enhancement improves the libvirt library to be able to configure Discretionary Access Control ( DAC ) for each domain, so that certain domains can access different resources. BZ#822601 Previously, only the " system instance " of the libvirtd daemon, that is the one that is running as the root user, could set up a guest network connection using a tap device and host bridge. A " session instance " , that is the one that is running as a non-root user, was only able to use QEMU's limited " user mode " networking. User mode network connection have several limitations; for example, they do not allow incoming connections, or ping in either direction, and are slower than a tap-device based network connection. With this enhancement, libvirt has been updated to support QEMU's new SUID " network helper " , so that non-privileged libvirt users are able to create guest network connections using tap devices and host bridges. Users who require this behavior need to set the interface type to bridge in the virtual machine's configuration, libvirtd then automatically notices that it is running as a non-privileged user, and notifies QEMU to set up the network connection using its " network helper " . Note This feature is only supported when the interface type is bridge , and does not work with the network interface type even if the specified network uses a bridge device. BZ#822641 Previously, core dumps for domains with a large amount of memory were unnecessarily huge. With this update, a new dumpCore option has been added to control whether guest's memory should be included in a core dump. When this option is set to off , core dumps are reduced by the size of the guest's memory. BZ# 831099 This enhancement allows the libvirt library to set the World Wide Name ( WWN ), which provides stable device paths, for IDE and SCSI disks. BZ#836462 This enhancement adds the possibility to control the advertising of S3 (Suspend-to-RAM) and S4 (Suspend-to-Disk) domain states to a guest. 
As a result, supported versions of QEMU can be configured to not advertise their S3 or S4 capability to a guest. BZ#838127 With this update, support for the AMD Opteron G5 processor model has been added to the libvirt library. This change allows the user to utilize the full potential of new features, such as 16c , fma , and tbm . BZ#843087 This enhancement adds support for the next generation of Intel Core and Intel Xeon processors to the libvirt library. This generation supports the following features: fma , pcid , movbe , fsgsbase , bmi1 , hle , avx2 , smep , bmi2 , erms , invpcid , and rtm , compared to the Intel Xeon Processor E5-XXXX and Intel Xeon Processor E5-XXXX V2 family of processors. BZ#844404 When changing the configuration of a libvirt virtual network, it was necessary to restart the network for these changes to take effect. This enhancement adds a new virsh net-update command that allows certain parts of a network configuration to be modified, and the changes to be applied immediately, without requiring a restart of the network or disconnection of guests. As a result, it is now possible to add static host entries to and remove them from a network's dhcp section; change the range of IP addresses dynamically assigned by the DHCP server; modify, add, and remove portgroup elements; and add and remove interfaces from a forward element's pool of interfaces, all without restarting the network. Refer to the virsh(1) man page for more details about the virsh net-update command. BZ#860570 With this enhancement, the virsh program supports the --help option for all its commands and displays appropriate documentation. BZ#864606 With this enhancement, the libvirt library can now control the hv_relaxed feature. This feature makes a Windows guest more tolerant to long periods of inactivity. BZ# 874171 The current release of the libvirt library added several capabilities related to snapshots. Among these was the ability to create an external snapshot, whether the domain was running or was offline. Consequently, it was also necessary to improve the user interface to support those features in the virsh program. With this update, these snapshot-related improvements were added to virsh to provide full support of these features. BZ#878578 For security reasons, certain SCSI commands were blocked in a virtual machine. This behavior was related to applications where logical unit numbers ( LUNs ) of SCSI disks were passed to trusted guests. This enhancement improves libvirt to support a new sgio attribute. Setting this attribute to unfiltered allows trusted guests to invoke all supported SCSI commands. All users of libvirt are advised to upgrade to these updated packages, which fix these issues and add these enhancements. After installing the updated packages, the libvirtd daemon must be restarted using the service libvirtd restart command for this update to take effect. 7.128.3. RHSA-2013:1272 - Important: libvirt security and bug fix update Updated libvirt packages that fix two security issues and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems.
In addition, libvirt provides tools for remote management of virtualized systems. Security Fixes CVE-2013-4311 libvirt invokes the PolicyKit pkcheck utility to handle authorization. A race condition was found in the way libvirt used this utility, allowing a local user to bypass intended PolicyKit authorizations or execute arbitrary commands with root privileges. Note: With this update, libvirt has been rebuilt to communicate with PolicyKit via a different API that is not vulnerable to the race condition. The polkit RHSA-2013:1270 advisory must also be installed to fix the CVE-2013-4311 issue. CVE-2013-4296 An invalid free flaw was found in libvirtd's remoteDispatchDomainMemoryStats function. An attacker able to establish a read-only connection to libvirtd could use this flaw to crash libvirtd. The CVE-2013-4296 issue was discovered by Daniel P. Berrange of Red Hat. Bug Fixes BZ# 984556 Prior to this update, the libvirtd daemon leaked memory in the virCgroupMoveTask() function. A fix has been provided that prevents libvirtd from managing memory allocations incorrectly. BZ# 984561 Previously, the libvirtd daemon was accessing one byte before the array in the virCgroupGetValueStr() function. This bug has been fixed and libvirtd now stays within the array bounds. BZ# 984578 When migrating, libvirtd leaked the migration URI (Uniform Resource Identifier) on the destination. A patch has been provided to fix this bug and the migration URI is now freed correctly. BZ# 1003934 Updating a network interface using the virDomainUpdateDeviceFlags API failed when a boot order was set for that interface. The update failed even if the boot order was set in the provided device XML. The virDomainUpdateDeviceFlags API has been fixed to correctly parse the boot order specification from the provided device XML and updating network interfaces with boot orders now works as expected. Users of libvirt are advised to upgrade to these updated packages, which contain backported patches to correct these issues. After installing the updated packages, libvirtd will be restarted automatically. | [
"<interface type='direct'> <source dev='eth7' mode='passthrough'/> <virtualport type='802.1Qbh'> <parameters profileid='test'/> </virtualport> </interface>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libvirt |
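To make two of the enhancements in the advisory above more concrete, the following hedged sketches show an interface definition attached to an existing Open vSwitch bridge (BZ#805654) and a virsh net-update invocation that adds a static DHCP host entry without restarting the network (BZ#844404). The bridge name, VLAN tag, MAC address, and IP address are hypothetical values, not part of the original advisory:

    <interface type='bridge'>
      <!-- connect the guest NIC to an already existing Open vSwitch bridge -->
      <source bridge='ovsbr0'/>
      <virtualport type='openvswitch'/>
      <vlan>
        <tag id='42'/>
      </vlan>
      <model type='virtio'/>
    </interface>

    # add a static DHCP host entry to the "default" network, both live and persistently
    virsh net-update default add ip-dhcp-host \
      "<host mac='52:54:00:12:34:56' name='guest1' ip='192.168.122.45'/>" \
      --live --config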
Managing compliance with Enterprise Contract | Managing compliance with Enterprise Contract Red Hat Trusted Application Pipeline 1.4 Learn how Enterprise Contract enables you to better verify and govern compliance of the code you promote. Additionally, customize the sample policies to fit your corporate standards. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/managing_compliance_with_enterprise_contract/index |
Chapter 12. Provisioning [metal3.io/v1alpha1] | Chapter 12. Provisioning [metal3.io/v1alpha1] Description Provisioning contains configuration used by the Provisioning service (Ironic) to provision baremetal hosts. Provisioning is created by the OpenShift installer using admin or user provided information about the provisioning network and the NIC on the server that can be used to PXE boot it. This CR is a singleton, created by the installer and currently only consumed by the cluster-baremetal-operator to bring up and update containers in a metal3 cluster. Type object 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ProvisioningSpec defines the desired state of Provisioning status object ProvisioningStatus defines the observed state of Provisioning 12.1.1. .spec Description ProvisioningSpec defines the desired state of Provisioning Type object Property Type Description bootIsoSource string BootIsoSource provides a way to set the location where the iso image to boot the nodes will be served from. By default the boot iso image is cached locally and served from the Provisioning service (Ironic) nodes using an auxiliary httpd server. If the boot iso image is already served by an httpd server, setting this option to http allows to directly provide the image from there; in this case, the network (either internal or external) where the httpd server that hosts the boot iso is needs to be accessible by the metal3 pod. disableVirtualMediaTLS boolean DisableVirtualMediaTLS turns off TLS on the virtual media server, which may be required for hardware that cannot accept HTTPS links. preProvisioningOSDownloadURLs object PreprovisioningOSDownloadURLs is set of CoreOS Live URLs that would be necessary to provision a worker either using virtual media or PXE. provisioningDHCPExternal boolean ProvisioningDHCPExternal indicates whether the DHCP server for IP addresses in the provisioning DHCP range is present within the metal3 cluster or external to it. This field is being deprecated in favor of provisioningNetwork. provisioningDHCPRange string ProvisioningDHCPRange needs to be interpreted along with ProvisioningDHCPExternal. If the value of provisioningDHCPExternal is set to False, then ProvisioningDHCPRange represents the range of IP addresses that the DHCP server running within the metal3 cluster can use while provisioning baremetal servers. If the value of ProvisioningDHCPExternal is set to True, then the value of ProvisioningDHCPRange will be ignored. When the value of ProvisioningDHCPExternal is set to False, indicating an internal DHCP server and the value of ProvisioningDHCPRange is not set, then the DHCP range is taken to be the default range which goes from .10 to .100 of the ProvisioningNetworkCIDR. 
This is the only value in all of the Provisioning configuration that can be changed after the installer has created the CR. This value needs to be two comma separated IP addresses within the ProvisioningNetworkCIDR where the 1st address represents the start of the range and the 2nd address represents the last usable address in the range. provisioningDNS boolean ProvisioningDNS allows sending the DNS information via DHCP on the provisioning network. It is off by default since the Provisioning service itself (Ironic) does not require DNS, but it may be useful for layered products (e.g. ZTP). provisioningIP string ProvisioningIP is the IP address assigned to the provisioningInterface of the baremetal server. This IP address should be within the provisioning subnet, and outside of the DHCP range. provisioningInterface string ProvisioningInterface is the name of the network interface on a baremetal server to the provisioning network. It can have values like eth1 or ens3. provisioningMacAddresses array (string) ProvisioningMacAddresses is a list of mac addresses of network interfaces on a baremetal server to the provisioning network. Use this instead of ProvisioningInterface to allow interfaces of different names. If not provided it will be populated by the BMH.Spec.BootMacAddress of each master. provisioningNetwork string ProvisioningNetwork provides a way to indicate the state of the underlying network configuration for the provisioning network. This field can have one of the following values - Managed - when the provisioning network is completely managed by the Baremetal IPI solution. Unmanaged - when the provisioning network is present and used but the user is responsible for managing DHCP. Virtual media provisioning is recommended but PXE is still available if required. Disabled - when the provisioning network is fully disabled. User can bring up the baremetal cluster using virtual media or assisted installation. If using metal3 for power management, BMCs must be accessible from the machine networks. User should provide two IPs on the external network that would be used for provisioning services. provisioningNetworkCIDR string ProvisioningNetworkCIDR is the network on which the baremetal nodes are provisioned. The provisioningIP and the IPs in the dhcpRange all come from within this network. When using IPv6 and in a network managed by the Baremetal IPI solution this cannot be a network larger than a /64. provisioningOSDownloadURL string ProvisioningOSDownloadURL is the location from which the OS Image used to boot baremetal host machines can be downloaded by the metal3 cluster. virtualMediaViaExternalNetwork boolean VirtualMediaViaExternalNetwork flag when set to "true" allows for workers to boot via Virtual Media and contact metal3 over the External Network. When the flag is set to "false" (which is the default), virtual media deployments can still happen based on the configuration specified in the ProvisioningNetwork i.e. when in Disabled mode, over the External Network and over Provisioning Network when in Managed mode. PXE deployments will always use the Provisioning Network and will not be affected by this flag. watchAllNamespaces boolean WatchAllNamespaces provides a way to explicitly allow use of this Provisioning configuration across all Namespaces. It is an optional configuration which defaults to false and in that state will be used to provision baremetal hosts in only the openshift-machine-api namespace.
When set to true, this provisioning configuration would be used for baremetal hosts across all namespaces. 12.1.2. .spec.preProvisioningOSDownloadURLs Description PreprovisioningOSDownloadURLs is set of CoreOS Live URLs that would be necessary to provision a worker either using virtual media or PXE. Type object Property Type Description initramfsURL string InitramfsURL Image URL to be used for PXE deployments isoURL string IsoURL Image URL to be used for Live ISO deployments kernelURL string KernelURL is an Image URL to be used for PXE deployments rootfsURL string RootfsURL Image URL to be used for PXE deployments 12.1.3. .status Description ProvisioningStatus defines the observed state of Provisioning Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 12.1.4. .status.conditions Description conditions is a list of conditions and their status Type array 12.1.5. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 12.1.6. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 12.1.7. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 12.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/provisionings DELETE : delete collection of Provisioning GET : list objects of kind Provisioning POST : create a Provisioning /apis/metal3.io/v1alpha1/provisionings/{name} DELETE : delete a Provisioning GET : read the specified Provisioning PATCH : partially update the specified Provisioning PUT : replace the specified Provisioning /apis/metal3.io/v1alpha1/provisionings/{name}/status GET : read status of the specified Provisioning PATCH : partially update status of the specified Provisioning PUT : replace status of the specified Provisioning 12.2.1. /apis/metal3.io/v1alpha1/provisionings HTTP method DELETE Description delete collection of Provisioning Table 12.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Provisioning Table 12.2. 
HTTP responses HTTP code Reponse body 200 - OK ProvisioningList schema 401 - Unauthorized Empty HTTP method POST Description create a Provisioning Table 12.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.4. Body parameters Parameter Type Description body Provisioning schema Table 12.5. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 201 - Created Provisioning schema 202 - Accepted Provisioning schema 401 - Unauthorized Empty 12.2.2. /apis/metal3.io/v1alpha1/provisionings/{name} Table 12.6. Global path parameters Parameter Type Description name string name of the Provisioning HTTP method DELETE Description delete a Provisioning Table 12.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 12.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Provisioning Table 12.9. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Provisioning Table 12.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.11. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Provisioning Table 12.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.13. Body parameters Parameter Type Description body Provisioning schema Table 12.14. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 201 - Created Provisioning schema 401 - Unauthorized Empty 12.2.3. /apis/metal3.io/v1alpha1/provisionings/{name}/status Table 12.15. Global path parameters Parameter Type Description name string name of the Provisioning HTTP method GET Description read status of the specified Provisioning Table 12.16. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Provisioning Table 12.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.18. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Provisioning Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body Provisioning schema Table 12.21. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 201 - Created Provisioning schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/provisioning_apis/provisioning-metal3-io-v1alpha1 |
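For orientation, a minimal Provisioning custom resource built from the spec fields documented above might look like the following sketch; the resource name (commonly provisioning-configuration) and all field values are illustrative assumptions and must be adapted to the actual provisioning network:

    apiVersion: metal3.io/v1alpha1
    kind: Provisioning
    metadata:
      name: provisioning-configuration   # assumed name of the singleton CR
    spec:
      provisioningNetwork: Managed        # Managed, Unmanaged, or Disabled
      provisioningInterface: ens3
      provisioningIP: 172.22.0.3
      provisioningNetworkCIDR: 172.22.0.0/24
      provisioningDHCPRange: 172.22.0.10,172.22.0.100
      watchAllNamespaces: false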
Chapter 3. OAuthAuthorizeToken [oauth.openshift.io/v1] | Chapter 3. OAuthAuthorizeToken [oauth.openshift.io/v1] Description OAuthAuthorizeToken describes an OAuth authorization token Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources clientName string ClientName references the client that created this token. codeChallenge string CodeChallenge is the optional code_challenge associated with this authorization code, as described in rfc7636 codeChallengeMethod string CodeChallengeMethod is the optional code_challenge_method associated with this authorization code, as described in rfc7636 expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURI string RedirectURI is the redirection associated with the token. scopes array (string) Scopes is an array of the requested scopes. state string State data from request userName string UserName is the user name associated with this token userUID string UserUID is the unique UID associated with this token. UserUID and UserName must both match for this token to be valid. 3.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthauthorizetokens DELETE : delete collection of OAuthAuthorizeToken GET : list or watch objects of kind OAuthAuthorizeToken POST : create an OAuthAuthorizeToken /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens GET : watch individual changes to a list of OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthauthorizetokens/{name} DELETE : delete an OAuthAuthorizeToken GET : read the specified OAuthAuthorizeToken PATCH : partially update the specified OAuthAuthorizeToken PUT : replace the specified OAuthAuthorizeToken /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/{name} GET : watch changes to an object of kind OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/oauth.openshift.io/v1/oauthauthorizetokens HTTP method DELETE Description delete collection of OAuthAuthorizeToken Table 3.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status_v6 schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthAuthorizeToken Table 3.3. 
HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeTokenList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthAuthorizeToken Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body OAuthAuthorizeToken schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 202 - Accepted OAuthAuthorizeToken schema 401 - Unauthorized Empty 3.2.2. /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens HTTP method GET Description watch individual changes to a list of OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead. Table 3.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/oauth.openshift.io/v1/oauthauthorizetokens/{name} Table 3.8. Global path parameters Parameter Type Description name string name of the OAuthAuthorizeToken HTTP method DELETE Description delete an OAuthAuthorizeToken Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.10. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 202 - Accepted OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthAuthorizeToken Table 3.11. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthAuthorizeToken Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthAuthorizeToken Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. Body parameters Parameter Type Description body OAuthAuthorizeToken schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 401 - Unauthorized Empty 3.2.4. /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/{name} Table 3.17. Global path parameters Parameter Type Description name string name of the OAuthAuthorizeToken HTTP method GET Description watch changes to an object of kind OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/oauth_apis/oauthauthorizetoken-oauth-openshift-io-v1 |
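As a rough sketch of the object shape, an OAuthAuthorizeToken rendered as YAML might look like the following; the client name, user, scopes, and redirect URI are hypothetical values chosen for illustration, and real authorization tokens are short-lived objects created by the OAuth server rather than by hand. Existing objects can be listed with a command such as oc get oauthauthorizetokens:

    apiVersion: oauth.openshift.io/v1
    kind: OAuthAuthorizeToken
    metadata:
      name: <token-name>                  # placeholder
    clientName: openshift-challenging-client
    userName: developer
    userUID: 6a1fd397-0c5e-4b0a-9d6a-000000000000
    scopes:
    - user:full
    redirectURI: https://oauth-openshift.apps.example.com/oauth/token/implicit
    expiresIn: 300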
5.5. Additional Resources | 5.5. Additional Resources Below are additional sources of information about proc file system. 5.5.1. Installed Documentation Below is a list of directories you can consult for more information about the proc file system. These documents are installed through the kernel-doc package. /usr/share/doc/kernel-doc- <version> /Documentation/filesystems/proc.txt - Contains assorted, but limited, information about all aspects of the /proc/ directory. /usr/share/doc/kernel-doc- <version> /Documentation/sysrq.txt - An overview of System Request Key options. /usr/share/doc/kernel-doc- <version> /Documentation/sysctl/ - A directory containing a variety of sysctl tips, including modifying values that concern the kernel ( kernel.txt ), accessing file systems ( fs.txt ), and virtual memory use ( vm.txt ). /usr/share/doc/kernel-doc- <version> /Documentation/networking/ip-sysctl.txt - A detailed overview of IP networking options. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-proc-additional-resources |
function::ns_ppid | function::ns_ppid Name function::ns_ppid - Returns the process ID of a target process's parent process as seen in a pid namespace Synopsis Arguments None Description This function returns the process ID of the target process's parent process as seen in the target pid namespace if provided, or the stap process namespace. | [
"ns_ppid:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ns-ppid |
8.180. rhn-client-tools | 8.180. rhn-client-tools 8.180.1. RHBA-2013:1702 - rhn-client-tools bug fix update Updated rhn-client-tools packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Red Hat Network Client Tools provide programs and libraries that allow the system to receive software updates from Red Hat Network (RHN). Bug Fixes BZ# 891746 Previously, the rhn-channel manual page incorrectly referred to the "--username" option instead of "--user". This mistake has been corrected and the rhn-channel manual page now correctly refers to the "--user" option. BZ# 912984 Prior to this update, some messages written in English occurred in the Japanese installation of Red Hat Enterprise Linux 6.4. The untranslated strings have been translated, and the messages are now shown in the correct language. BZ# 983999 Previously, the rhn-client-tools code called by the sosreport utility on Red Hat Enterprise Linux 6 terminated unexpectedly with a traceback. The bug has been fixed and and the information about hardware is now correctly gathered by sosreport. BZ# 994531 Previously, a machine with many CPUs could report large value for idle time for all its processors. Consequently, the idle time value did not fit into XML-RPC's integer limits and running the rhn_check command on a problematic machine resulted in a traceback error. The bug has been fixed and rhn_check in the problematic scenarios works now correctly. BZ# 997637 Previously, the rhn-profile-sync utility terminated unexpectedly with a traceback when an older version of the rhn-virtualization-host package was installed on the machine. The bug has been fixed by requiring a newer version of rhn-virtualization-host. Users of rhn-client-tools are advised to upgrade to these updated packages, which fix these bugs. 8.180.2. RHBA-2013:1087 - rhn-client-tools bug fix and enhancement update Updated rhn-client-tools packages that fix one bug and add one enhancement are now available. Red Hat Network Client Tools provide programs and libraries that allow systems to receive software updates from Red Hat Network (RHN). Bug Fix BZ#949648 The RHN Proxy did not work properly if separated from a parent by a slow enough network. Consequently, users who attempted to download larger repodata files and RPMs experienced timeouts. This update changes both RHN Proxy and Red Hat Enterprise Linux RHN Client to allow all communications to obey a configured timeout value for connections. Enhancement BZ# 949640 While Satellite 5.3.0 now has the ability to get the number of CPUs via an API call, there was no function to obtain the number of sockets from the registered systems. This update adds a function to get the number of physical CPU sockets in a managed system from Satellite via an API call. Users of rhn-client-tools are advised to upgrade to these updated packages, which fix this bug and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rhn-client-tools |
Chapter 5. General Updates | Chapter 5. General Updates New package: redhat-access-insights Red Hat Access Insights is a proactive service designed to enable you to identify, examine, and resolve known technical issues before they affect your deployment. Insights leverages the combined knowledge of Red Hat Support Engineers, documented solutions, and resolved issues to deliver relevant, actionable information to system administrators. The service is hosted and delivered through the customer portal at https://access.redhat.com/insights/ or via Red Hat Satellite. To register your systems, please follow the latest Getting Started Guide for Insights, which is available at: https://access.redhat.com/insights/getting-started/ . redhat-release-server includes a fallback product certificate In some scenarios, it is possible to install Red Hat Enterprise Linux without a corresponding product certificate. To ensure that a product certificate is always present for registration, a fallback certificate is now delivered with redhat-release-server . Increased gPXE retry timeout values This update increases the retry timeout values used by gPXE to conform to RFC 2131 and the PXE specification. The total timeout is now 60 seconds. Enhanced maintainability for Linux IPL code A new version of the zipl boot loader makes inclusion of bug fixes and new features in the boot loader easier. Improved performance of the dasdfmt utility The kernel internal handling of format requests has been reorganized and the usage of the PAV feature is now enabled to accelerate format requests. This feature speeds up formatting of large DASDs in use today and prepares for even larger DASDs that are expected to come in the future. lscss supports verified path masks The lscss utility on IBM System z, which gathers and displays subchannel information from sysfs , now displays a verified path mask when listing I/O devices. wireshark supports reading from stdin Previously when using process substitution with large files as input wireshark would fail to properly decode such input; as of the latest version wireshark now successfully reads these files. Boot menu in seabios accessible with Esc key The boot menu in seabios is now accessible by pressing the Esc key. This makes the boot menu accessible on systems such as OS X which may intercept certain functions keys, including F12 which was used previously, and use them for other functions. wireshark supports nanosecond precision Previously wireshark only included microseconds in the pcapng format; however, as of the latest version wireshark now supports nanosecond precision to allow for more accurate timestamps. lsdasd supports detailed path information for DASDs The lsdasd utility, which is used to gather and display information about DASD devices on IBM System z, now shows detailed path information such as installed and in-use paths. lsqeth now displays switch port attributes The lsqeth tool, which is used on IBM System z to list qeth-based network device parameters, now includes switch port attributes (displayed as switch_attrs ) in its output. fdasd supports GPFS partitions The fdasd utility, which is used to manage disk partitions on ECKD DASDs on IBM System z, now recognizes GPFS as a supported partition type. ppc64-diag rebase to version 2.6.7 The ppc64-diag packages have been upgraded to upstream version 2.6.7, which provides a number of bug fixes and enhancements over the version. 
Support for OpenJDK 8 added to JPackage Utilities OpenJDK 8 was added to RHEL 6.6 but system Java applications were not able to be run with it due to lack of OpenJDK 8 support in the jpackage-utils package. This has been resolved, and the RHEL 6.7 jpackage-utils package includes support for system applications to be run with OpenJDK 8. preupgrade-assistant supports different modes for upgrading and migrating To support the different operating modes of the preupg command, additional options are now available in the configuration files. This enables the tool to return only the required data for the operating mode selected. Currently only upgrade mode is supported. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_release_notes/general_updates |
16.2. Common SSM Tasks | 16.2. Common SSM Tasks The following sections describe common SSM tasks. 16.2.1. Installing SSM To install SSM, use the following command: There are several back ends that are enabled only if the supporting packages are installed: The LVM back end requires the lvm2 package. The Btrfs back end requires the btrfs-progs package. The Crypt back end requires the device-mapper and cryptsetup packages. 16.2.2. Displaying Information about All Detected Devices Displaying information about all detected devices, pools, volumes, and snapshots is done with the list command. The ssm list command with no options displays the following output: This display can be further narrowed down by using arguments to specify what should be displayed. The list of available options can be found with the ssm list --help command. Note Depending on the argument given, SSM may not display everything. Running the devices or dev argument omits some devices. CDRoms and DM/MD devices, for example, are intentionally hidden as they are listed as volumes. Some back ends do not support snapshots and cannot distinguish between a snapshot and a regular volume. Running the snapshot argument on one of these back ends causes SSM to attempt to recognize the volume name in order to identify a snapshot. If the SSM regular expression does not match the snapshot pattern then the snapshot is not recognized. With the exception of the main Btrfs volume (the file system itself), any unmounted Btrfs volumes are not shown. 16.2.3. Creating a New Pool, Logical Volume, and File System In this section, a new pool with a default name is created using the devices /dev/vdb and /dev/vdc , with a logical volume of 1G and an XFS file system. The command to create this scenario is ssm create --fs xfs -s 1G /dev/vdb /dev/vdc . The following options are used: The --fs option specifies the required file system type. Current supported file system types are: ext3 ext4 xfs btrfs The -s option specifies the size of the logical volume. The following suffixes are supported to define units: K or k for kilobytes M or m for megabytes G or g for gigabytes T or t for terabytes P or p for petabytes E or e for exabytes Additionally, with the -s option, the new size can be specified as a percentage. For example: 10% for 10 percent of the total pool size 10%FREE for 10 percent of the free pool space 10%USED for 10 percent of the used pool space The two listed devices, /dev/vdb and /dev/vdc , are the devices used to create the pool. There are two other options for the ssm command that may be useful. The first is the -p pool option. This specifies the pool the volume is to be created on. If it does not yet exist, then SSM creates it. This was omitted in the given example which caused SSM to use the default name lvm_pool . However, to use a specific name to fit in with any existing naming conventions, the -p option should be used. The second useful option is the -n name option. This names the newly created logical volume. As with the -p option, this is needed in order to use a specific name to fit in with any existing naming conventions. An example of these two options being used follows: SSM has now created two physical volumes, a pool, and a logical volume with the ease of only one command. 16.2.4. Checking a File System's Consistency The ssm check command checks the file system consistency on the volume. It is possible to specify multiple volumes to check. If there is no file system on the volume, then the volume is skipped.
To check all devices in the volume lvol001 , run the command ssm check /dev/lvm_pool/lvol001 . 16.2.5. Increasing a Volume's Size The ssm resize command changes the size of the specified volume and file system. If there is no file system then only the volume itself will be resized. For this example, we currently have one logical volume on /dev/vdb that is 900MB called lvol001 . The logical volume needs to be increased by another 500MB. To do so we will need to add an extra device to the pool: SSM runs a check on the device and then extends the volume by the specified amount. This can be verified with the ssm list command. Note It is only possible to decrease an LVM volume's size; it is not supported with other volume types. This is done by using a - instead of a + . For example, to decrease the size of an LVM volume by 50M the command would be: Without either the + or - , the value is taken as absolute. 16.2.6. Snapshot To take a snapshot of an existing volume, use the ssm snapshot command. Note This operation fails if the back end that the volume belongs to does not support snapshotting. To create a snapshot of the lvol001 , use the following command: To verify this, use the ssm list , and note the extra snapshot section. 16.2.7. Removing an Item The ssm remove is used to remove an item, either a device, pool, or volume. Note If a device is being used by a pool when removed, it will fail. This can be forced using the -f argument. If the volume is mounted when removed, it will fail. Unlike the device, it cannot be forced with the -f argument. To remove the lvm_pool and everything within it use the following command: | [
"yum install system-storage-manager",
"ssm list ---------------------------------------------------------- Device Free Used Total Pool Mount point ---------------------------------------------------------- /dev/sda 2.00 GB PARTITIONED /dev/sda1 47.83 MB /test /dev/vda 15.00 GB PARTITIONED /dev/vda1 500.00 MB /boot /dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel ---------------------------------------------------------- ------------------------------------------------ Pool Type Devices Free Used Total ------------------------------------------------ rhel lvm 1 0.00 KB 14.51 GB 14.51 GB ------------------------------------------------ --------------------------------------------------------------------------------- Volume Pool Volume size FS FS size Free Type Mount point --------------------------------------------------------------------------------- /dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear / /dev/rhel/swap rhel 1000.00 MB linear /dev/sda1 47.83 MB xfs 44.50 MB 44.41 MB part /test /dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part /boot ---------------------------------------------------------------------------------",
"ssm create --fs xfs -s 1G /dev/vdb /dev/vdc Physical volume \"/dev/vdb\" successfully created Physical volume \"/dev/vdc\" successfully created Volume group \"lvm_pool\" successfully created Logical volume \"lvol001\" created",
"ssm create --fs xfs -p new_pool -n XFS_Volume /dev/vdd Volume group \"new_pool\" successfully created Logical volume \"XFS_Volume\" created",
"ssm check /dev/lvm_pool/lvol001 Checking xfs file system on '/dev/mapper/lvm_pool-lvol001'. Phase 1 - find and verify superblock Phase 2 - using internal log - scan filesystem freespace and inode maps - found root inode chunk Phase 3 - for each AG - scan (but don't clear) agi unlinked lists - process known inodes and perform inode discovery - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - process newly discovered inodes Phase 4 - check for duplicate blocks - setting up duplicate extent list - check for inodes claiming duplicate blocks - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 No modify flag set, skipping phase 5 Phase 6 - check inode connectivity - traversing filesystem - traversal finished - moving disconnected inodes to lost+found Phase 7 - verify link counts No modify flag set, skipping filesystem flush and exiting.",
"ssm list ----------------------------------------------------------------- Device Free Used Total Pool Mount point ----------------------------------------------------------------- /dev/vda 15.00 GB PARTITIONED /dev/vda1 500.00 MB /boot /dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel /dev/vdb 120.00 MB 900.00 MB 1.00 GB lvm_pool /dev/vdc 1.00 GB ----------------------------------------------------------------- --------------------------------------------------------- Pool Type Devices Free Used Total --------------------------------------------------------- lvm_pool lvm 1 120.00 MB 900.00 MB 1020.00 MB rhel lvm 1 0.00 KB 14.51 GB 14.51 GB --------------------------------------------------------- -------------------------------------------------------------------------------------------- Volume Pool Volume size FS FS size Free Type Mount point -------------------------------------------------------------------------------------------- /dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear / /dev/rhel/swap rhel 1000.00 MB linear /dev/lvm_pool/lvol001 lvm_pool 900.00 MB xfs 896.67 MB 896.54 MB linear /dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part /boot --------------------------------------------------------------------------------------------",
"~]# ssm resize -s +500M /dev/lvm_pool/lvol001 /dev/vdc Physical volume \"/dev/vdc\" successfully created Volume group \"lvm_pool\" successfully extended Phase 1 - find and verify superblock Phase 2 - using internal log - scan filesystem freespace and inode maps - found root inode chunk Phase 3 - for each AG - scan (but don't clear) agi unlinked lists - process known inodes and perform inode discovery - agno = 0 - agno = 1 - agno = 2 - agno = 3 - process newly discovered inodes Phase 4 - check for duplicate blocks - setting up duplicate extent list - check for inodes claiming duplicate blocks - agno = 0 - agno = 1 - agno = 2 - agno = 3 No modify flag set, skipping phase 5 Phase 6 - check inode connectivity - traversing filesystem - traversal finished - moving disconnected inodes to lost+found Phase 7 - verify link counts No modify flag set, skipping filesystem flush and exiting. Extending logical volume lvol001 to 1.37 GiB Logical volume lvol001 successfully resized meta-data=/dev/mapper/lvm_pool-lvol001 isize=256 agcount=4, agsize=57600 blks = sectsz=512 attr=2, projid32bit=1 = crc=0 data = bsize=4096 blocks=230400, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=0 log =internal bsize=4096 blocks=853, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 data blocks changed from 230400 to 358400",
"ssm list ------------------------------------------------------------------ Device Free Used Total Pool Mount point ------------------------------------------------------------------ /dev/vda 15.00 GB PARTITIONED /dev/vda1 500.00 MB /boot /dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel /dev/vdb 0.00 KB 1020.00 MB 1.00 GB lvm_pool /dev/vdc 640.00 MB 380.00 MB 1.00 GB lvm_pool ------------------------------------------------------------------ ------------------------------------------------------ Pool Type Devices Free Used Total ------------------------------------------------------ lvm_pool lvm 2 640.00 MB 1.37 GB 1.99 GB rhel lvm 1 0.00 KB 14.51 GB 14.51 GB ------------------------------------------------------ ---------------------------------------------------------------------------------------------- Volume Pool Volume size FS FS size Free Type Mount point ---------------------------------------------------------------------------------------------- /dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear / /dev/rhel/swap rhel 1000.00 MB linear /dev/lvm_pool/lvol001 lvm_pool 1.37 GB xfs 1.36 GB 1.36 GB linear /dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part /boot ----------------------------------------------------------------------------------------------",
"ssm resize -s-50M /dev/lvm_pool/lvol002 Rounding size to boundary between physical extents: 972.00 MiB WARNING: Reducing active logical volume to 972.00 MiB THIS MAY DESTROY YOUR DATA (filesystem etc.) Do you really want to reduce lvol002? [y/n]: y Reducing logical volume lvol002 to 972.00 MiB Logical volume lvol002 successfully resized",
"ssm snapshot /dev/lvm_pool/lvol001 Logical volume \"snap20150519T130900\" created",
"ssm list ---------------------------------------------------------------- Device Free Used Total Pool Mount point ---------------------------------------------------------------- /dev/vda 15.00 GB PARTITIONED /dev/vda1 500.00 MB /boot /dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel /dev/vdb 0.00 KB 1020.00 MB 1.00 GB lvm_pool /dev/vdc 1.00 GB ---------------------------------------------------------------- -------------------------------------------------------- Pool Type Devices Free Used Total -------------------------------------------------------- lvm_pool lvm 1 0.00 KB 1020.00 MB 1020.00 MB rhel lvm 1 0.00 KB 14.51 GB 14.51 GB -------------------------------------------------------- ---------------------------------------------------------------------------------------------- Volume Pool Volume size FS FS size Free Type Mount point ---------------------------------------------------------------------------------------------- /dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear / /dev/rhel/swap rhel 1000.00 MB linear /dev/lvm_pool/lvol001 lvm_pool 900.00 MB xfs 896.67 MB 896.54 MB linear /dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part /boot ---------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------- Snapshot Origin Pool Volume size Size Type ---------------------------------------------------------------------------------- /dev/lvm_pool/snap20150519T130900 lvol001 lvm_pool 120.00 MB 0.00 KB linear ----------------------------------------------------------------------------------",
"ssm remove lvm_pool Do you really want to remove volume group \"lvm_pool\" containing 2 logical volumes? [y/n]: y Do you really want to remove active logical volume snap20150519T130900? [y/n]: y Logical volume \"snap20150519T130900\" successfully removed Do you really want to remove active logical volume lvol001? [y/n]: y Logical volume \"lvol001\" successfully removed Volume group \"lvm_pool\" successfully removed"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ssm-common-tasks |
4.5.6. Logging Configuration | 4.5.6. Logging Configuration Clicking on the Logging tab displays the Logging Configuration page, which provides an interface for configuring logging settings. You can configure the following settings for global logging configuration: Checking Log Debugging Messages enables debugging messages in the log file. Checking Log Messages to Syslog enables messages to syslog . You can select the Syslog Message Facility and the Syslog Message Priority . The Syslog Message Priority setting indicates that messages at the selected level and higher are sent to syslog . Checking Log Messages to Log File enables messages to the log file. You can specify the Log File Path name. The logfile message priority setting indicates that messages at the selected level and higher are written to the log file. You can override the global logging settings for specific daemons by selecting one of the daemons listed beneath the Daemon-specific Logging Overrides heading at the bottom of the Logging Configuration page. After selecting the daemon, you can check whether to log the debugging messages for that particular daemon. You can also specify the syslog and log file settings for that daemon. Click Apply for the logging configuration changes you have specified to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-logging-conga-ca |
Chapter 4. Authoring devfiles | Chapter 4. Authoring devfiles Section 4.1, "Authoring devfiles version 1" Section 4.2, "Authoring a devfile 2" 4.1. Authoring devfiles version 1 This section explains the concept of a devfile and how to configure a CodeReady Workspaces workspace by using a devfile of the 1.0 specification. 4.1.1. What is a devfile A devfile is a file that describes and defines a development environment: The source code. The development components, such as browser IDE tools and application runtimes. A list of pre-defined commands. Projects to clone. A devfile is a YAML file that CodeReady Workspaces consumes and transforms into a cloud workspace composed of multiple containers. It is possible to store a devfile remotely or locally, in any number of ways, such as: In a Git repository, in the root folder, or on a feature branch. On a publicly accessible web server, accessible through HTTP. Locally as a file, and deployed using crwctl . In a collection of devfiles, known as a devfile registry . When creating a workspace, CodeReady Workspaces uses that definition to initiate everything and run all the containers for the required tools and application runtimes. CodeReady Workspaces also mounts file-system volumes to make source code available to the workspace. Devfiles can be versioned with the project source code. When there is a need for a workspace to fix an old maintenance branch, the project devfile provides a definition of the workspace with the tools and the exact dependencies to start working on the old branch. Use it to instantiate workspaces on demand. CodeReady Workspaces keeps the devfile up-to-date with the tools used in the workspace: Elements of the project, such as the path, Git location, or branch. Commands to perform daily tasks such as build, run, test, and debug. The runtime environment with its container images needed for the application to run. Che-Theia plug-ins with tools, IDE features, and helpers that a developer would use in the workspace, for example, Git, Java support, SonarLint, and Pull Request. 4.1.2. A minimal devfile The following is the minimum content required in a devfile: apiVersion metadata name apiVersion: 1.0.0 metadata: name: crw-in-crw-out For a complete devfile example, see Red Hat CodeReady Workspaces in CodeReady Workspaces devfile.yaml . Note The generateName and name parameters are both optional, but at least one of them must be defined. When both attributes are specified, generateName is ignored. See Section 4.1.3, "Generating workspace names" . metadata: generateName: or metadata: name: 4.1.3. Generating workspace names To specify a prefix for automatically generated workspace names, set the generateName parameter in the devfile: apiVersion: 1.0.0 metadata: generateName: crw- The workspace name will be in the <generateName>YYYYY format (for example, che-2y7kp ). Y is a random [a-z0-9] character. The following naming rules apply when creating workspaces: When name is defined, it is used as the workspace name: <name> When only generateName is defined, it is used as the base of the generated name: <generateName>YYYYY Note For workspaces created using a factory, defining name or generateName has the same effect. The defined value is used as the name prefix: <name>YYYYY or <generateName>YYYYY . When both generateName and name are defined, generateName takes precedence. 4.1.4.
Writing a devfile for a project This section describes how to create a minimal devfile for your project and how to include more than one project in a devfile. 4.1.4.1. Preparing a minimal devfile A minimal devfile sufficient to run a workspace consists of the following parts: Specification version Name Example of a minimal devfile with no project apiVersion: 1.0.0 metadata: name: minimal-workspace Without any further configuration, a workspace with the default editor is launched along with its default plug-ins, which are configured on the CodeReady Workspaces Server. Che-Theia is configured as the default editor along with the CodeReady Workspaces Machine Exec plug-in. When launching a workspace within a Git repository using a factory, the project from the given repository and branch is created by default. The project name then matches the repository name. Add the following parts for a more functional workspace: List of components: Development components and user runtimes List of projects: Source code repositories List of commands: Actions to manage the workspace components, such as running the development tools, starting the runtime environments, and others Example of a minimal devfile with a project apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/spring-projects/spring-petclinic.git' components: - type: chePlugin id: redhat/java/latest 4.1.4.2. Specifying multiple projects in a devfile A single devfile can define multiple projects, which are cloned to the desired destination. These projects are created inside a user's workspace after the workspace is started. For each project, specify the following: The type of the source repository - this can be git or zip . For additional information, see the Devfile reference section. The location of the source repository - a URL to a Git repository or zip archive. Optionally, the directory to which the project is cloned. If none is specified, the default directory is used, which is a directory that matches the project name or project Git repository. Example of a devfile with two projects In the following example, the projects frontend and backend act as examples of a user's projects. Each project is located in a separate repository. The backend project has a specific requirement to be cloned into the src/github.com/ <github-organization> / <backend> / directory under the source root, implicitly defined by the CodeReady Workspaces runtime. The frontend project will be cloned into the <frontend/> directory under the source root. apiVersion: 1.0.0 metadata: name: example-devfile projects: - name: <frontend> source: type: git location: https://github.com/ <github-organization> / <frontend> .git - name: <backend> clonePath: src/github.com/ <github-organization> / <backend> source: type: git location: https://github.com/ <github-organization> / <backend> .git Additional resources For a detailed explanation of all devfile component assignments and possible values, see: Specification repository Detailed json-schema documentation These sample devfiles are a good source of inspiration: Sample devfiles for Red Hat CodeReady Workspaces workspaces used by default in the user interface . Sample devfiles for Red Hat CodeReady Workspaces workspaces from Red Hat Developer program . 4.1.5. Devfile reference This section contains devfile reference and instructions on how to use the various elements that devfiles consist of. 4.1.5.1.
Adding schema version to a devfile Procedure Define the schemaVersion attribute in the devfile: Example 4.1. Adding schema version to a devfile schemaVersion: 1.0.0 4.1.5.2. Adding a name to a devfile Adding a name to a devfile is mandatory. Both name and generateName are optional attributes, but at least one of them must be defined. Procedure To specify a static name for the workspace, define the name attribute. Adding a static name to a devfile schemaVersion: 1.0.0 metadata: name: devfile-sample To specify a prefix for automatically generated workspace names, define the generateName attribute and do not define the name attribute. The workspace name will be in the <generateName>YYYYY format, for example, devfile-sample-2y7kp , where Y is a random [a-z0-9] character. Adding a generated name to a devfile schemaVersion: 1.0.0 metadata: generateName: devfile-sample- Note For workspaces created using a factory, defining name or generateName has the same effect. The defined value is used as the name prefix: <name>YYYYY or <generateName>YYYYY . When both generateName and name are defined, generateName takes precedence. 4.1.5.3. Adding projects to a devfile A devfile is designed to contain one or more projects. A workspace is created to develop those projects. Projects are added in the projects section of devfiles. Each project in a single devfile must have: Unique name Source specified Project source consists of two mandatory values: type and location . type The kind of project-source provider. location The URL of project source. CodeReady Workspaces supports the following project types: git Projects with sources in Git. The location points to a clone link. github Same as git but for projects hosted on GitHub only. Use git for projects that do not use GitHub-specific features. zip Projects with sources in a zip archive. Location points to a zip file. 4.1.5.3.1. Project-source type: git source: type: git location: https://github.com/eclipse-che/che-server.git startPoint: main 1 tag: 7.34.0 commitId: 36fe587 branch: 7.34.x sparseCheckoutDir: core 2 1 startPoint : The general value for tag , commitId , and branch . The startPoint , tag , commitId , and branch parameters are mutually exclusive. When more than one is supplied, the following order is used: startPoint , tag , commitId , branch . 2 sparseCheckoutDir : The template for the sparse checkout Git feature. This is useful when only a part of a project, typically a single directory, is needed. Example 4.2. sparseCheckoutDir parameter settings Set to /my-module/ to create only the root my-module directory (and its content). Omit the leading slash ( my-module/ ) to create all my-module directories that exist in the project. Including, for example, /addons/my-module/ . The trailing slash indicates that only directories with the given name (including their content) are created. Use wildcards to specify more than one directory name. For example, setting module-* checks out all directories of the given project that start with module- . For more information, see Sparse checkout in Git documentation . 4.1.5.3.2. Project-source type: zip source: type: zip location: http://host.net/path/project-src.zip 4.1.5.3.3. Project clone-path parameter: clonePath The clonePath parameter specifies the path into which the project is to be cloned. The path must be relative to the /projects/ directory, and it cannot leave the /projects/ directory. The default value is the project name. 
Example devfile with projects apiVersion: 1.0.0 metadata: name: my-project-dev projects: - name: my-project-resourse clonePath: resources/my-project source: type: zip location: http://host.net/path/project-res.zip - name: my-project source: type: git location: https://github.com/my-org/project.git branch: develop 4.1.5.4. Adding components to a devfile Each component in a single devfile must have a unique name. 4.1.5.4.1. Component type: cheEditor Describes the editor used in the workspace by defining its id . A devfile can only contain one component of the cheEditor type. components: - alias: theia-editor type: cheEditor id: eclipse/che-theia/ When cheEditor is missing, a default editor is provided along with its default plug-ins. The default plug-ins are also provided for an explicitly defined editor with the same id as the default one (even if it is a different version). Che-Theia is configured as default editor along with the CodeReady Workspaces Machine Exec plug-in. To specify that a workspace requires no editor, use the editorFree:true attribute in the devfile attributes. 4.1.5.4.2. Component type: chePlugin Describes plug-ins in a workspace by defining their id . A devfile is allowed to have multiple chePlugin components. components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/latest Both types above use an ID, which is slash-separated publisher, name and version of plug-in from the CodeReady Workspaces Plug-in registry. Note that the CodeReady Workspaces Plug-in registry uses the latest version by default for all plug-ins. To reference a custom plug-in by ID, build and deploy a custom plug-in registry. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/administration_guide/index#building-custom-registry-images.adoc . 4.1.5.4.3. Specifying an alternative component registry To specify an alternative registry for the cheEditor and chePlugin component types, use the registryUrl parameter: components: - alias: exec-plugin type: chePlugin registryUrl: https://my-customregistry.com id: eclipse/che-machine-exec-plugin/latest 4.1.5.4.4. Specifying a component by linking to its descriptor Rather than using the editor or plug-in id to specify cheEditor or chePlugin , provide a direct link to the component descriptor, typically named as meta.yaml , using the reference field: components: - alias: exec-plugin type: chePlugin reference: https://raw.githubusercontent.com.../plugin/1.0.1/meta.yaml The URL in the reference field must be publicly accessible and should directly point to a fetchable meta.yaml file. URLs that redirect or do not directly point to a meta.yaml file will cause the workspace startup to fail. To learn more about publishing meta.yaml files, see Section 5.4, "Publishing metadata for a Visual Studio Code extension" . Note It is impossible to mix the id and reference fields in a single component definition; they are mutually exclusive. 4.1.5.4.5. Tuning chePlugin component configuration A chePlugin component may need to be precisely tuned, and in such case, component preferences can be used. The example shows how to configure JVM using plug-in preferences. id: redhat/java/latest type: chePlugin preferences: java.jdt.ls.vmargs: '-noverify -Xmx1G -XX:+UseG1GC -XX:+UseStringDeduplication' Preferences may also be specified as an array: id: redhat/java/latest type: chePlugin preferences: go.lintFlags: ["--enable-all", "--new"] 4.1.5.4.6. 
Component type: kubernetes A complex component type that allows you to apply configuration from a list of OpenShift components. The content can be provided through the reference attribute, which points to the file with the component content. components: - alias: mysql type: kubernetes reference: petclinic.yaml selector: app.kubernetes.io/name: mysql app.kubernetes.io/component: database app.kubernetes.io/part-of: petclinic Alternatively, to post a devfile with such components to REST API, the contents of the OpenShift List object can be embedded into the devfile using the referenceContent field: components: - alias: mysql type: kubernetes reference: petclinic.yaml referenceContent: | kind: List items: - apiVersion: v1 kind: Pod metadata: name: ws spec: containers: ... etc 4.1.5.4.7. Overriding container entrypoints You can override the entrypoints (the command and args values, as understood by OpenShift) of the containers defined in the referenced list. There can be more containers in the list (contained in Pods or Pod templates of Deployments), so you must select which containers to apply the entrypoint changes to. The entrypoints can be defined as follows: components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml entrypoints: - parentName: mysqlServer command: ['sleep'] args: ['infinity'] - parentSelector: app: prometheus args: ['-f', '/opt/app/prometheus-config.yaml'] The entrypoints list contains constraints for picking the containers along with the command and args parameters to apply to them. In the example above, the constraint is parentName: mysqlServer , which will cause the command to be applied to all containers defined in any parent object called mysqlServer . The parent object is assumed to be a top level object in the list defined in the referenced file, which is app-deployment.yaml in the example above. Other types of constraints (and their combinations) are possible: containerName the name of the container parentName the name of the parent object that (indirectly) contains the containers to override parentSelector the set of labels the parent object needs to have A combination of these constraints can be used to precisely locate the containers inside the referenced OpenShift List . 4.1.5.4.8. Overriding container environment variables To provision or override environment variables in an OpenShift component, configure it in the following way: components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml env: - name: ENV_VAR value: value This is useful for temporary changes or when you do not have access to edit the referenced content. The specified environment variables are provisioned into each init container and containers inside all Pods and Deployments. 4.1.5.4.9. Specifying mount-source option To mount the project sources directory into the component's container(s), use the mountSources parameter: components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml mountSources: true If enabled, project sources mounts will be applied to every container of the given component. This parameter is also applicable for chePlugin type components. 4.1.5.4.10. Component type: dockerimage A component type that allows you to define a container image-based configuration of a container in a workspace. The dockerimage component type brings custom tools into the workspace. The component is identified by its image.
components: - alias: maven type: dockerimage image: quay.io/eclipse/che-java11-maven:nightly volumes: - name: mavenrepo containerPath: /root/.m2 env: - name: ENV_VAR value: value endpoints: - name: maven-server port: 3101 attributes: protocol: http secure: 'true' public: 'true' discoverable: 'false' memoryLimit: 1536M memoryRequest: 256M command: ['tail'] args: ['-f', '/dev/null'] Example of a minimal dockerimage component apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi command: ['sleep', 'infinity'] It specifies the type of the component, dockerimage , and the image attribute names the image to be used for the component using the usual Docker naming conventions; that is, the above image attribute is equal to docker.io/library/golang:latest . A dockerimage component has many features that enable augmenting the image with additional resources and information needed for meaningful integration of the tool provided by the image with Red Hat CodeReady Workspaces. 4.1.5.4.11. Mounting project sources For the dockerimage component to have access to the project sources, you must set the mountSources attribute to true . apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] The sources are mounted at a location stored in the CHE_PROJECTS_ROOT environment variable that is made available in the running container of the image. This location defaults to /projects . 4.1.5.4.12. Container entrypoint The command attribute of the dockerimage , along with other arguments, is used to modify the entrypoint command of the container created from the image. In Red Hat CodeReady Workspaces, the container needs to run indefinitely so that you can connect to it and execute arbitrary commands in it at any time. Because the availability of the sleep command and the support for its infinity argument differ between base images, CodeReady Workspaces cannot insert this behavior automatically on its own. However, you can take advantage of this feature to, for example, start necessary servers with modified configurations, and so on. 4.1.5.4.13. Persistent Storage Components of any type can specify the custom volumes to be mounted on specific locations within the image. Note that the volume names are shared across all components and therefore this mechanism can also be used to share file systems between components. Example specifying volumes for dockerimage type: apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] volumes: - name: cache containerPath: /.cache Example specifying volumes for cheEditor / chePlugin type: apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: cheEditor alias: theia-editor id: eclipse/che-theia/ env: - name: HOME value: USD(CHE_PROJECTS_ROOT) volumes: - name: cache containerPath: /.cache Example specifying volumes for kubernetes / openshift type: apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: openshift alias: mongo reference: mongo-db.yaml volumes: - name: mongo-persistent-storage containerPath: /data/db 4.1.5.4.14.
Specifying container memory limit for components To specify a container(s) memory limit for dockerimage , chePlugin or cheEditor , use the memoryLimit parameter: components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/latest memoryLimit: 1Gi - alias: maven type: dockerimage image: quay.io/eclipse/che-java11-maven:nightly memoryLimit: 512M This limit will be applied to every container of the given component. For the cheEditor and chePlugin component types, RAM limits can be described in the plug-in descriptor file, typically named meta.yaml . If none of them are specified, system-wide defaults will be applied (see description of CHE_WORKSPACE_SIDECAR_DEFAULT__MEMORY__LIMIT__MB system property). 4.1.5.4.15. Specifying container memory request for components To specify a container(s) memory request for dockerimage , chePlugin or cheEditor , use the memoryRequest parameter: components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/latest memoryLimit: 1Gi memoryRequest: 512M - alias: maven type: dockerimage image: quay.io/eclipse/che-java11-maven:nightly memoryLimit: 512M memoryRequest: 256M This limit will be applied to every container of the given component. For the cheEditor and chePlugin component types, RAM requests can be described in the plug-in descriptor file, typically named meta.yaml . If none of them are specified, system-wide defaults are applied (see description of CHE_WORKSPACE_SIDECAR_DEFAULT__MEMORY__REQUEST__MB system property). 4.1.5.4.16. Specifying container CPU limit for components To specify a container(s) CPU limit for chePlugin , cheEditor or dockerimage use the cpuLimit parameter: components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/latest cpuLimit: 1.5 - alias: maven type: dockerimage image: quay.io/eclipse/che-java11-maven:nightly cpuLimit: 750m This limit will be applied to every container of the given component. For the cheEditor and chePlugin component types, CPU limits can be described in the plug-in descriptor file, typically named meta.yaml . If none of them are specified, system-wide defaults are applied (see description of CHE_WORKSPACE_SIDECAR_DEFAULT__CPU__LIMIT__CORES system property). 4.1.5.4.17. Specifying container CPU request for components To specify a container(s) CPU request for chePlugin , cheEditor or dockerimage use the cpuRequest parameter: components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/latest cpuLimit: 1.5 cpuRequest: 0.225 - alias: maven type: dockerimage image: quay.io/eclipse/che-java11-maven:nightly cpuLimit: 750m cpuRequest: 450m This limit will be applied to every container of the given component. For the cheEditor and chePlugin component types, CPU requests can be described in the plug-in descriptor file, typically named meta.yaml . If none of them are specified, system-wide defaults are applied (see description of CHE_WORKSPACE_SIDECAR_DEFAULT__CPU__REQUEST__CORES system property). 4.1.5.4.18. Environment variables Red Hat CodeReady Workspaces allows you to configure Docker containers by modifying the environment variables available in component's configuration. Environment variables are supported by the following component types: dockerimage , chePlugin , cheEditor , kubernetes , openshift . In case component has multiple containers, environment variables will be provisioned to each container. 
apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - type: cheEditor alias: theia-editor id: eclipse/che-theia/ memoryLimit: 2Gi env: - name: HOME value: USD(CHE_PROJECTS_ROOT) Note The variable expansion works between the environment variables, and it uses the Kubernetes convention for the variable references. The predefined variables are available for use in custom definitions. The following environment variables are pre-set by the CodeReady Workspaces server: CHE_PROJECTS_ROOT : The location of the projects directory (note that if the component does not mount the sources, the projects will not be accessible). CHE_WORKSPACE_LOGS_ROOT__DIR : The location of the logs common to all the components. If the component chooses to put logs into this directory, the log files are accessible from all other components. CHE_API_INTERNAL : The URL to the CodeReady Workspaces server API endpoint used for communication with the CodeReady Workspaces server. CHE_WORKSPACE_ID : The ID of the current workspace. CHE_WORKSPACE_NAME : The name of the current workspace. CHE_WORKSPACE_NAMESPACE : The CodeReady Workspaces project of the current workspace. This environment variable is the name of the user or organization that the workspace belongs to. Note that this is different from the OpenShift project to which the workspace is deployed. CHE_MACHINE_TOKEN : The token used to authenticate the request against the CodeReady Workspaces server. CHE_MACHINE_AUTH_SIGNATURE__PUBLIC__KEY : The public key used to secure the communication with the CodeReady Workspaces server. CHE_MACHINE_AUTH_SIGNATURE__ALGORITHM : The encryption algorithm used in the secured communication with the CodeReady Workspaces server. A devfile might need the CHE_PROJECTS_ROOT environment variable to locate the cloned projects in the component's container. More advanced devfiles might use the CHE_WORKSPACE_LOGS_ROOT__DIR environment variable to read the logs. The environment variables for securely accessing the CodeReady Workspaces server are out of scope for devfiles. These variables are available only to CodeReady Workspaces plug-ins, which use them for advanced use cases. 4.1.5.4.19. Endpoints Components of any type can specify the endpoints that the Docker image exposes. These endpoints can be made accessible to the users if the CodeReady Workspaces cluster is running using a Kubernetes ingress or an OpenShift route and to the other components within the workspace. You can create an endpoint for your application or database, if your application or database server is listening on a port and you need to be able to directly interact with it yourself or you allow other components to interact with it. 
Endpoints have several properties as shown in the following example: apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - name: GOCACHE value: /tmp/go-cache endpoints: - name: web port: 8080 attributes: discoverable: false public: true protocol: http - type: dockerimage image: postgres memoryLimit: 512Mi env: - name: POSTGRES_USER value: user - name: POSTGRES_PASSWORD value: password - name: POSTGRES_DB value: database endpoints: - name: postgres port: 5432 attributes: discoverable: true public: false Here, there are two Docker images, each defining a single endpoint. An endpoint is a port that can be made accessible inside the workspace or also publicly (for example, from the UI). Each endpoint has a name and a port, which is the port on which a certain server running inside the container is listening. The following are a few attributes that you can set on the endpoint: discoverable : If an endpoint is discoverable, it means that it can be accessed using its name as the hostname within the workspace containers (in the OpenShift terminology, a service is created for it with the provided name). public : The endpoint will be accessible outside of the workspace, too (such endpoint can be accessed from the CodeReady Workspaces user interface). Such endpoints are always published on port 80 or 443 (depending on whether tls is enabled in CodeReady Workspaces). protocol : For public endpoints the protocol is a hint to the UI on how to construct the URL for the endpoint access. Typical values are http , https , ws , wss . secure : A boolean value (defaulting to false ) specifying whether the endpoint is put behind a JWT proxy requiring a JWT workspace token to grant access. The JWT proxy is deployed in the same Pod as the server and assumes the server listens solely on the local loop-back interface, such as 127.0.0.1 . Warning Listening on any other interface than the local loop-back poses a security risk because such server is accessible without the JWT authentication within the cluster network on the corresponding IP addresses. path : The path portion of the URL to the endpoint. This defaults to / , meaning that the endpoint is assumed to be accessible at the web root of the server defined by the component. unsecuredPaths : A comma-separated list of endpoint paths that are to stay unsecured even if the secure attribute is set to true . cookiesAuthEnabled : When set to true (the default is false ), the JWT workspace token is automatically fetched and included in a workspace-specific cookie to allow requests to pass through the JWT proxy. Warning This setting potentially allows a CSRF attack when used in conjunction with a server using POST requests. When starting a new server within a component, CodeReady Workspaces automatically detects this, and the UI offers to expose this port as a public port automatically. This behavior is useful for debugging a web application. It is impossible to do this for servers, such as a database server, which start automatically at container start. For such components, specify the endpoints explicitly.
Example specifying endpoints for kubernetes / openshift and chePlugin / cheEditor types: apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: cheEditor alias: theia-editor id: eclipse/che-theia/ endpoints: - name: 'theia-extra-endpoint' port: 8880 attributes: discoverable: true public: true - type: chePlugin id: redhat/php/latest memoryLimit: 1Gi endpoints: - name: 'php-endpoint' port: 7777 - type: chePlugin alias: theia-editor id: eclipse/che-theia/ endpoints: - name: 'theia-extra-endpoint' port: 8880 attributes: discoverable: true public: true - type: openshift alias: webapp reference: webapp.yaml endpoints: - name: 'web' port: 8080 attributes: discoverable: false public: true protocol: http - type: openshift alias: mongo reference: mongo-db.yaml endpoints: - name: 'mongo-db' port: 27017 attributes: discoverable: true public: false 4.1.5.4.20. OpenShift resources To describe complex deployments, include references to OpenShift resource lists in the devfile. The OpenShift resource lists become a part of the workspace. Important CodeReady Workspaces merges all resources from the OpenShift resource lists into a single deployment. Be careful when designing such lists to avoid name conflicts and other problems. Table 4.1. Supported OpenShift resources Platform Supported resources OpenShift deployments , pods , services , persistent volume claims , secrets , ConfigMaps , Routes apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: kubernetes reference: ../relative/path/postgres.yaml The preceding component references a file that is relative to the location of the devfile itself. Meaning, this devfile is only loadable by a CodeReady Workspaces factory to which you supply the location of the devfile and therefore it is able to figure out the location of the referenced OpenShift resource list. The following is an example of the postgres.yaml file. apiVersion: v1 kind: List items: - apiVersion: v1 kind: Deployment metadata: name: postgres labels: app: postgres spec: template: metadata: name: postgres app: name: postgres spec: containers: - image: postgres name: postgres ports: - name: postgres containerPort: 5432 volumeMounts: - name: pg-storage mountPath: /var/lib/postgresql/data volumes: - name: pg-storage persistentVolumeClaim: claimName: pg-storage - apiVersion: v1 kind: Service metadata: name: postgres labels: app: postgres name: postgres spec: ports: - port: 5432 targetPort: 5432 selector: app: postgres - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pg-storage labels: app: postgres spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi For a basic example of a devfile with an associated OpenShift list, see web-nodejs-with-db-sample on redhat-developer GitHub. If you use generic or large resource lists from which you will only need a subset of resources, you can select particular resources from the list using a selector (which, as the usual OpenShift selectors, works on the labels of the resources in the list). 
apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: kubernetes reference: ../relative/path/postgres.yaml selector: app: postgres Additionally, you can modify the entrypoints (command and arguments) of the containers in the resource list. 4.1.5.5. Adding commands to a devfile A devfile allows you to specify commands that are available for execution in a workspace. Every command can contain a subset of actions, each related to a specific component in whose container it is executed. commands: - name: build actions: - type: exec component: mysql command: mvn clean workdir: /projects/spring-petclinic You can use commands to automate the workspace. You can define commands for building and testing your code, or cleaning the database. The following are two kinds of commands: CodeReady Workspaces-specific commands: You have full control over what component executes the command. Editor-specific commands: You can use the editor-specific command definitions (example: tasks.json and launch.json in Che-Theia, which is equivalent to how these files work in Visual Studio Code). 4.1.5.5.1. CodeReady Workspaces-specific commands Each CodeReady Workspaces-specific command features: An actions attribute that specifies a command to execute. A component attribute that specifies the container in which to execute the command. The commands are run using the default shell in the container. apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: dockerimage image: golang alias: go-cli memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - name: GOCACHE value: /tmp/go-cache commands: - name: compile and run actions: - type: exec component: go-cli command: "go get -d && go run main.go" workdir: "USD{CHE_PROJECTS_ROOT}/src/github.com/acme/my-go-project" Note A component to be used in a command must have an alias. This alias is used to reference the component in the command definition. Example: alias: go-cli in the component definition and component: go-cli in the command definition. This ensures that Red Hat CodeReady Workspaces can find the correct container to run the command in. A command can have only one action. 4.1.5.5.2. Editor-specific commands If the editor in the workspace supports it, the devfile can specify additional configuration in the editor-specific format. This is dependent on the integration code in the workspace editor itself and so is not a generic mechanism. However, the default Che-Theia editor within Red Hat CodeReady Workspaces is equipped to understand the tasks.json and launch.json files provided in the devfile. apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git commands: - name: tasks actions: - type: vscode-task referenceContent: > { "version": "2.0.0", "tasks": [ { "label": "create test file", "type": "shell", "command": "touch USD{workspaceFolder}/test.file" } ] } This example shows the association of a tasks.json file with a devfile.
Notice the vscode-task type that instructs the Che-Theia editor to interpret this command as a tasks definition and referenceContent attribute that contains the contents of the file itself. You can also save this file separately from the devfile and use reference attribute to specify a relative or absolute URL to it. In addition to the vscode-task commands, the Che-Theia editor understands vscode-launch type using which you can specify the start configurations. 4.1.5.5.3. Command preview URL It is possible to specify a preview URL for commands that expose web UI. This URL is offered for opening when the command is executed. commands: - name: tasks previewUrl: port: 8080 1 path: /myweb 2 actions: - type: exec component: go-cli command: "go run webserver.go" workdir: USD{CHE_PROJECTS_ROOT}/webserver 1 TCP port where the application listens. Mandatory parameter. 2 The path part of the URL to the UI. Optional parameter. The default is root ( / ). The example above opens http://__<server-domain>__/myweb , where <server-domain> is the URL to the dynamically created OpenShift Route. 4.1.5.5.3.1. Setting the default way of opening preview URLs By default, a notification that asks the user about the URL opening preference is displayed. To specify the preferred way of previewing a service URL: Open CodeReady Workspaces preferences in File Settings Open Preferences and find che.task.preview.notifications in the CodeReady Workspaces section. Choose from the list of possible values: on - enables a notification for asking the user about the URL opening preferences alwaysPreview - the preview URL opens automatically in the Preview panel as soon as a task is running alwaysGoTo - the preview URL opens automatically in a separate browser tab as soon as a task is running off - disables opening the preview URL (automatically and with a notification) 4.1.5.6. Adding attributes to a devfile Devfile attributes can be used to configure various features. 4.1.5.6.1. Attribute: editorFree When an editor is not specified in a devfile, a default is provided. When no editor is needed, use the editorFree attribute. The default value of false means that the devfile requests the provisioning of the default editor. Example of a devfile without an editor apiVersion: 1.0.0 metadata: name: petclinic-dev-environment components: - alias: myApp type: kubernetes reference: my-app.yaml attributes: editorFree: true 4.1.5.6.2. Attribute: persistVolumes (ephemeral mode) By default, volumes and PVCs specified in a devfile are bound to a host folder to persist data even after a container restart. To disable data persistence to make the workspace faster, such as when the volume back end is slow, modify the persistVolumes attribute in the devfile. The default value is true . Set to false to use emptyDir for configured volumes and PVC. Example of a devfile with ephemeral mode enabled apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/che-samples/web-java-spring-petclinic.git' attributes: persistVolumes: false 4.1.5.6.3. Attribute: asyncPersist (asynchronous storage) When persistVolumes is set to false (see above), the additional attribute asyncPersist can be set to true to enable asynchronous storage. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#configuring-storage-types.adoc for more details. 
Example of a devfile with asynchronous storage enabled apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/che-samples/web-java-spring-petclinic.git' attributes: persistVolumes: false asyncPersist: true 4.1.5.6.4. Attribute: mergePlugins This property can be set to manually control how plug-ins are included in the workspace. When the property mergePlugins is set to true , Che will attempt to avoid running multiple instances of the same container by combining plugins. The default value when this property is not included in a devfile is governed by the Che configuration property che.workspace.plugin_broker.default_merge_plugins ; adding the mergePlugins: false attribute to a devfile will disable plug-in merging for that workspace. Example of a devfile with plug-in merging disabled apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/che-samples/web-java-spring-petclinic.git' attributes: mergePlugins: false 4.1.6. Objects supported in Red Hat CodeReady Workspaces 2.15 The following table lists the objects that are partially supported in Red Hat CodeReady Workspaces 2.15: Object API Kubernetes Infra OpenShift Infra Notes Pod Kubernetes Yes Yes - Deployment Kubernetes Yes Yes - ConfigMap Kubernetes Yes Yes - PVC Kubernetes Yes Yes - Secret Kubernetes Yes Yes - Service Kubernetes Yes Yes - Ingress Kubernetes Yes No Minishift allows you to create Ingress and it works when the host is specified (OpenShift creates a route for it). But, the loadBalancer IP is not provisioned. To add Ingress support for the OpenShift infrastructure node, generate routes based on the provided Ingress. Route OpenShift No Yes The OpenShift recipe must be made compatible with the Kubernetes Infrastructure: OpenShift routes replaced on Ingresses. Template OpenShift Yes Yes The Kubernetes API does not support templates. A workspace with a template in the recipe starts successfully and the default parameters are resolved. 4.2. Authoring a devfile 2 When you author or edit a devfile for configuring a workspace, the devfile must meet the latest devfile 2 specification. Prerequisites An instance of CodeReady Workspaces with the Dev Workspace Operator enabled. See Installing CodeReady Workspaces . Procedure Follow the instructions in the Devfile User Guide . Additional resources For more information about devfile object schema and object properties, see the Introduction to Devfiles . | [
"apiVersion: 1.0.0 metadata: name: crw-in-crw-out",
"metadata: generatedName:",
"metadata: name:",
"apiVersion: 1.0.0 metadata: generateName: crw-",
"apiVersion: 1.0.0 metadata: name: minimal-workspace",
"apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/spring-projects/spring-petclinic.git' components: - type: chePlugin id: redhat/java/latest",
"apiVersion: 1.0.0 metadata: name: example-devfile projects: - name: <frontend> source: type: git location: https://github.com/ <github-organization> / <frontend> .git - name: <backend> clonePath: src/github.com/ <github-organization> / <backend> source: type: git location: https://github.com/ <github-organization> / <backend> .git",
"schemaVersion: 1.0.0",
"schemaVersion: 1.0.0 metadata: name: devfile-sample",
"schemaVersion: 1.0.0 metadata: generateName: devfile-sample-",
"source: type: git location: https://github.com/eclipse-che/che-server.git startPoint: main 1 tag: 7.34.0 commitId: 36fe587 branch: 7.34.x sparseCheckoutDir: core 2",
"source: type: zip location: http://host.net/path/project-src.zip",
"apiVersion: 1.0.0 metadata: name: my-project-dev projects: - name: my-project-resourse clonePath: resources/my-project source: type: zip location: http://host.net/path/project-res.zip - name: my-project source: type: git location: https://github.com/my-org/project.git branch: develop",
"components: - alias: theia-editor type: cheEditor id: eclipse/che-theia/next",
"components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/latest",
"components: - alias: exec-plugin type: chePlugin registryUrl: https://my-customregistry.com id: eclipse/che-machine-exec-plugin/latest",
"components: - alias: exec-plugin type: chePlugin reference: https://raw.githubusercontent.com.../plugin/1.0.1/meta.yaml",
"id: redhat/java/latest type: chePlugin preferences: java.jdt.ls.vmargs: '-noverify -Xmx1G -XX:+UseG1GC -XX:+UseStringDeduplication'",
"id: redhat/java/latest type: chePlugin preferences: go.lintFlags: [\"--enable-all\", \"--new\"]",
"components: - alias: mysql type: kubernetes reference: petclinic.yaml selector: app.kubernetes.io/name: mysql app.kubernetes.io/component: database app.kubernetes.io/part-of: petclinic",
"components: - alias: mysql type: kubernetes reference: petclinic.yaml referenceContent: | kind: List items: - apiVersion: v1 kind: Pod metadata: name: ws spec: containers: ... etc",
"components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml entrypoints: - parentName: mysqlServer command: ['sleep'] args: ['infinity'] - parentSelector: app: prometheus args: ['-f', '/opt/app/prometheus-config.yaml']",
"components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml env: - name: ENV_VAR value: value",
"components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml mountSources: true",
"components: - alias: maven type: dockerimage image: quay.io/eclipse/che-java11-maven:nightly volumes: - name: mavenrepo containerPath: /root/.m2 env: - name: ENV_VAR value: value endpoints: - name: maven-server port: 3101 attributes: protocol: http secure: 'true' public: 'true' discoverable: 'false' memoryLimit: 1536M memoryRequest: 256M command: ['tail'] args: ['-f', '/dev/null']",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi command: ['sleep', 'infinity']",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi command: ['sleep', 'infinity']",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] volumes: - name: cache containerPath: /.cache",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: cheEditor alias: theia-editor id: eclipse/che-theia/next env: - name: HOME value: USD(CHE_PROJECTS_ROOT) volumes: - name: cache containerPath: /.cache",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: openshift alias: mongo reference: mongo-db.yaml volumes: - name: mongo-persistent-storage containerPath: /data/db",
"components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/latest memoryLimit: 1Gi - alias: maven type: dockerimage image: quay.io/eclipse/che-java11-maven:nightly memoryLimit: 512M",
"components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/latest memoryLimit: 1Gi memoryRequest: 512M - alias: maven type: dockerimage image: quay.io/eclipse/che-java11-maven:nightly memoryLimit: 512M memoryRequest: 256M",
"components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/latest cpuLimit: 1.5 - alias: maven type: dockerimage image: quay.io/eclipse/che-java11-maven:nightly cpuLimit: 750m",
"components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/latest cpuLimit: 1.5 cpuRequest: 0.225 - alias: maven type: dockerimage image: quay.io/eclipse/che-java11-maven:nightly cpuLimit: 750m cpuRequest: 450m",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - type: cheEditor alias: theia-editor id: eclipse/che-theia/next memoryLimit: 2Gi env: - name: HOME value: USD(CHE_PROJECTS_ROOT)",
"apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - name: GOCACHE value: /tmp/go-cache endpoints: - name: web port: 8080 attributes: discoverable: false public: true protocol: http - type: dockerimage image: postgres memoryLimit: 512Mi env: - name: POSTGRES_USER value: user - name: POSTGRES_PASSWORD value: password - name: POSTGRES_DB value: database endpoints: - name: postgres port: 5432 attributes: discoverable: true public: false",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: cheEditor alias: theia-editor id: eclipse/che-theia/next endpoints: - name: 'theia-extra-endpoint' port: 8880 attributes: discoverable: true public: true - type: chePlugin id: redhat/php/latest memoryLimit: 1Gi endpoints: - name: 'php-endpoint' port: 7777 - type: chePlugin alias: theia-editor id: eclipse/che-theia/next endpoints: - name: 'theia-extra-endpoint' port: 8880 attributes: discoverable: true public: true - type: openshift alias: webapp reference: webapp.yaml endpoints: - name: 'web' port: 8080 attributes: discoverable: false public: true protocol: http - type: openshift alias: mongo reference: mongo-db.yaml endpoints: - name: 'mongo-db' port: 27017 attributes: discoverable: true public: false",
"apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: kubernetes reference: ../relative/path/postgres.yaml",
"apiVersion: v1 kind: List items: - apiVersion: v1 kind: Deployment metadata: name: postgres labels: app: postgres spec: template: metadata: name: postgres app: name: postgres spec: containers: - image: postgres name: postgres ports: - name: postgres containerPort: 5432 volumeMounts: - name: pg-storage mountPath: /var/lib/postgresql/data volumes: - name: pg-storage persistentVolumeClaim: claimName: pg-storage - apiVersion: v1 kind: Service metadata: name: postgres labels: app: postgres name: postgres spec: ports: - port: 5432 targetPort: 5432 selector: app: postgres - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pg-storage labels: app: postgres spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: kubernetes reference: ../relative/path/postgres.yaml selector: app: postgres",
"commands: - name: build actions: - type: exec component: mysql command: mvn clean workdir: /projects/spring-petclinic",
"apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: dockerimage image: golang alias: go-cli memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - name: GOCACHE value: /tmp/go-cache commands: - name: compile and run actions: - type: exec component: go-cli command: \"go get -d && go run main.go\" workdir: \"USD{CHE_PROJECTS_ROOT}/src/github.com/acme/my-go-project\"",
"apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git commands: - name: tasks actions: - type: vscode-task referenceContent: > { \"version\": \"2.0.0\", \"tasks\": [ { \"label\": \"create test file\", \"type\": \"shell\", \"command\": \"touch USD{workspaceFolder}/test.file\" } ] }",
"commands: - name: tasks previewUrl: port: 8080 1 path: /myweb 2 actions: - type: exec component: go-cli command: \"go run webserver.go\" workdir: USD{CHE_PROJECTS_ROOT}/webserver",
"apiVersion: 1.0.0 metadata: name: petclinic-dev-environment components: - alias: myApp type: kubernetes reference: my-app.yaml attributes: editorFree: true",
"apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/che-samples/web-java-spring-petclinic.git' attributes: persistVolumes: false",
"apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/che-samples/web-java-spring-petclinic.git' attributes: persistVolumes: false asyncPersist: true",
"apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/che-samples/web-java-spring-petclinic.git' attributes: mergePlugins: false"
] | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/end-user_guide/authoring-devfiles_crw |
8.3.3. Performing a Network Installation | 8.3.3. Performing a Network Installation When you start an installation with the askmethod or repo= options, you can install Red Hat Enterprise Linux from a network server using FTP, HTTP, HTTPS, or NFS protocols. Anaconda uses the same network connection to consult additional software repositories later in the installation process. If your system has more than one network device, anaconda presents you with a list of all available devices and prompts you to select one to use during installation. If your system only has a single network device, anaconda automatically selects it and does not present this dialog. Figure 8.6. Networking Device If you are not sure which device in the list corresponds to which physical socket on the system, select a device in the list then press the Identify button. The Identify NIC dialog appears. Figure 8.7. Identify NIC The sockets of most network devices feature an activity light (also called a link light ) - an LED that flashes to indicate that data is flowing through the socket. Anaconda can flash the activity light of the network device that you selected in the Networking Device dialog for up to 30 seconds. Enter the number of seconds that you require, then press OK . When anaconda finishes flashing the light, it returns you to the Networking Device dialog. When you select a network device, anaconda prompts you to choose how to configure TCP/IP: IPv4 options Dynamic IP configuration (DHCP) Anaconda uses DHCP running on the network to supply the network configuration automatically. Manual configuration Anaconda prompts you to enter the network configuration manually, including the IP address for this system, the netmask, the gateway address, and the DNS address. IPv6 options Automatic Anaconda uses router advertisement (RA) and DHCP for automatic configuration, based on the network environment. (Equivalent to the Automatic option in NetworkManager ) Automatic, DHCP only Anaconda does not use RA, but requests information from DHCPv6 directly to create a stateful configuration. (Equivalent to the Automatic, DHCP only option in NetworkManager ) Manual configuration Anaconda prompts you to enter the network configuration manually, including the IP address for this system, the netmask, the gateway address, and the DNS address. Anaconda supports the IPv4 and IPv6 protocols. However, if you configure an interface to use both IPv4 and IPv6, the IPv4 connection must succeed or the interface will not work, even if the IPv6 connection succeeds. Figure 8.8. Configure TCP/IP By default, anaconda uses DHCP to provide network settings automatically for IPv4 and automatic configuration to provide network settings for IPv6. If you choose to configure TCP/IP manually, anaconda prompts you to provide the details in the Manual TCP/IP Configuration dialog: Figure 8.9. Manual TCP/IP Configuration The dialog provides fields for IPv4 and IPv6 addresses and prefixes, depending on the protocols that you chose to configure manually, together with fields for the network gateway and name server. Enter the details for your network, then press OK . When the installation process completes, it will transfer these settings to your system. If you are installing via NFS, proceed to Section 8.3.4, "Installing via NFS" . If you are installing via Web or FTP, proceed to Section 8.3.5, "Installing via FTP, HTTP, or HTTPS" . 
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-begininstall-perform-nfs-x86 |
Chapter 8. Prometheus [monitoring.coreos.com/v1] | Chapter 8. Prometheus [monitoring.coreos.com/v1] Description Prometheus defines a Prometheus deployment. Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the Prometheus cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the Prometheus cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 8.1.1. .spec Description Specification of the desired behavior of the Prometheus cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalAlertManagerConfigs object AdditionalAlertManagerConfigs specifies a key of a Secret containing additional Prometheus Alertmanager configurations. The Alertmanager configurations are appended to the configuration generated by the Prometheus Operator. They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alertmanager_config The user is responsible for making sure that the configurations are valid Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible AlertManager configs are going to break Prometheus after the upgrade. additionalAlertRelabelConfigs object AdditionalAlertRelabelConfigs specifies a key of a Secret containing additional Prometheus alert relabel configurations. The alert relabel configurations are appended to the configuration generated by the Prometheus Operator. They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs The user is responsible for making sure that the configurations are valid Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible alert relabel configs are going to break Prometheus after the upgrade. additionalArgs array AdditionalArgs allows setting additional arguments for the 'prometheus' container. It is intended for e.g. activating hidden flags which are not supported by the dedicated configuration options yet. The arguments are passed as-is to the Prometheus container which may cause issues if they are invalid or not supported by the given Prometheus version. 
In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. additionalArgs[] object Argument as part of the AdditionalArgs list. additionalScrapeConfigs object AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations. Scrape configurations specified are appended to the configurations generated by the Prometheus Operator. Job configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config . As scrape configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible scrape configs are going to break Prometheus after the upgrade. affinity object Defines the Pods' affinity scheduling rules if specified. alerting object Defines the settings related to Alertmanager. allowOverlappingBlocks boolean AllowOverlappingBlocks enables vertical compaction and vertical query merge in Prometheus. Deprecated: this flag has no effect for Prometheus >= 2.39.0 where overlapping blocks are enabled by default. apiserverConfig object APIServerConfig allows specifying a host and auth methods to access the Kuberntees API server. If null, Prometheus is assumed to run inside of the cluster: it will discover the API servers automatically and use the Pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/. arbitraryFSAccessThroughSMs object When true, ServiceMonitor, PodMonitor and Probe object are forbidden to reference arbitrary files on the file system of the 'prometheus' container. When a ServiceMonitor's endpoint specifies a bearerTokenFile value (e.g. '/var/run/secrets/kubernetes.io/serviceaccount/token'), a malicious target can get access to the Prometheus service account's token in the Prometheus' scrape request. Setting spec.arbitraryFSAccessThroughSM to 'true' would prevent the attack. Users should instead provide the credentials using the spec.bearerTokenSecret field. baseImage string Deprecated: use 'spec.image' instead. bodySizeLimit string BodySizeLimit defines per-scrape on response body size. Only valid in Prometheus versions 2.45.0 and newer. configMaps array (string) ConfigMaps is a list of ConfigMaps in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods. Each ConfigMap is added to the StatefulSet definition as a volume named configmap-<configmap-name> . The ConfigMaps are mounted into /etc/prometheus/configmaps/<configmap-name> in the 'prometheus' container. containers array Containers allows injecting additional containers or modifying operator generated containers. This can be used to allow adding an authentication proxy to the Pods or to change the behavior of an operator generated container. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The names of containers managed by the operator are: * prometheus * config-reloader * thanos-sidecar Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. 
containers[] object A single application container that you want to run within a pod. disableCompaction boolean When true, the Prometheus compaction is disabled. enableAdminAPI boolean Enables access to the Prometheus web admin API. WARNING: Enabling the admin APIs enables mutating endpoints, to delete data, shutdown Prometheus, and more. Enabling this should be done with care and the user is advised to add additional authentication authorization via a proxy to ensure only clients authorized to perform these actions can do so. For more information: https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis enableFeatures array (string) Enable access to Prometheus feature flags. By default, no features are enabled. Enabling features which are disabled by default is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. For more information see https://prometheus.io/docs/prometheus/latest/feature_flags/ enableRemoteWriteReceiver boolean Enable Prometheus to be used as a receiver for the Prometheus remote write protocol. WARNING: This is not considered an efficient way of ingesting samples. Use it with caution for specific low-volume use cases. It is not suitable for replacing the ingestion via scraping and turning Prometheus into a push-based metrics collection system. For more information see https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver It requires Prometheus >= v2.33.0. enforcedBodySizeLimit string When defined, enforcedBodySizeLimit specifies a global limit on the size of uncompressed response body that will be accepted by Prometheus. Targets responding with a body larger than this many bytes will cause the scrape to fail. It requires Prometheus >= v2.28.0. enforcedKeepDroppedTargets integer When defined, enforcedKeepDroppedTargets specifies a global limit on the number of targets dropped by relabeling that will be kept in memory. The value overrides any spec.keepDroppedTargets set by ServiceMonitor, PodMonitor, Probe objects unless spec.keepDroppedTargets is greater than zero and less than spec.enforcedKeepDroppedTargets . It requires Prometheus >= v2.47.0. enforcedLabelLimit integer When defined, enforcedLabelLimit specifies a global limit on the number of labels per sample. The value overrides any spec.labelLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.labelLimit is greater than zero and less than spec.enforcedLabelLimit . It requires Prometheus >= v2.27.0. enforcedLabelNameLengthLimit integer When defined, enforcedLabelNameLengthLimit specifies a global limit on the length of labels name per sample. The value overrides any spec.labelNameLengthLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.labelNameLengthLimit is greater than zero and less than spec.enforcedLabelNameLengthLimit . It requires Prometheus >= v2.27.0. enforcedLabelValueLengthLimit integer When not null, enforcedLabelValueLengthLimit defines a global limit on the length of labels value per sample. The value overrides any spec.labelValueLengthLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.labelValueLengthLimit is greater than zero and less than spec.enforcedLabelValueLengthLimit . It requires Prometheus >= v2.27.0. enforcedNamespaceLabel string When not empty, a label will be added to 1. All metrics scraped from ServiceMonitor , PodMonitor , Probe and ScrapeConfig objects. 2. 
All metrics generated from recording rules defined in PrometheusRule objects. 3. All alerts generated from alerting rules defined in PrometheusRule objects. 4. All vector selectors of PromQL expressions defined in PrometheusRule objects. The label will not added for objects referenced in spec.excludedFromEnforcement . The label's name is this field's value. The label's value is the namespace of the ServiceMonitor , PodMonitor , Probe or PrometheusRule object. enforcedSampleLimit integer When defined, enforcedSampleLimit specifies a global limit on the number of scraped samples that will be accepted. This overrides any spec.sampleLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.sampleLimit is greater than zero and less than than spec.enforcedSampleLimit . It is meant to be used by admins to keep the overall number of samples/series under a desired limit. enforcedTargetLimit integer When defined, enforcedTargetLimit specifies a global limit on the number of scraped targets. The value overrides any spec.targetLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.targetLimit is greater than zero and less than spec.enforcedTargetLimit . It is meant to be used by admins to to keep the overall number of targets under a desired limit. evaluationInterval string Interval between rule evaluations. Default: "30s" excludedFromEnforcement array List of references to PodMonitor, ServiceMonitor, Probe and PrometheusRule objects to be excluded from enforcing a namespace label of origin. It is only applicable if spec.enforcedNamespaceLabel set to true. excludedFromEnforcement[] object ObjectReference references a PodMonitor, ServiceMonitor, Probe or PrometheusRule object. exemplars object Exemplars related settings that are runtime reloadable. It requires to enable the exemplar-storage feature flag to be effective. externalLabels object (string) The labels to add to any time series or alerts when communicating with external systems (federation, remote storage, Alertmanager). Labels defined by spec.replicaExternalLabelName and spec.prometheusExternalLabelName take precedence over this list. externalUrl string The external URL under which the Prometheus service is externally available. This is necessary to generate correct URLs (for instance if Prometheus is accessible behind an Ingress resource). hostAliases array Optional list of hosts and IPs that will be injected into the Pod's hosts file if specified. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostNetwork boolean Use the host's network namespace if true. Make sure to understand the security implications if you want to enable it ( https://kubernetes.io/docs/concepts/configuration/overview/ ). When hostNetwork is enabled, this will set the DNS policy to ClusterFirstWithHostNet automatically. ignoreNamespaceSelectors boolean When true, spec.namespaceSelector from all PodMonitor, ServiceMonitor and Probe objects will be ignored. They will only discover targets within the namespace of the PodMonitor, ServiceMonitor and Probe object. image string Container image name for Prometheus. If specified, it takes precedence over the spec.baseImage , spec.tag and spec.sha fields. Specifying spec.version is still necessary to ensure the Prometheus Operator knows which version of Prometheus is being configured. 
If neither spec.image nor spec.baseImage are defined, the operator will use the latest upstream version of Prometheus available at the time when the operator was released. imagePullPolicy string Image pull policy for the 'prometheus', 'init-config-reloader' and 'config-reloader' containers. See https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy for more details. imagePullSecrets array An optional list of references to Secrets in the same namespace to use for pulling images from registries. See http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows injecting initContainers to the Pod definition. Those can be used to e.g. fetch secrets for injection into the Prometheus configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify an operator generated init containers if they share the same name and modifications are done via a strategic merge patch. The names of init container name managed by the operator are: * init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. keepDroppedTargets integer Per-scrape limit on the number of targets dropped by relabeling that will be kept in memory. 0 means no limit. It requires Prometheus >= v2.47.0. labelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.45.0 and newer. labelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.45.0 and newer. labelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.45.0 and newer. listenLocal boolean When true, the Prometheus server listens on the loopback address instead of the Pod IP's address. logFormat string Log format for Log level for Prometheus and the config-reloader sidecar. logLevel string Log level for Prometheus and the config-reloader sidecar. minReadySeconds integer Minimum number of seconds for which a newly created Pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field from kubernetes 1.22 until 1.24 which requires enabling the StatefulSetMinReadySeconds feature gate. nodeSelector object (string) Defines on which Nodes the Pods are scheduled. overrideHonorLabels boolean When true, Prometheus resolves label conflicts by renaming the labels in the scraped data to "exported_<label value>" for all targets created from service and pod monitors. Otherwise the HonorLabels field of the service or pod monitor applies. overrideHonorTimestamps boolean When true, Prometheus ignores the timestamps for all the targets created from service and pod monitors. Otherwise the HonorTimestamps field of the service or pod monitor applies. 
paused boolean When a Prometheus deployment is paused, no actions except for deletion will be performed on the underlying objects. persistentVolumeClaimRetentionPolicy object The field controls if and how PVCs are deleted during the lifecycle of a StatefulSet. The default behavior is all PVCs are retained. This is an alpha field from kubernetes 1.23 until 1.26 and a beta field from 1.26. It requires enabling the StatefulSetAutoDeletePVC feature gate. podMetadata object PodMetadata configures labels and annotations which are propagated to the Prometheus pods. The following items are reserved and cannot be overridden: * "prometheus" label, set to the name of the Prometheus object. * "app.kubernetes.io/instance" label, set to the name of the Prometheus object. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "prometheus". * "app.kubernetes.io/version" label, set to the Prometheus version. * "operator.prometheus.io/name" label, set to the name of the Prometheus object. * "operator.prometheus.io/shard" label, set to the shard number of the Prometheus object. * "kubectl.kubernetes.io/default-container" annotation, set to "prometheus". podMonitorNamespaceSelector object Namespaces to match for PodMonitors discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. podMonitorSelector object Experimental PodMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. podTargetLabels array (string) PodTargetLabels are appended to the spec.podTargetLabels field of all PodMonitor and ServiceMonitor objects. portName string Port name used for the pods and governing service. Default: "web" priorityClassName string Priority class assigned to the Pods. probeNamespaceSelector object Experimental Namespaces to match for Probe discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. probeSelector object Experimental Probes to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. prometheusExternalLabelName string Name of Prometheus external label used to denote the Prometheus instance name. The external label will not be added when the field is set to the empty string ( "" ). 
Default: "prometheus" prometheusRulesExcludedFromEnforce array Defines the list of PrometheusRule objects to which the namespace label enforcement doesn't apply. This is only relevant when spec.enforcedNamespaceLabel is set to true. Deprecated: use spec.excludedFromEnforcement instead. prometheusRulesExcludedFromEnforce[] object PrometheusRuleExcludeConfig enables users to configure excluded PrometheusRule names and their namespaces to be ignored while enforcing namespace label for alerts and metrics. query object QuerySpec defines the configuration of the Promethus query service. queryLogFile string queryLogFile specifies where the file to which PromQL queries are logged. If the filename has an empty path, e.g. 'query.log', The Prometheus Pods will mount the file into an emptyDir volume at /var/log/prometheus . If a full path is provided, e.g. '/var/log/prometheus/query.log', you must mount a volume in the specified directory and it must be writable. This is because the prometheus container runs with a read-only root filesystem for security reasons. Alternatively, the location can be set to a standard I/O stream, e.g. /dev/stdout , to log query information to the default Prometheus log stream. reloadStrategy string Defines the strategy used to reload the Prometheus configuration. If not specified, the configuration is reloaded using the /-/reload HTTP endpoint. remoteRead array Defines the list of remote read configurations. remoteRead[] object RemoteReadSpec defines the configuration for Prometheus to read back samples from a remote endpoint. remoteWrite array Defines the list of remote write configurations. remoteWrite[] object RemoteWriteSpec defines the configuration to write samples from Prometheus to a remote endpoint. replicaExternalLabelName string Name of Prometheus external label used to denote the replica name. The external label will not be added when the field is set to the empty string ( "" ). Default: "prometheus_replica" replicas integer Number of replicas of each shard to deploy for a Prometheus deployment. spec.replicas multiplied by spec.shards is the total number of Pods created. Default: 1 resources object Defines the resources requests and limits of the 'prometheus' container. retention string How long to retain the Prometheus data. Default: "24h" if spec.retention and spec.retentionSize are empty. retentionSize string Maximum number of bytes used by the Prometheus data. routePrefix string The route prefix Prometheus registers HTTP handlers for. This is useful when using spec.externalURL , and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example for use with kubectl proxy . ruleNamespaceSelector object Namespaces to match for PrometheusRule discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. ruleSelector object PrometheusRule objects to be selected for rule evaluation. An empty label selector matches all objects. A null label selector matches no objects. rules object Defines the configuration of the Prometheus rules' engine. sampleLimit integer SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. Only valid in Prometheus versions 2.45.0 and newer. scrapeConfigNamespaceSelector object Namespaces to match for ScrapeConfig discovery. An empty label selector matches all namespaces. A null label selector matches the current current namespace only. 
scrapeConfigSelector object Experimental ScrapeConfigs to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. scrapeInterval string Interval between consecutive scrapes. Default: "30s" scrapeTimeout string Number of seconds to wait until a scrape request times out. secrets array (string) Secrets is a list of Secrets in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods. Each Secret is added to the StatefulSet definition as a volume named secret-<secret-name> . The Secrets are mounted into /etc/prometheus/secrets/<secret-name> in the 'prometheus' container. securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Prometheus Pods. serviceMonitorNamespaceSelector object Namespaces to match for ServicedMonitors discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. serviceMonitorSelector object ServiceMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. sha string Deprecated: use 'spec.image' instead. The image's digest can be specified as part of the image name. shards integer EXPERIMENTAL: Number of shards to distribute targets onto. spec.replicas multiplied by spec.shards is the total number of Pods created. Note that scaling down shards will not reshard data onto remaining instances, it must be manually moved. Increasing shards will not reshard data either but it will continue to be available from the same instances. To query globally, use Thanos sidecar and Thanos querier or remote write data to a central location. Sharding is performed on the content of the address target meta-label for PodMonitors and ServiceMonitors and _param_target_ for Probes. Default: 1 storage object Storage defines the storage used by Prometheus. tag string Deprecated: use 'spec.image' instead. The image's tag can be specified as part of the image name. targetLimit integer TargetLimit defines a limit on the number of scraped targets that will be accepted. Only valid in Prometheus versions 2.45.0 and newer. thanos object Defines the configuration of the optional Thanos sidecar. 
This section is experimental, it may change significantly without deprecation notice in any release. tolerations array Defines the Pods' tolerations if specified. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array Defines the pod's topology spread constraints if specified. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. tracingConfig object EXPERIMENTAL: TracingConfig configures tracing in Prometheus. This is an experimental feature, it may change in any upcoming release in a breaking way. tsdb object Defines the runtime reloadable configuration of the timeseries database (TSDB). version string Version of Prometheus being deployed. The operator uses this information to generate the Prometheus StatefulSet + configuration files. If not specified, the operator assumes the latest upstream version of Prometheus available at the time when the version of the operator was released. volumeMounts array VolumeMounts allows the configuration of additional VolumeMounts. VolumeMounts will be appended to other VolumeMounts in the 'prometheus' container, that are generated as a result of StorageSpec objects. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array Volumes allows the configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. walCompression boolean Configures compression of the write-ahead log (WAL) using Snappy. WAL compression is enabled by default for Prometheus >= 2.20.0 Requires Prometheus v2.11.0 and above. web object Defines the configuration of the Prometheus web server. 8.1.2. .spec.additionalAlertManagerConfigs Description AdditionalAlertManagerConfigs specifies a key of a Secret containing additional Prometheus Alertmanager configurations. The Alertmanager configurations are appended to the configuration generated by the Prometheus Operator. They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alertmanager_config The user is responsible for making sure that the configurations are valid Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible AlertManager configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.3. .spec.additionalAlertRelabelConfigs Description AdditionalAlertRelabelConfigs specifies a key of a Secret containing additional Prometheus alert relabel configurations. The alert relabel configurations are appended to the configuration generated by the Prometheus Operator. 
They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs The user is responsible for making sure that the configurations are valid Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible alert relabel configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.4. .spec.additionalArgs Description AdditionalArgs allows setting additional arguments for the 'prometheus' container. It is intended for e.g. activating hidden flags which are not supported by the dedicated configuration options yet. The arguments are passed as-is to the Prometheus container which may cause issues if they are invalid or not supported by the given Prometheus version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. Type array 8.1.5. .spec.additionalArgs[] Description Argument as part of the AdditionalArgs list. Type object Required name Property Type Description name string Name of the argument, e.g. "scrape.discovery-reload-interval". value string Argument value, e.g. 30s. Can be empty for name-only arguments (e.g. --storage.tsdb.no-lockfile) 8.1.6. .spec.additionalScrapeConfigs Description AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations. Scrape configurations specified are appended to the configurations generated by the Prometheus Operator. Job configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config . As scrape configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible scrape configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.7. .spec.affinity Description Defines the Pods' affinity scheduling rules if specified. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 8.1.8. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. 
Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 8.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 8.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 8.1.11. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 8.1.12. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 8.1.13. 
.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.14. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 8.1.15. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 8.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 8.1.18. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. 
matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 8.1.19. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 8.1.20. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.21. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 8.1.22. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.23. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 8.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 8.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 8.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. 
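For example, a label query of this kind could select pods that carry an assumed app.kubernetes.io/name: prometheus label (the label key and value are illustrative only):
labelSelector:
  matchExpressions:
  - key: app.kubernetes.io/name   # assumed pod label
    operator: In
    values:
    - prometheus
  # equivalently, matchLabels: {app.kubernetes.io/name: prometheus}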
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.28. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.29. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.30. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.31. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.32. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 8.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.36. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.37. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.38. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.39. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.40. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.41. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. 
preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 8.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 8.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 8.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.46. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.47. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.48. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.49. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.50. 
.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 8.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.54. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.55. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.56. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.57. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.58. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.59. .spec.alerting Description Defines the settings related to Alertmanager. Type object Required alertmanagers Property Type Description alertmanagers array AlertmanagerEndpoints Prometheus should fire alerts against. 
alertmanagers[] object AlertmanagerEndpoints defines a selection of a single Endpoints object containing Alertmanager IPs to fire alerts against. 8.1.60. .spec.alerting.alertmanagers Description AlertmanagerEndpoints Prometheus should fire alerts against. Type array 8.1.61. .spec.alerting.alertmanagers[] Description AlertmanagerEndpoints defines a selection of a single Endpoints object containing Alertmanager IPs to fire alerts against. Type object Required name namespace port Property Type Description apiVersion string Version of the Alertmanager API that Prometheus uses to send alerts. It can be "v1" or "v2". authorization object Authorization section for Alertmanager. Cannot be set at the same time as basicAuth , bearerTokenFile or sigv4 . basicAuth object BasicAuth configuration for Alertmanager. Cannot be set at the same time as bearerTokenFile , authorization or sigv4 . bearerTokenFile string File to read bearer token for Alertmanager. Cannot be set at the same time as basicAuth , authorization , or sigv4 . Deprecated: this will be removed in a future release. Prefer using authorization . enableHttp2 boolean Whether to enable HTTP2. name string Name of the Endpoints object in the namespace. namespace string Namespace of the Endpoints object. pathPrefix string Prefix for the HTTP path alerts are pushed to. port integer-or-string Port on which the Alertmanager API is exposed. scheme string Scheme to use when firing alerts. sigv4 object Sigv4 allows configuring AWS's Signature Verification 4 for the URL. It requires Prometheus >= v2.48.0. Cannot be set at the same time as basicAuth , bearerTokenFile or authorization . timeout string Timeout is a per-target Alertmanager timeout when pushing alerts. tlsConfig object TLS Config to use for Alertmanager. 8.1.62. .spec.alerting.alertmanagers[].authorization Description Authorization section for Alertmanager. Cannot be set at the same time as basicAuth , bearerTokenFile or sigv4 . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.63. .spec.alerting.alertmanagers[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.64. .spec.alerting.alertmanagers[].basicAuth Description BasicAuth configuration for Alertmanager. Cannot be set at the same time as bearerTokenFile , authorization or sigv4 . Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 8.1.65. .spec.alerting.alertmanagers[].basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.66. .spec.alerting.alertmanagers[].basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.67. .spec.alerting.alertmanagers[].sigv4 Description Sigv4 allows configuring AWS's Signature Verification 4 for the URL. It requires Prometheus >= v2.48.0. Cannot be set at the same time as basicAuth , bearerTokenFile or authorization . Type object Property Type Description accessKey object AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. profile string Profile is the named AWS profile used to authenticate. region string Region is the AWS region. If blank, the region from the default credentials chain is used. roleArn string RoleArn is the AWS role ARN used to authenticate. secretKey object SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. 8.1.68. .spec.alerting.alertmanagers[].sigv4.accessKey Description AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.69. .spec.alerting.alertmanagers[].sigv4.secretKey Description SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.70. .spec.alerting.alertmanagers[].tlsConfig Description TLS Config to use for Alertmanager. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.71. .spec.alerting.alertmanagers[].tlsConfig.ca Description Certificate authority used when verifying server certificates.
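For example, the CA certificate can be referenced from either a ConfigMap or a Secret; the object name and key below are placeholders:
tlsConfig:
  ca:
    configMap:
      name: alertmanager-ca    # placeholder ConfigMap name
      key: ca.crt              # placeholder key within the ConfigMap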
Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.72. .spec.alerting.alertmanagers[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.73. .spec.alerting.alertmanagers[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.74. .spec.alerting.alertmanagers[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.75. .spec.alerting.alertmanagers[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.76. .spec.alerting.alertmanagers[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.77. .spec.alerting.alertmanagers[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.78. .spec.apiserverConfig Description APIServerConfig allows specifying a host and auth methods to access the Kubernetes API server. If null, Prometheus is assumed to run inside of the cluster: it will discover the API servers automatically and use the Pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/. Type object Required host Property Type Description authorization object Authorization section for the API server. Cannot be set at the same time as basicAuth , bearerToken , or bearerTokenFile . basicAuth object BasicAuth configuration for the API server.
Cannot be set at the same time as authorization , bearerToken , or bearerTokenFile . bearerToken string Warning: this field shouldn't be used because the token value appears in clear-text. Prefer using authorization . Deprecated: this will be removed in a future release. bearerTokenFile string File to read bearer token for accessing apiserver. Cannot be set at the same time as basicAuth , authorization , or bearerToken . Deprecated: this will be removed in a future release. Prefer using authorization . host string Kubernetes API address consisting of a hostname or IP address followed by an optional port number. tlsConfig object TLS Config to use for the API server. 8.1.79. .spec.apiserverConfig.authorization Description Authorization section for the API server. Cannot be set at the same time as basicAuth , bearerToken , or bearerTokenFile . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. credentialsFile string File to read a secret from, mutually exclusive with credentials . type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.80. .spec.apiserverConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.81. .spec.apiserverConfig.basicAuth Description BasicAuth configuration for the API server. Cannot be set at the same time as authorization , bearerToken , or bearerTokenFile . Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 8.1.82. .spec.apiserverConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.83. .spec.apiserverConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.84. .spec.apiserverConfig.tlsConfig Description TLS Config to use for the API server. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. 
cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.85. .spec.apiserverConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.86. .spec.apiserverConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.87. .spec.apiserverConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.88. .spec.apiserverConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.89. .spec.apiserverConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.90. .spec.apiserverConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.91. .spec.apiserverConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.92. 
.spec.arbitraryFSAccessThroughSMs Description When true, ServiceMonitor, PodMonitor and Probe objects are forbidden to reference arbitrary files on the file system of the 'prometheus' container. When a ServiceMonitor's endpoint specifies a bearerTokenFile value (e.g. '/var/run/secrets/kubernetes.io/serviceaccount/token'), a malicious target can get access to the Prometheus service account's token in the Prometheus' scrape request. Setting spec.arbitraryFSAccessThroughSM to 'true' would prevent the attack. Users should instead provide the credentials using the spec.bearerTokenSecret field. Type object Property Type Description deny boolean 8.1.93. .spec.containers Description Containers allows injecting additional containers or modifying operator generated containers. This can be used to allow adding an authentication proxy to the Pods or to change the behavior of an operator generated container. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The names of containers managed by the operator are: * prometheus * config-reloader * thanos-sidecar Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 8.1.94. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.
envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the init container. Instead, the init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. 
More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 8.1.95. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 8.1.96. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. 
If a variable cannot be resolved, the reference in the input string will be unchanged. Double USD symbols (USD USD) are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USD USD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 8.1.97. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 8.1.98. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.99. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.100. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.101. .spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.102. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting.
When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 8.1.103. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 8.1.104. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 8.1.105. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 8.1.106. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 8.1.107. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.108. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. 
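As a sketch of the env and envFrom fields described above, a hypothetical sidecar injected through spec.containers could pull one variable from the Pod metadata, one from a Secret, and a whole set from a ConfigMap. The container name, image, Secret and ConfigMap names are all assumptions for this example; the fragment shows only the relevant part of a Prometheus spec.

spec:
  containers:
  - name: log-shipper                      # hypothetical extra sidecar
    image: registry.example.com/tools/log-shipper:1.0
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace    # field of the pod
    - name: SHIPPER_TOKEN
      valueFrom:
        secretKeyRef:
          name: shipper-credentials        # assumed Secret
          key: token
          optional: false
    envFrom:
    - prefix: SHIPPER_                     # prepended to every key in the ConfigMap
      configMapRef:
        name: shipper-config               # assumed ConfigMap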
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.109. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.110. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.111. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.112. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.113. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.114. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. 
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.115. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.116. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.117. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.118. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.119. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. 
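The lifecycle handlers above (postStart and preStop, each with exec, httpGet, or the deprecated tcpSocket) could be attached to such an injected container as follows; the command, path and port are placeholders.

spec:
  containers:
  - name: log-shipper                      # hypothetical sidecar from the earlier sketch
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "touch /tmp/started"]   # calls a shell explicitly, since exec does not run one
      preStop:
        httpGet:
          path: /drain                     # assumed graceful-shutdown endpoint
          port: 8080
          scheme: HTTP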
terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.120. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.121. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.122. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.123. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.124. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.125. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. 
port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.126. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 8.1.127. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 8.1.128. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. 
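Putting the port and probe schemas above together, a minimal sketch for an injected container might declare a named port, an HTTP liveness probe and a TCP readiness probe. The port number, path and timings are assumptions.

spec:
  containers:
  - name: log-shipper                # hypothetical sidecar
    ports:
    - name: metrics                  # IANA_SVC_NAME-style port name
      containerPort: 8080
      protocol: TCP
    livenessProbe:
      httpGet:
        path: /healthz               # assumed health endpoint
        port: metrics
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3
      timeoutSeconds: 1
    readinessProbe:
      tcpSocket:
        port: metrics
      periodSeconds: 5
      successThreshold: 1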
Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.129. .spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.130. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.131. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.132. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.133. .spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.134. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.135. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 8.1.136. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 8.1.137. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. 
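For example, a container entry might declare requests and limits like this; the quantities are placeholders, and Kubernetes quantity strings such as 500m or 128Mi are accepted for the integer-or-string fields.

spec:
  containers:
  - name: log-shipper
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi

The claims field is only relevant when the DynamicResourceAllocation feature gate is enabled and is omitted here.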
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.138. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.139. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.140. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 8.1.141. .spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 8.1.142. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 8.1.143. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. 
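Taken together, the securityContext fields above allow a locked-down container definition such as the following sketch; the UID is an example value, and several of these fields cannot be set when spec.os.name is windows, as the descriptions above note.

spec:
  containers:
  - name: log-shipper
    securityContext:
      runAsNonRoot: true
      runAsUser: 65534               # example non-root UID
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault         # use the container runtime's default seccomp profile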
The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 8.1.144. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 8.1.145. .spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.146. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.147. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.148. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.149. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.150. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.151. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.152. 
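A startupProbe is useful when a container needs a long warm-up before liveness checks should apply, because the other probes are held off until it succeeds. The sketch below tolerates up to 300 seconds of start-up time (failureThreshold multiplied by periodSeconds) against an assumed /healthz endpoint.

spec:
  containers:
  - name: log-shipper
    startupProbe:
      httpGet:
        path: /healthz               # assumed endpoint
        port: 8080
      failureThreshold: 30           # with periodSeconds, allows up to 300s of startup
      periodSeconds: 10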
.spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 8.1.153. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 8.1.154. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 8.1.155. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.156. .spec.excludedFromEnforcement Description List of references to PodMonitor, ServiceMonitor, Probe and PrometheusRule objects to be excluded from enforcing a namespace label of origin. It is only applicable if spec.enforcedNamespaceLabel set to true. Type array 8.1.157. .spec.excludedFromEnforcement[] Description ObjectReference references a PodMonitor, ServiceMonitor, Probe or PrometheusRule object. Type object Required namespace resource Property Type Description group string Group of the referent. When not specified, it defaults to monitoring.coreos.com name string Name of the referent. When not set, all resources in the namespace are matched. namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resource string Resource of the referent. 8.1.158. .spec.exemplars Description Exemplars related settings that are runtime reloadable. It requires to enable the exemplar-storage feature flag to be effective. Type object Property Type Description maxSize integer Maximum number of exemplars stored in memory for all series. exemplar-storage itself must be enabled using the spec.enableFeature option for exemplars to be scraped in the first place. If not set, Prometheus uses its default value. A value of zero or less than zero disables the storage. 8.1.159. .spec.hostAliases Description Optional list of hosts and IPs that will be injected into the Pod's hosts file if specified. Type array 8.1.160. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 8.1.161. 
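Several of the Prometheus-level fields above are straightforward to combine in one manifest. In the sketch below, the exemplar count, IP address, hostname and the referenced ServiceMonitor are all illustrative; spec.enableFeatures is used here as the feature-flag list referred to in the exemplars description, and excludedFromEnforcement is only meaningful when spec.enforcedNamespaceLabel is set.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  enableFeatures:
  - exemplar-storage                 # required for the exemplars settings to take effect
  exemplars:
    maxSize: 100000                  # maximum exemplars kept in memory for all series
  hostAliases:
  - ip: 192.0.2.10                   # example address from a documentation range
    hostnames:
    - alerts.internal.example.com
  excludedFromEnforcement:
  - group: monitoring.coreos.com
    resource: servicemonitors        # assumed resource name for ServiceMonitor objects
    namespace: openshift-example
    name: cluster-wide-monitor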
.spec.imagePullSecrets Description An optional list of references to Secrets in the same namespace to use for pulling images from registries. See http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array 8.1.162. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.163. .spec.initContainers Description InitContainers allows injecting initContainers to the Pod definition. Those can be used to e.g. fetch secrets for injection into the Prometheus configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify an operator generated init containers if they share the same name and modifications are done via a strategic merge patch. The names of init container name managed by the operator are: * init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 8.1.164. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. 
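The imagePullSecrets and initContainers fields above can be combined, for example, to pull a custom init image from a private registry and fetch configuration material before Prometheus starts. Every name below (Secret, image, command, volume) is a placeholder, and the same support caveat applies as for spec.containers: modifying the operator-managed init-config-reloader container is unsupported.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  imagePullSecrets:
  - name: private-registry-pull      # assumed pull Secret in the same namespace
  initContainers:
  - name: fetch-extra-config         # additional init container, not an override
    image: registry.example.com/tools/config-fetcher:1.2
    command: ["/bin/sh", "-c", "fetch --out /fetched"]   # placeholder command
    volumeMounts:
    - name: fetched-config           # assumes a matching entry in spec.volumes
      mountPath: /fetched

Keep in mind that any error in an init container causes the Pod to be restarted.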
envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the init container. Instead, the init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. 
More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 8.1.165. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 8.1.166. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. 
value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 8.1.167. .spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 8.1.168. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.169. .spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.170. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.171. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.172. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 8.1.173. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 8.1.174. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 8.1.175. .spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 8.1.176. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 8.1.177. .spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified.
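The envFrom and lifecycle fields above can be combined on any entry of spec.initContainers. The following is a minimal, illustrative sketch (not taken from this API reference) showing an init container that populates its environment from a ConfigMap through envFrom and runs a postStart exec hook; the container name, image, and ConfigMap name are hypothetical placeholders.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  initContainers:
  - name: init-setup                                  # hypothetical init container
    image: registry.example.com/toolbox:1.0           # hypothetical image
    envFrom:
    - prefix: INIT_                                   # every ConfigMap key is injected as INIT_<key>
      configMapRef:
        name: init-env                                # hypothetical ConfigMap in the same namespace
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]

Because the value associated with the last source takes precedence when a key appears in more than one source, and explicit env entries override keys coming from envFrom, a distinct prefix such as INIT_ is a simple way to avoid collisions.
8.1.178.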
.spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.179. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.180. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.181. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.182. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.183. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.184. 
.spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.185. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.186. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.187. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.188. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.189. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. 
tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.190. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.191. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.192. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.193. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.194. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.195. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. 
Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.196. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 8.1.197. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 8.1.198. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. 
spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.199. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.200. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.201. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.202. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.203. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.204. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.205. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 8.1.206. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired.
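As an illustration of the probe and resize fields documented above, the following sketch (hypothetical names and image, not part of this API reference) attaches an HTTP readiness probe and a CPU resize policy to a sidecar-style init container. Note that Kubernetes only accepts probes on init containers that set restartPolicy: Always, and in-place resizing requires the InPlacePodVerticalScaling feature gate.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  initContainers:
  - name: init-proxy                                  # hypothetical sidecar-style init container
    image: registry.example.com/proxy:1.0             # hypothetical image
    restartPolicy: Always                             # required for probes on an init container
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired                      # resize CPU in place without restarting the container

8.1.207.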
.spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.208. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.209. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.210. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. 
Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 8.1.211. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 8.1.212. .spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 8.1.213. .spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. 
Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 8.1.214. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 8.1.215. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. 
tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.216. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.217. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.218. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.219. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.220. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.221. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. 
Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.222. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 8.1.223. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 8.1.224. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 8.1.225. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.226. .spec.persistentVolumeClaimRetentionPolicy Description The field controls if and how PVCs are deleted during the lifecycle of a StatefulSet. The default behavior is all PVCs are retained. This is an alpha field from kubernetes 1.23 until 1.26 and a beta field from 1.26. It requires enabling the StatefulSetAutoDeletePVC feature gate. Type object Property Type Description whenDeleted string WhenDeleted specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is deleted. The default policy of Retain causes PVCs to not be affected by StatefulSet deletion. The Delete policy causes those PVCs to be deleted. whenScaled string WhenScaled specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is scaled down. The default policy of Retain causes PVCs to not be affected by a scaledown. The Delete policy causes the associated PVCs for any excess pods above the replica count to be deleted. 8.1.227. .spec.podMetadata Description PodMetadata configures labels and annotations which are propagated to the Prometheus pods. The following items are reserved and cannot be overridden: * "prometheus" label, set to the name of the Prometheus object. * "app.kubernetes.io/instance" label, set to the name of the Prometheus object. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "prometheus". * "app.kubernetes.io/version" label, set to the Prometheus version. 
* "operator.prometheus.io/name" label, set to the name of the Prometheus object. * "operator.prometheus.io/shard" label, set to the shard number of the Prometheus object. * "kubectl.kubernetes.io/default-container" annotation, set to "prometheus". Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 8.1.228. .spec.podMonitorNamespaceSelector Description Namespaces to match for PodMonitors discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.229. .spec.podMonitorNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.230. .spec.podMonitorNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.231. .spec.podMonitorSelector Description Experimental PodMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. 
It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.232. .spec.podMonitorSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.233. .spec.podMonitorSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.234. .spec.probeNamespaceSelector Description Experimental Namespaces to match for Probe discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.235. .spec.probeNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.236. .spec.probeNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.237. .spec.probeSelector Description Experimental Probes to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. 
The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.238. .spec.probeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.239. .spec.probeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.240. .spec.prometheusRulesExcludedFromEnforce Description Defines the list of PrometheusRule objects to which the namespace label enforcement doesn't apply. This is only relevant when spec.enforcedNamespaceLabel is set to true. Deprecated: use spec.excludedFromEnforcement instead. Type array 8.1.241. .spec.prometheusRulesExcludedFromEnforce[] Description PrometheusRuleExcludeConfig enables users to configure excluded PrometheusRule names and their namespaces to be ignored while enforcing namespace label for alerts and metrics. Type object Required ruleName ruleNamespace Property Type Description ruleName string Name of the excluded PrometheusRule object. ruleNamespace string Namespace of the excluded PrometheusRule object. 8.1.242. .spec.query Description QuerySpec defines the configuration of the Prometheus query service. Type object Property Type Description lookbackDelta string The delta difference allowed for retrieving metrics during expression evaluations. maxConcurrency integer Number of concurrent queries that can be run at once. maxSamples integer Maximum number of samples a single query can load into memory. Note that queries will fail if they would load more samples than this into memory, so this also limits the number of samples a query can return. timeout string Maximum time a query may take before being aborted. 8.1.243. .spec.remoteRead Description Defines the list of remote read configurations. Type array 8.1.244. .spec.remoteRead[] Description RemoteReadSpec defines the configuration for Prometheus to read back samples from a remote endpoint. Type object Required url Property Type Description authorization object Authorization section for the URL. It requires Prometheus >= v2.26.0.
Cannot be set at the same time as basicAuth , or oauth2 . basicAuth object BasicAuth configuration for the URL. Cannot be set at the same time as authorization , or oauth2 . bearerToken string Warning: this field shouldn't be used because the token value appears in clear-text. Prefer using authorization . Deprecated: this will be removed in a future release. bearerTokenFile string File from which to read the bearer token for the URL. Deprecated: this will be removed in a future release. Prefer using authorization . filterExternalLabels boolean Whether to use the external labels as selectors for the remote read endpoint. It requires Prometheus >= v2.34.0. followRedirects boolean Configure whether HTTP requests follow HTTP 3xx redirects. It requires Prometheus >= v2.26.0. headers object (string) Custom HTTP headers to be sent along with each remote read request. Be aware that headers that are set by Prometheus itself can't be overwritten. Only valid in Prometheus versions 2.26.0 and newer. name string The name of the remote read queue, it must be unique if specified. The name is used in metrics and logging in order to differentiate read configurations. It requires Prometheus >= v2.15.0. oauth2 object OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as authorization , or basicAuth . proxyUrl string Optional ProxyURL. readRecent boolean Whether reads should be made for queries for time ranges that the local storage should have complete data for. remoteTimeout string Timeout for requests to the remote read endpoint. requiredMatchers object (string) An optional list of equality matchers which have to be present in a selector to query the remote read endpoint. tlsConfig object TLS Config to use for the URL. url string The URL of the endpoint to query from. 8.1.245. .spec.remoteRead[].authorization Description Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as basicAuth , or oauth2 . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. credentialsFile string File to read a secret from, mutually exclusive with credentials . type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.246. .spec.remoteRead[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.247. .spec.remoteRead[].basicAuth Description BasicAuth configuration for the URL. Cannot be set at the same time as authorization , or oauth2 . Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 8.1.248. .spec.remoteRead[].basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. 
Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.249. .spec.remoteRead[].basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.250. .spec.remoteRead[].oauth2 Description OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as authorization , or basicAuth . Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 8.1.251. .spec.remoteRead[].oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.252. .spec.remoteRead[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.253. .spec.remoteRead[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.254. .spec.remoteRead[].oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.255. .spec.remoteRead[].tlsConfig Description TLS Config to use for the URL. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. 
cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.256. .spec.remoteRead[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.257. .spec.remoteRead[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.258. .spec.remoteRead[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.259. .spec.remoteRead[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.260. .spec.remoteRead[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.261. .spec.remoteRead[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.262. .spec.remoteRead[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.263. .spec.remoteWrite Description Defines the list of remote write configurations. Type array 8.1.264. 
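As a point of reference, the remoteRead fields described above combine as in the following minimal sketch of the spec. The endpoint URL and the Secret name and keys are illustrative assumptions, not values defined by this API:
spec:
  remoteRead:
  - url: https://remote.example.com/api/v1/read   # assumed remote read endpoint
    readRecent: false                             # do not query ranges the local storage already covers
    remoteTimeout: 30s
    basicAuth:
      username:
        name: remote-read-credentials             # assumed Secret in the same namespace
        key: username
      password:
        name: remote-read-credentials
        key: password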
.spec.remoteWrite[] Description RemoteWriteSpec defines the configuration to write samples from Prometheus to a remote endpoint. Type object Required url Property Type Description authorization object Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as sigv4 , basicAuth , oauth2 , or azureAd . azureAd object AzureAD for the URL. It requires Prometheus >= v2.45.0. Cannot be set at the same time as authorization , basicAuth , oauth2 , or sigv4 . basicAuth object BasicAuth configuration for the URL. Cannot be set at the same time as sigv4 , authorization , oauth2 , or azureAd . bearerToken string Warning: this field shouldn't be used because the token value appears in clear-text. Prefer using authorization . Deprecated: this will be removed in a future release. bearerTokenFile string File from which to read the bearer token for the URL. Deprecated: this will be removed in a future release. Prefer using authorization . headers object (string) Custom HTTP headers to be sent along with each remote write request. Be aware that headers that are set by Prometheus itself can't be overwritten. It requires Prometheus >= v2.25.0. metadataConfig object MetadataConfig configures the sending of series metadata to the remote storage. name string The name of the remote write queue, it must be unique if specified. The name is used in metrics and logging in order to differentiate queues. It requires Prometheus >= v2.15.0. oauth2 object OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as sigv4 , authorization , basicAuth , or azureAd . proxyUrl string Optional ProxyURL. queueConfig object QueueConfig allows tuning of the remote write queue parameters. remoteTimeout string Timeout for requests to the remote write endpoint. sendExemplars boolean Enables sending of exemplars over remote write. Note that exemplar-storage itself must be enabled using the spec.enableFeature option for exemplars to be scraped in the first place. It requires Prometheus >= v2.27.0. sendNativeHistograms boolean Enables sending of native histograms, also known as sparse histograms, over remote write. It requires Prometheus >= v2.40.0. sigv4 object Sigv4 allows configuring AWS's Signature Verification 4 for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as authorization , basicAuth , oauth2 , or azureAd . tlsConfig object TLS Config to use for the URL. url string The URL of the endpoint to send samples to. writeRelabelConfigs array The list of remote write relabel configurations. writeRelabelConfigs[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config 8.1.265. .spec.remoteWrite[].authorization Description Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as sigv4 , basicAuth , oauth2 , or azureAd . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. credentialsFile string File to read a secret from, mutually exclusive with credentials . type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.266.
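A minimal sketch of a single remoteWrite entry using the authorization section above; the endpoint URL, Secret name, and key are illustrative assumptions:
spec:
  remoteWrite:
  - url: https://remote.example.com/api/v1/write   # assumed remote write endpoint
    name: example-queue                            # optional queue name (Prometheus >= v2.15.0)
    authorization:
      type: Bearer                                 # the default type
      credentials:
        name: remote-write-token                   # assumed Secret in the same namespace
        key: token                                 # assumed key within that Secret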
.spec.remoteWrite[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.267. .spec.remoteWrite[].azureAd Description AzureAD for the URL. It requires Prometheus >= v2.45.0. Cannot be set at the same time as authorization , basicAuth , oauth2 , or sigv4 . Type object Property Type Description cloud string The Azure Cloud. Options are 'AzurePublic', 'AzureChina', or 'AzureGovernment'. managedIdentity object ManagedIdentity defines the Azure User-assigned Managed identity. Cannot be set at the same time as oauth . oauth object OAuth defines the oauth config that is being used to authenticate. Cannot be set at the same time as managedIdentity . It requires Prometheus >= v2.48.0. 8.1.268. .spec.remoteWrite[].azureAd.managedIdentity Description ManagedIdentity defines the Azure User-assigned Managed identity. Cannot be set at the same time as oauth . Type object Required clientId Property Type Description clientId string The client id 8.1.269. .spec.remoteWrite[].azureAd.oauth Description OAuth defines the oauth config that is being used to authenticate. Cannot be set at the same time as managedIdentity . It requires Prometheus >= v2.48.0. Type object Required clientId clientSecret tenantId Property Type Description clientId string clientID is the clientId of the Azure Active Directory application that is being used to authenticate. clientSecret object clientSecret specifies a key of a Secret containing the client secret of the Azure Active Directory application that is being used to authenticate. tenantId string tenantID is the tenant ID of the Azure Active Directory application that is being used to authenticate. 8.1.270. .spec.remoteWrite[].azureAd.oauth.clientSecret Description clientSecret specifies a key of a Secret containing the client secret of the Azure Active Directory application that is being used to authenticate. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.271. .spec.remoteWrite[].basicAuth Description BasicAuth configuration for the URL. Cannot be set at the same time as sigv4 , authorization , oauth2 , or azureAd . Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 8.1.272. .spec.remoteWrite[].basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
optional boolean Specify whether the Secret or its key must be defined 8.1.273. .spec.remoteWrite[].basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.274. .spec.remoteWrite[].metadataConfig Description MetadataConfig configures the sending of series metadata to the remote storage. Type object Property Type Description send boolean Defines whether metric metadata is sent to the remote storage or not. sendInterval string Defines how frequently metric metadata is sent to the remote storage. 8.1.275. .spec.remoteWrite[].oauth2 Description OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as sigv4 , authorization , basicAuth , or azureAd . Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 8.1.276. .spec.remoteWrite[].oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.277. .spec.remoteWrite[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.278. .spec.remoteWrite[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.279. .spec.remoteWrite[].oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.280. .spec.remoteWrite[].queueConfig Description QueueConfig allows tuning of the remote write queue parameters. 
Type object Property Type Description batchSendDeadline string BatchSendDeadline is the maximum time a sample will wait in the buffer. capacity integer Capacity is the number of samples to buffer per shard before we start dropping them. maxBackoff string MaxBackoff is the maximum retry delay. maxRetries integer MaxRetries is the maximum number of times to retry a batch on recoverable errors. maxSamplesPerSend integer MaxSamplesPerSend is the maximum number of samples per send. maxShards integer MaxShards is the maximum number of shards, i.e. amount of concurrency. minBackoff string MinBackoff is the initial retry delay. Gets doubled for every retry. minShards integer MinShards is the minimum number of shards, i.e. amount of concurrency. retryOnRateLimit boolean Retry upon receiving a 429 status code from the remote-write storage. This is an experimental feature and might change in the future. 8.1.281. .spec.remoteWrite[].sigv4 Description Sigv4 allows configuring AWS's Signature Verification 4 for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as authorization , basicAuth , oauth2 , or azureAd . Type object Property Type Description accessKey object AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. profile string Profile is the named AWS profile used to authenticate. region string Region is the AWS region. If blank, the region from the default credentials chain is used. roleArn string RoleArn is the AWS Role ARN to use for authentication, as an alternative to using AWS API keys. secretKey object SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. 8.1.282. .spec.remoteWrite[].sigv4.accessKey Description AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.283. .spec.remoteWrite[].sigv4.secretKey Description SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.284. .spec.remoteWrite[].tlsConfig Description TLS Config to use for the URL. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.285.
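The queueConfig, sigv4 and tlsConfig sections above can be combined as in the following sketch; the endpoint URL, region, Secret names and tuning values are illustrative assumptions rather than recommended settings:
spec:
  remoteWrite:
  - url: https://example-endpoint.example-region.amazonaws.com/api/v1/remote_write   # assumed endpoint
    sigv4:
      region: us-east-1                 # assumed AWS region
      accessKey:
        name: aws-credentials           # assumed Secret holding the API key pair
        key: access-key
      secretKey:
        name: aws-credentials
        key: secret-key
    queueConfig:
      capacity: 10000                   # samples buffered per shard before dropping
      maxShards: 30
      maxSamplesPerSend: 2000
      minBackoff: 30ms
      maxBackoff: 5s
    tlsConfig:
      insecureSkipVerify: false         # keep server certificate validation enabled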
.spec.remoteWrite[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.286. .spec.remoteWrite[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.287. .spec.remoteWrite[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.288. .spec.remoteWrite[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.289. .spec.remoteWrite[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.290. .spec.remoteWrite[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.291. .spec.remoteWrite[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.292. .spec.remoteWrite[].writeRelabelConfigs Description The list of remote write relabel configurations. Type array 8.1.293. .spec.remoteWrite[].writeRelabelConfigs[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. 
Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 8.1.294. .spec.resources Description Defines the resources requests and limits of the 'prometheus' container. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.295. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.296. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.297. .spec.ruleNamespaceSelector Description Namespaces to match for PrometheusRule discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.298. .spec.ruleNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.299. 
.spec.ruleNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.300. .spec.ruleSelector Description PrometheusRule objects to be selected for rule evaluation. An empty label selector matches all objects. A null label selector matches no objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.301. .spec.ruleSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.302. .spec.ruleSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.303. .spec.rules Description Defines the configuration of the Prometheus rules' engine. Type object Property Type Description alert object Defines the parameters of the Prometheus rules' engine. Any update to these parameters triggers a restart of the pods. 8.1.304. .spec.rules.alert Description Defines the parameters of the Prometheus rules' engine. Any update to these parameters triggers a restart of the pods. Type object Property Type Description forGracePeriod string Minimum duration between alert and restored 'for' state. This is maintained only for alerts with a configured 'for' time greater than the grace period. forOutageTolerance string Max time to tolerate prometheus outage for restoring 'for' state of alert. resendDelay string Minimum amount of time to wait before resending an alert to Alertmanager. 8.1.305. .spec.scrapeConfigNamespaceSelector Description Namespaces to match for ScrapeConfig discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed.
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.306. .spec.scrapeConfigNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.307. .spec.scrapeConfigNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.308. .spec.scrapeConfigSelector Description Experimental ScrapeConfigs to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.309. .spec.scrapeConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.310. .spec.scrapeConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.311. 
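The ruleSelector, ruleNamespaceSelector, scrapeConfigSelector and scrapeConfigNamespaceSelector fields above all use the standard Kubernetes label selector shape. A minimal sketch using matchLabels, with illustrative label names and values:
spec:
  ruleSelector:
    matchLabels:
      role: alert-rules        # assumed label on PrometheusRule objects
  ruleNamespaceSelector:
    matchLabels:
      monitoring: enabled      # assumed label on the namespaces to search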
.spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. 
Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 8.1.312. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 8.1.313. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 8.1.314. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 8.1.315. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 8.1.316. .spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. 
All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 8.1.317. .spec.serviceMonitorNamespaceSelector Description Namespaces to match for ServiceMonitors discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.318. .spec.serviceMonitorNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.319. .spec.serviceMonitorNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.320. .spec.serviceMonitorSelector Description ServiceMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.321.
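The serviceMonitorNamespaceSelector and serviceMonitorSelector fields above accept matchExpressions as well as matchLabels. A sketch using matchExpressions, with illustrative label keys and values:
spec:
  serviceMonitorSelector:
    matchExpressions:
    - key: team                # assumed label key on ServiceMonitor objects
      operator: In
      values:
      - frontend
      - backend
  serviceMonitorNamespaceSelector:
    matchExpressions:
    - key: monitoring          # assumed label key on namespaces
      operator: Exists         # values must be empty when the operator is Exists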
.spec.serviceMonitorSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.322. .spec.serviceMonitorSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.323. .spec.storage Description Storage defines the storage used by Prometheus. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be removed in a future release. emptyDir object EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. 8.1.324. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 8.1.325. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod.
The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 8.1.326. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 8.1.327. .spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 8.1.328. .spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source.
When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 8.1.329. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. 
If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 8.1.330. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 8.1.331. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.332. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.333. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.334. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.335. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.336. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.337. .spec.storage.volumeClaimTemplate Description Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Deprecated: this field is never set. 8.1.338. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 8.1.339. .spec.storage.volumeClaimTemplate.spec Description Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. 
When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 8.1.340. .spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 8.1.341. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. 
When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 8.1.342. .spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.343. .spec.storage.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.344. .spec.storage.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. 
Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.345. .spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.346. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.347. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.348. .spec.storage.volumeClaimTemplate.status Description Deprecated: this field is never set. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. 
For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources integer-or-string allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc phase string phase represents the current phase of PersistentVolumeClaim. 8.1.349. .spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array 8.1.350. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. 
reason string reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType is a valid value of PersistentVolumeClaimCondition.Type 8.1.351. .spec.thanos Description Defines the configuration of the optional Thanos sidecar. This section is experimental, it may change significantly without deprecation notice in any release. Type object Property Type Description additionalArgs array AdditionalArgs allows setting additional arguments for the Thanos container. The arguments are passed as-is to the Thanos container, which may cause issues if they are invalid or not supported by the given Thanos version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. additionalArgs[] object Argument as part of the AdditionalArgs list. baseImage string Deprecated: use 'image' instead. blockSize string blockSize controls the size of TSDB blocks produced by Prometheus. The default value is 2h to match the upstream Prometheus defaults. WARNING: Changing the block duration can impact the performance and efficiency of the entire Prometheus/Thanos stack due to how it interacts with memory and Thanos compactors. It is recommended to keep this value set to a multiple of 120 times your longest scrape or rule interval. For example, 30s * 120 = 1h. getConfigInterval string How often to retrieve the Prometheus configuration. getConfigTimeout string Maximum time to wait when retrieving the Prometheus configuration. grpcListenLocal boolean When true, the Thanos sidecar listens on the loopback interface instead of the Pod IP's address for the gRPC endpoints. It has no effect if listenLocal is true. grpcServerTlsConfig object Configures the TLS parameters for the gRPC server providing the StoreAPI. Note: Currently only the caFile, certFile, and keyFile fields are supported. httpListenLocal boolean When true, the Thanos sidecar listens on the loopback interface instead of the Pod IP's address for the HTTP endpoints. It has no effect if listenLocal is true. image string Container image name for Thanos. If specified, it takes precedence over the spec.thanos.baseImage, spec.thanos.tag and spec.thanos.sha fields. Specifying spec.thanos.version is still necessary to ensure the Prometheus Operator knows which version of Thanos is being configured. If neither spec.thanos.image nor spec.thanos.baseImage are defined, the operator will use the latest upstream version of Thanos available at the time when the operator was released. listenLocal boolean Deprecated: use grpcListenLocal and httpListenLocal instead. logFormat string Log format for the Thanos sidecar. logLevel string Log level for the Thanos sidecar. minTime string Defines the start of time range limit served by the Thanos sidecar's StoreAPI. The field's value should be a constant time in RFC3339 format or a time duration relative to current time, such as -1d or 2h45m. Valid duration units are ms, s, m, h, d, w, y. objectStorageConfig object Defines the Thanos sidecar's configuration to upload TSDB blocks to object storage. More info: https://thanos.io/tip/thanos/storage.md/ objectStorageConfigFile takes precedence over this field.
objectStorageConfigFile string Defines the Thanos sidecar's configuration file to upload TSDB blocks to object storage. More info: https://thanos.io/tip/thanos/storage.md/ This field takes precedence over objectStorageConfig. readyTimeout string ReadyTimeout is the maximum time that the Thanos sidecar will wait for Prometheus to start. resources object Defines the resources requests and limits of the Thanos sidecar. sha string Deprecated: use 'image' instead. The image digest can be specified as part of the image name. tag string Deprecated: use 'image' instead. The image's tag can be specified as part of the image name. tracingConfig object Defines the tracing configuration for the Thanos sidecar. More info: https://thanos.io/tip/thanos/tracing.md/ This is an experimental feature, it may change in any upcoming release in a breaking way. tracingConfigFile takes precedence over this field. tracingConfigFile string Defines the tracing configuration file for the Thanos sidecar. More info: https://thanos.io/tip/thanos/tracing.md/ This is an experimental feature, it may change in any upcoming release in a breaking way. This field takes precedence over tracingConfig. version string Version of Thanos being deployed. The operator uses this information to generate the Prometheus StatefulSet + configuration files. If not specified, the operator assumes the latest upstream release of Thanos available at the time when the version of the operator was released. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts for Thanos. VolumeMounts specified will be appended to other VolumeMounts in the 'thanos-sidecar' container. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. 8.1.352. .spec.thanos.additionalArgs Description AdditionalArgs allows setting additional arguments for the Thanos container. The arguments are passed as-is to the Thanos container which may cause issues if they are invalid or not supported the given Thanos version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. Type array 8.1.353. .spec.thanos.additionalArgs[] Description Argument as part of the AdditionalArgs list. Type object Required name Property Type Description name string Name of the argument, e.g. "scrape.discovery-reload-interval". value string Argument value, e.g. 30s. Can be empty for name-only arguments (e.g. --storage.tsdb.no-lockfile) 8.1.354. .spec.thanos.grpcServerTlsConfig Description Configures the TLS parameters for the gRPC server providing the StoreAPI. Note: Currently only the caFile , certFile , and keyFile fields are supported. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.355. .spec.thanos.grpcServerTlsConfig.ca Description Certificate authority used when verifying server certificates. 
Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.356. .spec.thanos.grpcServerTlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.357. .spec.thanos.grpcServerTlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.358. .spec.thanos.grpcServerTlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.359. .spec.thanos.grpcServerTlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.360. .spec.thanos.grpcServerTlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.361. .spec.thanos.grpcServerTlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.362. .spec.thanos.objectStorageConfig Description Defines the Thanos sidecar's configuration to upload TSDB blocks to object storage. More info: https://thanos.io/tip/thanos/storage.md/ objectStorageConfigFile takes precedence over this field. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.363. 
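To make the Thanos sidecar fields above more concrete, the following is a minimal sketch of the spec.thanos block of a Prometheus resource. It uses only fields documented in this reference; the Thanos version, the Secret named thanos-objstore with key thanos.yaml, and the certificate paths are hypothetical placeholder values, and only the caFile, certFile, and keyFile fields of grpcServerTlsConfig are honoured, as noted above.

  spec:
    thanos:
      # Prefer 'image'/'version' over the deprecated baseImage, tag and sha fields.
      version: 0.30.2          # placeholder version
      logLevel: info
      # Upload TSDB blocks to object storage; the Secret name and key are placeholders.
      objectStorageConfig:
        name: thanos-objstore
        key: thanos.yaml
        optional: false
      # TLS for the gRPC StoreAPI server; only the file-based fields are supported.
      grpcServerTlsConfig:
        certFile: /etc/prometheus/secrets/thanos-grpc-tls/tls.crt
        keyFile: /etc/prometheus/secrets/thanos-grpc-tls/tls.key
        caFile: /etc/prometheus/secrets/thanos-grpc-tls/ca.crt

The same sketch could instead point objectStorageConfigFile at a mounted configuration file, which takes precedence over objectStorageConfig.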
.spec.thanos.resources Description Defines the resources requests and limits of the Thanos sidecar. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.364. .spec.thanos.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.365. .spec.thanos.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.366. .spec.thanos.tracingConfig Description Defines the tracing configuration for the Thanos sidecar. More info: https://thanos.io/tip/thanos/tracing.md/ This is an experimental feature, it may change in any upcoming release in a breaking way. tracingConfigFile takes precedence over this field. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.367. .spec.thanos.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts for Thanos. VolumeMounts specified will be appended to other VolumeMounts in the 'thanos-sidecar' container. Type array 8.1.368. .spec.thanos.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. 
Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.369. .spec.tolerations Description Defines the Pods' tolerations if specified. Type array 8.1.370. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 8.1.371. .spec.topologySpreadConstraints Description Defines the pod's topology spread constraints if specified. Type array 8.1.372. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. 
| zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. 
- DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 8.1.373. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.374. .spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.375. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.376. .spec.tracingConfig Description EXPERIMENTAL: TracingConfig configures tracing in Prometheus. This is an experimental feature, it may change in any upcoming release in a breaking way. Type object Required endpoint Property Type Description clientType string Client used to export the traces. Supported values are http or grpc . compression string Compression key for supported compression types. The only supported value is gzip . endpoint string Endpoint to send the traces to. Should be provided in format <host>:<port>. headers object (string) Key-value pairs to be used as headers associated with gRPC or HTTP requests. insecure boolean If disabled, the client will use a secure connection. samplingFraction integer-or-string Sets the probability a given trace will be sampled. Must be a float from 0 through 1. timeout string Maximum time the exporter will wait for each batch export. tlsConfig object TLS Config to use when sending traces. 8.1.377. 
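As a combined illustration of the tolerations, topologySpreadConstraints, and tracingConfig fields described above, here is a hedged sketch; the taint key, pod labels, collector endpoint, and header are placeholder values, and tracing remains an experimental field as stated in this reference.

  spec:
    # Tolerate a hypothetical dedicated-monitoring taint for one hour after eviction starts.
    tolerations:
    - key: monitoring
      operator: Equal
      value: dedicated
      effect: NoExecute
      tolerationSeconds: 3600
    # Spread pods across zones; the label selector is a placeholder.
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: prometheus
    # EXPERIMENTAL: send traces to a hypothetical OpenTelemetry collector over gRPC.
    tracingConfig:
      endpoint: otel-collector.observability.svc:4317
      clientType: grpc
      insecure: false
      samplingFraction: "0.1"
      headers:
        x-tenant: example

Because whenUnsatisfiable is set to ScheduleAnyway here, an uneven spread only lowers scheduling priority for the pod instead of blocking it.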
.spec.tracingConfig.tlsConfig Description TLS Config to use when sending traces. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 8.1.378. .spec.tracingConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.379. .spec.tracingConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.380. .spec.tracingConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.381. .spec.tracingConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.382. .spec.tracingConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.383. .spec.tracingConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.384. .spec.tracingConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.385. .spec.tsdb Description Defines the runtime reloadable configuration of the timeseries database (TSDB). Type object Property Type Description outOfOrderTimeWindow string Configures how old an out-of-order/out-of-bounds sample can be with respect to the TSDB max time. An out-of-order/out-of-bounds sample is ingested into the TSDB as long as the timestamp of the sample is >= (TSDB.MaxTime - outOfOrderTimeWindow). Out of order ingestion is an experimental feature. It requires Prometheus >= v2.39.0. 8.1.386. .spec.volumeMounts Description VolumeMounts allows the configuration of additional VolumeMounts. VolumeMounts will be appended to other VolumeMounts in the 'prometheus' container, that are generated as a result of StorageSpec objects. Type array 8.1.387. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.388. .spec.volumes Description Volumes allows the configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. Type array 8.1.389. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). 
downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 8.1.390. .spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 8.1.391. .spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 8.1.392. 
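Bringing the volumes and volumeMounts fields described earlier together, the sketch below adds a ConfigMap-backed volume to the generated StatefulSet and mounts it into the 'prometheus' container; the ConfigMap name, key, and mount path are hypothetical.

  spec:
    volumes:
    - name: extra-config
      configMap:
        name: prometheus-extra-config   # placeholder ConfigMap
        defaultMode: 420                # decimal equivalent of octal 0644
        items:
        - key: extra.yaml
          path: extra.yaml
    volumeMounts:
    - name: extra-config
      mountPath: /etc/prometheus/extra  # placeholder path
      readOnly: true

As described above, volumes declared here are appended to the volumes generated as a result of StorageSpec objects, and the volume mount is appended to the other mounts of the 'prometheus' container.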
.spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 8.1.393. .spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 8.1.394. .spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.395. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 8.1.396. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.397. 
.spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 8.1.398. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.399. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.400. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. 
nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 8.1.401. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.402. .spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 8.1.403. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 8.1.404. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 8.1.405. .spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.
Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.406. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.407. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 8.1.408. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. 
If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 8.1.409. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 8.1.410. .spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 8.1.411. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. 
When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 8.1.412. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 8.1.413. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. 
This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 8.1.414. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.415. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.416. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 8.1.417. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.418. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.419. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.420. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 8.1.421. .spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. 
Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 8.1.422. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.423. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 8.1.424. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 8.1.425. .spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. 
To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 8.1.426. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 8.1.427. .spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 8.1.428. .spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. 
portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 8.1.429. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.430. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 8.1.431. .spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 8.1.432. .spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 8.1.433. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 8.1.434. .spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. 
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 8.1.435. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 8.1.436. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 8.1.437. .spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 8.1.438. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.439. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.
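A projected volume combines the sources described above into a single directory. The following excerpt of a Prometheus .spec is an illustrative sketch only; the volume name, ConfigMap name, key, and paths are placeholders. As noted above, YAML accepts octal mode values (0440 is 288 in decimal, which is what a JSON client would have to send).

spec:
  volumes:
  - name: projected-config
    projected:
      defaultMode: 0440          # octal in YAML; JSON requires the decimal value 288
      sources:
      - configMap:
          name: prometheus-extra-config    # placeholder ConfigMap name
          optional: true
          items:
          - key: extra.yaml                # key in the ConfigMap Data field
            path: config/extra.yaml        # relative path inside the mounted volume
            mode: 0444

8.1.440.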
.spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 8.1.441. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 8.1.442. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 8.1.443. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.444. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.445. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 8.1.446. 
.spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.447. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.448. .spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 8.1.449. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to. Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to. Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 8.1.450. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime.
More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 8.1.451. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.452. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 8.1.453. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. 
If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.454. .spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 8.1.455. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.456. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.457. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 8.1.458. .spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 8.1.459. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk. 8.1.460. .spec.web Description Defines the configuration of the Prometheus web server. Type object Property Type Description httpConfig object Defines HTTP parameters for web server. maxConnections integer Defines the maximum number of simultaneous connections. A zero value means that Prometheus doesn't accept any incoming connection. pageTitle string The Prometheus web page title. tlsConfig object Defines the TLS parameters for HTTPS. 8.1.461. .spec.web.httpConfig Description Defines HTTP parameters for web server. Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 8.1.462. .spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 8.1.463. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. Type object Required cert keySecret Property Type Description cert object Contains the TLS certificate for the server. cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 8.1.464. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.465. .spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.466. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.467. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.468. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 8.1.469. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.470. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 8.1.471. .status Description Most recent observed status of the Prometheus cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this Prometheus deployment. conditions array The current state of the Prometheus deployment. conditions[] object Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this Prometheus deployment (their labels match the selector). shardStatuses array The list has one entry per shard. Each entry provides a summary of the shard status. shardStatuses[] object unavailableReplicas integer Total number of unavailable pods targeted by this Prometheus deployment. updatedReplicas integer Total number of non-terminated pods targeted by this Prometheus deployment that have the desired version spec. 8.1.472. .status.conditions Description The current state of the Prometheus deployment. Type array 8.1.473. .status.conditions[] Description Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string Human-readable message indicating details for the condition's last transition. observedGeneration integer ObservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string Reason for the condition's last transition. status string Status of the condition. type string Type of the condition being reported. 8.1.474. 
.status.shardStatuses Description The list has one entry per shard. Each entry provides a summary of the shard status. Type array 8.1.475. .status.shardStatuses[] Description Type object Required availableReplicas replicas shardID unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this shard. replicas integer Total number of pods targeted by this shard. shardID string Identifier of the shard. unavailableReplicas integer Total number of unavailable pods targeted by this shard. updatedReplicas integer Total number of non-terminated pods targeted by this shard that have the desired spec. 8.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/prometheuses GET : list objects of kind Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses DELETE : delete collection of Prometheus GET : list objects of kind Prometheus POST : create Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name} DELETE : delete Prometheus GET : read the specified Prometheus PATCH : partially update the specified Prometheus PUT : replace the specified Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name}/status GET : read status of the specified Prometheus PATCH : partially update status of the specified Prometheus PUT : replace status of the specified Prometheus 8.2.1. /apis/monitoring.coreos.com/v1/prometheuses HTTP method GET Description list objects of kind Prometheus Table 8.1. HTTP responses HTTP code Response body 200 - OK PrometheusList schema 401 - Unauthorized Empty 8.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses HTTP method DELETE Description delete collection of Prometheus Table 8.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Prometheus Table 8.3. HTTP responses HTTP code Response body 200 - OK PrometheusList schema 401 - Unauthorized Empty HTTP method POST Description create Prometheus Table 8.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.5. Body parameters Parameter Type Description body Prometheus schema Table 8.6.
HTTP responses HTTP code Response body 200 - OK Prometheus schema 201 - Created Prometheus schema 202 - Accepted Prometheus schema 401 - Unauthorized Empty 8.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name} Table 8.7. Global path parameters Parameter Type Description name string name of the Prometheus HTTP method DELETE Description delete Prometheus Table 8.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Prometheus Table 8.10. HTTP responses HTTP code Response body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Prometheus Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.12. HTTP responses HTTP code Response body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Prometheus Table 8.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.14. Body parameters Parameter Type Description body Prometheus schema Table 8.15. HTTP responses HTTP code Response body 200 - OK Prometheus schema 201 - Created Prometheus schema 401 - Unauthorized Empty 8.2.4. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name}/status Table 8.16. Global path parameters Parameter Type Description name string name of the Prometheus HTTP method GET Description read status of the specified Prometheus Table 8.17. HTTP responses HTTP code Response body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Prometheus Table 8.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.19. HTTP responses HTTP code Response body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Prometheus Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.21. Body parameters Parameter Type Description body Prometheus schema Table 8.22. HTTP responses HTTP code Response body 200 - OK Prometheus schema 201 - Created Prometheus schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/monitoring_apis/prometheus-monitoring-coreos-com-v1
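For orientation, the spec.web.tlsConfig fields documented above can be combined as in the following minimal sketch of a Prometheus resource. The Secret name web-tls and the keys tls.crt and tls.key are illustrative assumptions rather than values taken from this reference; adjust them to the Secret you actually create.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  web:
    tlsConfig:
      cert:
        secret:
          name: web-tls     # assumed Secret holding the serving certificate
          key: tls.crt
      keySecret:
        name: web-tls       # assumed Secret holding the private key
        key: tls.key
      minVersion: TLS12     # the documented default, shown explicitly
      maxVersion: TLS13     # the documented default, shown explicitly

The cert block could equally point at a ConfigMap through cert.configMap, and client certificate authentication would add a client_ca reference together with a clientAuthType value, as described in the schema above.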
B.2. iSCSI Disks During Start Up | B.2. iSCSI Disks During Start Up Events related to iSCSI might occur at a number of points while the system is starting: The init script in the initrd will log into iSCSI targets used for / , if any. This is done using the iscsistart utility, without requiring iscsid to run. Note If the root file system is on an iSCSI disk connected using IPv6, ensure that the installed system is using the correct ip= boot option, for example ip=eth0:auto6 . If this option is not set, the installed system can spend up to 20 minutes at boot time attempting to establish a connection, before eventually succeeding. Using the correct ip= option eliminates this delay. When the root file system has been mounted and the various service init scripts are running, the iscsi init script will get called. This script then starts the iscsid daemon if any iSCSI targets are used for / , or if any targets in the iSCSI database are marked to be logged into automatically. After the classic network service script has been run, the iscsi init script will run. If the network is accessible, this will log into any targets in the iSCSI database that are marked to be logged into automatically. If the network is not accessible, this script will exit quietly. When using NetworkManager to access the network, instead of the classic network service script, NetworkManager will call the iscsi init script. Also see the /etc/NetworkManager/dispatcher.d/04-iscsi file for further reference. Important Because NetworkManager is installed in the /usr directory, you cannot use it to configure network access if /usr is on network-attached storage such as an iSCSI target. If iscsid is not needed as the system starts, it will not start automatically. If you start iscsiadm , iscsiadm will start iscsid in turn. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-iscsi-disks-startup |
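As an illustration only (this command does not appear in the original appendix), one way to persist the ip= boot option mentioned above is to append it to the kernel command line with grubby; the interface name eth0 is an assumption and must match the interface that carries your iSCSI traffic.

# Append the IPv6 autoconfiguration option to every installed kernel entry (assumed interface: eth0)
grubby --update-kernel=ALL --args="ip=eth0:auto6"
# Confirm that the argument is now part of the default kernel's command line
grubby --info=DEFAULT | grep args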
RPM Packaging Guide | RPM Packaging Guide Red Hat Enterprise Linux 7 Basic and advanced software packaging scenarios using the RPM package manager Customer Content Services [email protected] Marie Dolezelova Red Hat Customer Content Services [email protected] Maxim Svistunov Red Hat Customer Content Services Adam Miller Red Hat Adam Kvitek Red Hat Customer Content Services Petr Kovar Red Hat Customer Content Services Miroslav Suchy Red Hat | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/rpm_packaging_guide/index |
Chapter 19. Scaling overcloud nodes | Chapter 19. Scaling overcloud nodes Warning The content for this feature is available in this release as a Documentation Preview , and therefore is not fully verified by Red Hat. Use it only for testing, and do not use in a production environment. If you want to add or remove nodes after the creation of the overcloud, you must update the overcloud. Warning Do not use openstack server delete to remove nodes from the overcloud. Follow the procedures in this section to remove and replace nodes correctly. Note Ensure that your bare metal nodes are not in maintenance mode before you begin scaling out or removing an overcloud node. Use the following table to determine support for scaling each node type: Table 19.1. Scale support for each node type Node type Scale up? Scale down? Notes Controller N N You can replace Controller nodes using the procedures in Chapter 20, Replacing Controller nodes . Compute Y Y Ceph Storage nodes Y N You must have at least 1 Ceph Storage node from the initial overcloud creation. Object Storage nodes Y Y Important Ensure that you have at least 10 GB free space before you scale the overcloud. This free space accommodates image conversion and caching during the node provisioning process. 19.1. Adding nodes to the overcloud You can add more nodes to your overcloud. Note A fresh installation of Red Hat OpenStack Platform does not include certain updates, such as security errata and bug fixes. As a result, if you are scaling up a connected environment that uses the Red Hat Customer Portal or Red Hat Satellite Server, RPM updates are not applied to new nodes. To apply the latest updates to the overcloud nodes, you must do one of the following: Complete an overcloud update of the nodes after the scale-out operation. Use the virt-customize tool to modify the packages to the base overcloud image before the scale-out operation. For more information, see the Red Hat Knowledgebase solution Modifying the Red Hat Linux OpenStack Platform Overcloud Image with virt-customize . Procedure Create a new JSON file called newnodes.json that contains details of the new node that you want to register: Register the new nodes: Launch the introspection process for each new node: Use the --provide option to reset all the specified nodes to an available state after introspection. Replace <node_1> , [node_2] , and all nodes up to [node_n] with the UUID of each node that you want to introspect. Configure the image properties for each new node: 19.2. Scaling up bare-metal nodes To increase the count of bare-metal nodes in an existing overcloud, increment the node count in the overcloud-baremetal-deploy.yaml file and redeploy the overcloud. Prerequisites The new bare-metal nodes are registered, introspected, and available for provisioning and deployment. For more information, see Registering nodes for the overcloud and Creating an inventory of the bare-metal node hardware . Procedure Source the stackrc undercloud credential file: Open the overcloud-baremetal-deploy.yaml node definition file that you use to provision your bare-metal nodes. Increment the count parameter for the roles that you want to scale up. For example, the following configuration increases the Object Storage node count to 4: Optional: Configure predictive node placement for the new nodes. For example, use the following configuration to provision a new Object Storage node on node03 : Optional: Define any other attributes that you want to assign to your new nodes. 
For more information about the properties you can use to configure node attributes in your node definition file, see Bare-metal node provisioning attributes . If you use the Object Storage service (swift) and the whole disk overcloud image, overcloud-hardened-uefi-full , configure the size of the /srv partition based on the size of your disk and your storage requirements for /var and /srv . For more information, see Configuring whole disk partitions for the Object Storage service . Provision the overcloud nodes: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud . Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml . Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active : Add the generated overcloud-baremetal-deployed.yaml file to the stack with your other environment files and deploy the overcloud: 19.3. Scaling down bare-metal nodes To scale down the number of bare-metal nodes in your overcloud, tag the nodes that you want to delete from the stack in the node definition file, redeploy the overcloud, and then delete the bare-metal node from the overcloud. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment. For more information, see Configuring a basic overcloud with pre-provisioned nodes . If you are replacing an Object Storage node, replicate data from the node you are removing to the new replacement node. Wait for a replication pass to finish on the new node. Check the replication pass progress in the /var/log/swift/swift.log file. When the pass finishes, the Object Storage service (swift) adds entries to the log similar to the following example: Procedure Source the stackrc undercloud credential file: Decrement the count parameter in the overcloud-baremetal-deploy.yaml file, for the roles that you want to scale down. Define the hostname and name of each node that you want to remove from the stack, if they are not already defined in the instances attribute for the role. Add the attribute provisioned: false to the node that you want to remove. For example, to remove the node overcloud-objectstorage-1 from the stack, include the following snippet in your overcloud-baremetal-deploy.yaml file: After you redeploy the overcloud, the nodes that you define with the provisioned: false attribute are no longer present in the stack. However, these nodes are still running in a provisioned state. Note To remove a node from the stack temporarily, deploy the overcloud with the attribute provisioned: false and then redeploy the overcloud with the attribute provisioned: true to return the node to the stack. Delete the node from the overcloud: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud . Note Do not include the nodes that you want to remove from the stack as command arguments in the openstack overcloud node delete command. 
Provision the overcloud nodes to generate an updated heat environment file for inclusion in the deployment command: Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml . Add the overcloud-baremetal-deployed.yaml file generated by the provisioning command to the stack with your other environment files, and deploy the overcloud: 19.4. Removing or replacing a Compute node In some situations you need to remove a Compute node from the overcloud. For example, you might need to replace a problematic Compute node. When you delete a Compute node the node's index is added by default to the denylist to prevent the index being reused during scale out operations. You can replace the removed Compute node after you have removed the node from your overcloud deployment. Prerequisites The Compute service is disabled on the nodes that you want to remove to prevent the nodes from scheduling new instances. To confirm that the Compute service is disabled, use the following command: If the Compute service is not disabled then disable it: Tip Use the --disable-reason option to add a short explanation on why the service is being disabled. This is useful if you intend to redeploy the Compute service. The workloads on the Compute nodes have been migrated to other Compute nodes. For more information, see Migrating virtual machine instances between Compute nodes . If Instance HA is enabled, choose one of the following options: If the Compute node is accessible, log in to the Compute node as the root user and perform a clean shutdown with the shutdown -h now command. If the Compute node is not accessible, log in to a Controller node as the root user, disable the STONITH device for the Compute node, and shut down the bare metal node: Procedure Source the undercloud configuration: Decrement the count parameter in the overcloud-baremetal-deploy.yaml file, for the roles that you want to scale down. Define the hostname and name of each node that you want to remove from the stack, if they are not already defined in the instances attribute for the role. Add the attribute provisioned: false to the node that you want to remove. For example, to remove the node overcloud-compute-1 from the stack, include the following snippet in your overcloud-baremetal-deploy.yaml file: After you redeploy the overcloud, the nodes that you define with the provisioned: false attribute are no longer present in the stack. However, these nodes are still running in a provisioned state. Note If you want to remove a node from the stack temporarily, you can deploy the overcloud with the attribute provisioned: false and then redeploy the overcloud with the attribute provisioned: true to return the node to the stack. Delete the node from the overcloud: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud . Note Do not include the nodes that you want to remove from the stack as command arguments in the openstack overcloud node delete command. Provision the overcloud nodes to generate an updated heat environment file for inclusion in the deployment command: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud . 
Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml . If Instance HA is enabled, perform the following actions: Clean up the Pacemaker resources for the node: Replace <scaled_down_node> with the name of the removed node. Delete the STONITH device for the node: If you are replacing the removed Compute node on your overcloud deployment, see Replacing a removed Compute node . 19.4.1. Removing a Compute node manually If the openstack overcloud node delete command failed due to an unreachable node, then you must manually complete the removal of the Compute node from the overcloud. Prerequisites Performing the Removing or replacing a Compute node procedure returned a status of UPDATE_FAILED . Procedure Use the openstack tripleo launch heat command to launch the ephemeral Heat process: The command exits after launching the Heat process, the Heat process continues to run in the background as a podman pod. Use the podman pod ps command to verify that the ephemeral-heat process is running: Use the export command to export the OS_CLOUD environment: Use the openstack stack list command to list the installed stacks: Identify the UUID of the node that you want to manually delete: Move the node that you want to delete to maintenance mode: Wait for the Compute service to synchronize its state with the Bare Metal service. This can take up to four minutes. Source the overcloud configuration: Delete the network agents for the node that you deleted: Replace <scaled_down_node> with the name of the node to remove. Confirm that the Compute service is disabled on the deleted node on the overcloud, to prevent the node from scheduling new instances: If the Compute service is not disabled then disable it: Tip Use the --disable-reason option to add a short explanation on why the service is being disabled. This is useful if you intend to redeploy the Compute service. Remove the deleted Compute service as a resource provider from the Placement service: Source the undercloud configuration: Delete the Compute node from the stack: Replace <overcloud> with the name or UUID of the overcloud stack. Replace <node> with the Compute service host name or UUID of the Compute node that you want to delete. Note If the node has already been powered off, this command returns a WARNING message: You can ignore this message. Wait for the overcloud node to delete. Check the status of the overcloud stack when the node deletion is complete: Table 19.2. Result Status Description UPDATE_COMPLETE The delete operation completed successfully. UPDATE_FAILED The delete operation failed. If the overcloud node fails to delete while in maintenance mode, then the problem might be with the hardware. If Instance HA is enabled, perform the following actions: Clean up the Pacemaker resources for the node: Delete the STONITH device for the node: If you are not replacing the removed Compute node on the overcloud, then decrease the ComputeCount parameter in the environment file that contains your node counts. This file is usually named overcloud-baremetal-deployed.yaml . For example, decrease the node count from four nodes to three nodes if you removed one node: Decreasing the node count ensures that director does not provision any new nodes when you run openstack overcloud deploy . If you are replacing the removed Compute node on your overcloud deployment, see Replacing a removed Compute node . 19.4.2. 
Replacing a removed Compute node To replace a removed Compute node on your overcloud deployment, you can register and inspect a new Compute node or re-add the removed Compute node. You must also configure your overcloud to provision the node. Procedure Optional: To reuse the index of the removed Compute node, configure the RemovalPoliciesMode and the RemovalPolicies parameters for the role to replace the denylist when a Compute node is removed: Replace the removed Compute node: To add a new Compute node, register, inspect, and tag the new node to prepare it for provisioning. For more information, see Configuring and deploying the overcloud . To re-add a Compute node that you removed manually, remove the node from maintenance mode: Rerun the openstack overcloud deploy command that you used to deploy the existing overcloud. Wait until the deployment process completes. Confirm that director has successfully registered the new Compute node: If you performed step 1 to set the RemovalPoliciesMode for the role to update , then you must reset the RemovalPoliciesMode for the role to the default value, append , to add the Compute node index to the current denylist when a Compute node is removed: Rerun the openstack overcloud deploy command that you used to deploy the existing overcloud. 19.5. Replacing Ceph Storage nodes You can use director to replace Ceph Storage nodes in a director-created cluster. For more information, see the Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director guide. 19.6. Using skip deploy identifier During a stack update operation, puppet reapplies all manifests by default. This can result in a time-consuming operation, which may not be required. To override the default operation, use the --skip-deploy-identifier option. Use this option if you do not want the deployment command to generate a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps only trigger if there is an actual change to the configuration. Use this option with caution and only if you are confident that you do not need to run the software configuration, such as scaling out certain roles. Note If there is a change to the puppet manifest or hieradata, puppet will reapply all manifests even when --skip-deploy-identifier is specified. 19.7. Blacklisting nodes You can exclude overcloud nodes from receiving an updated deployment. This is useful in scenarios where you want to scale new nodes and exclude existing nodes from receiving an updated set of parameters and resources from the core heat template collection. This means that the blacklisted nodes are isolated from the effects of the stack operation. Use the DeploymentServerBlacklist parameter in an environment file to create a blacklist. Setting the blacklist The DeploymentServerBlacklist parameter is a list of server names. Write a new environment file, or add the parameter value to an existing custom environment file and pass the file to the deployment command: Note The server names in the parameter value are the names according to OpenStack Orchestration (heat), not the actual server hostnames. Include this environment file with your openstack overcloud deploy command: Heat blacklists any servers in the list from receiving updated heat deployments. After the stack operation completes, any blacklisted servers remain unchanged. You can also power off or stop the os-collect-config agents during the operation. Warning Exercise caution when you blacklist nodes.
Only use a blacklist if you fully understand how to apply the requested change with a blacklist in effect. It is possible to create a hung stack or configure the overcloud incorrectly when you use the blacklist feature. For example, if cluster configuration changes apply to all members of a Pacemaker cluster, blacklisting a Pacemaker cluster member during this change can cause the cluster to fail. Do not use the blacklist during update or upgrade procedures. Those procedures have their own methods for isolating changes to particular servers. When you add servers to the blacklist, further changes to those nodes are not supported until you remove the server from the blacklist. This includes updates, upgrades, scale up, scale down, and node replacement. For example, when you blacklist existing Compute nodes while scaling out the overcloud with new Compute nodes, the blacklisted nodes miss the information added to /etc/hosts and /etc/ssh/ssh_known_hosts . This can cause live migration to fail, depending on the destination host. The Compute nodes are updated with the information added to /etc/hosts and /etc/ssh/ssh_known_hosts during the overcloud deployment where they are no longer blacklisted. Do not modify the /etc/hosts and /etc/ssh/ssh_known_hosts files manually. To modify the /etc/hosts and /etc/ssh/ssh_known_hosts files, run the overcloud deploy command as described in the Clearing the Blacklist section. Clearing the blacklist To clear the blacklist for subsequent stack operations, edit the DeploymentServerBlacklist to use an empty array: Warning Do not omit the DeploymentServerBlacklist parameter. If you omit the parameter, the overcloud deployment uses the previously saved value. | [
"{ \"nodes\":[ { \"mac\":[ \"dd:dd:dd:dd:dd:dd\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.168.24.207\" }, { \"mac\":[ \"ee:ee:ee:ee:ee:ee\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.168.24.208\" } ] }",
"source ~/stackrc (undercloud)USD openstack overcloud node import newnodes.json",
"(undercloud)USD openstack overcloud node introspect --provide <node_1> [node_2] [node_n]",
"(undercloud)USD openstack overcloud node configure <node>",
"source ~/stackrc",
"- name: Controller count: 3 - name: Compute count: 10 - name: ObjectStorage count: 4",
"- name: ObjectStorage count: 4 instances: - hostname: overcloud-objectstorage-0 name: node00 - hostname: overcloud-objectstorage-1 name: node01 - hostname: overcloud-objectstorage-2 name: node02 - hostname: overcloud-objectstorage-3 name: node03",
"(undercloud)USD openstack overcloud node provision --stack <stack> --output <deployment_file> /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD watch openstack baremetal node list",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/overcloud-baremetal-deployed.yaml --deployed-server --disable-validations",
"Mar 29 08:49:05 localhost object-server: Object replication complete. Mar 29 08:49:11 localhost container-server: Replication run OVER Mar 29 08:49:13 localhost account-server: Replication run OVER",
"source ~/stackrc",
"- name: ObjectStorage count: 3 instances: - hostname: overcloud-objectstorage-0 name: node00 - hostname: overcloud-objectstorage-1 name: node01 # Removed from cluster due to disk failure provisioned: false - hostname: overcloud-objectstorage-2 name: node02 - hostname: overcloud-objectstorage-3 name: node03",
"(undercloud)USD openstack overcloud node delete --stack <stack> --baremetal-deployment /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD openstack overcloud node provision --stack <stack> --output <deployment_file> /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD openstack overcloud deploy -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml --deployed-server --disable-validations",
"(overcloud)USD openstack compute service list",
"(overcloud)USD openstack compute service set <hostname> nova-compute --disable",
"pcs stonith disable <stonith_resource_name> [stack@undercloud ~]USD source stackrc [stack@undercloud ~]USD openstack baremetal node power off <UUID>",
"(overcloud)USD source ~/stackrc",
"- name: Compute count: 2 instances: - hostname: overcloud-compute-0 name: node00 - hostname: overcloud-compute-1 name: node01 # Removed from cluster due to disk failure provisioned: false - hostname: overcloud-compute-2 name: node02",
"(undercloud)USD openstack overcloud node delete --stack <stack> --baremetal-deployment /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD openstack overcloud node provision --stack <stack> --output <deployment_file> /home/stack/templates/overcloud-baremetal-deploy.yaml",
"sudo pcs resource delete <scaled_down_node> sudo cibadmin -o nodes --delete --xml-text '<node id=\"<scaled_down_node>\"/>' sudo cibadmin -o fencing-topology --delete --xml-text '<fencing-level target=\"<scaled_down_node>\"/>' sudo cibadmin -o status --delete --xml-text '<node_state id=\"<scaled_down_node>\"/>' sudo cibadmin -o status --delete-all --xml-text '<node id=\"<scaled_down_node>\"/>' --force",
"sudo pcs stonith delete <device-name>",
"(undercloud)USD openstack tripleo launch heat --heat-dir /home/stack/overcloud-deploy/overcloud/heat-launcher --restore-db",
"(undercloud)USD sudo podman pod ps POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS 958b141609b2 ephemeral-heat Running 2 minutes ago 44447995dbcf 3",
"(undercloud)USD export OS_CLOUD=heat",
"(undercloud)USD openstack stack list +--------------------------------------+------------+---------+-----------------+----------------------+--------------+ | ID | Stack Name | Project | Stack Status | Creation Time | Updated Time | +--------------------------------------+------------+---------+-----------------+----------------------+--------------+ | 761e2a54-c6f9-4e0f-abe6-c8e0ad51a76c | overcloud | admin | CREATE_COMPLETE | 2022-08-29T20:48:37Z | None | +--------------------------------------+------------+---------+-----------------+----------------------+--------------+",
"(undercloud)USD openstack baremetal node list",
"(undercloud)USD openstack baremetal node maintenance set <node_uuid>",
"(undercloud)USD source ~/overcloudrc",
"(overcloud)USD for AGENT in USD(openstack network agent list --host <scaled_down_node> -c ID -f value) ; do openstack network agent delete USDAGENT ; done",
"(overcloud)USD openstack compute service list",
"(overcloud)USD openstack compute service set <hostname> nova-compute --disable",
"(overcloud)USD openstack resource provider list (overcloud)USD openstack resource provider delete <uuid>",
"(overcloud)USD source ~/stackrc",
"(undercloud)USD openstack overcloud node delete --stack <overcloud> <node>",
"Ansible failed, check log at `~/ansible.log` WARNING: Scale-down configuration error. Manual cleanup of some actions may be necessary. Continuing with node removal.",
"(undercloud)USD openstack stack list",
"sudo pcs resource delete <scaled_down_node> sudo cibadmin -o nodes --delete --xml-text '<node id=\"<scaled_down_node>\"/>' sudo cibadmin -o fencing-topology --delete --xml-text '<fencing-level target=\"<scaled_down_node>\"/>' sudo cibadmin -o status --delete --xml-text '<node_state id=\"<scaled_down_node>\"/>' sudo cibadmin -o status --delete-all --xml-text '<node id=\"<scaled_down_node>\"/>' --force",
"sudo pcs stonith delete <device-name>",
"parameter_defaults: ComputeCount: 3",
"parameter_defaults: <RoleName>RemovalPoliciesMode: update <RoleName>RemovalPolicies: [{'resource_list': []}]",
"(undercloud)USD openstack baremetal node maintenance unset <node_uuid>",
"(undercloud)USD openstack baremetal node list",
"parameter_defaults: <RoleName>RemovalPoliciesMode: append",
"openstack overcloud deploy --skip-deploy-identifier",
"parameter_defaults: DeploymentServerBlacklist: - overcloud-compute-0 - overcloud-compute-1 - overcloud-compute-2",
"source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e server-blacklist.yaml [OTHER OPTIONS]",
"parameter_defaults: DeploymentServerBlacklist: []"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_scaling-overcloud-nodes |
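The chapter above recommends the --disable-reason option when disabling the Compute service on a node that is being removed, but the command listing does not show it. The following hedged example sketches how it might look; the hostname and the reason text are placeholders, not values from the source.

# Disable the Compute service on the node being removed and record why (illustrative values)
openstack compute service set overcloud-compute-1.localdomain nova-compute --disable --disable-reason "Removed for hardware replacement"
# Confirm that the service is reported as disabled, including the recorded reason
openstack compute service list --long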
Chapter 11. CertSecretSource schema reference | Chapter 11. CertSecretSource schema reference Used in: ClientTls , KafkaAuthorizationKeycloak , KafkaAuthorizationOpa , KafkaClientAuthenticationOAuth , KafkaListenerAuthenticationOAuth Property Property type Description certificate string The name of the file certificate in the Secret. secretName string The name of the Secret containing the certificate. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-CertSecretSource-reference |
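A minimal sketch of how a CertSecretSource entry is typically written, here inside a tls.trustedCertificates list such as the one used by ClientTls; the Secret name my-cluster-cluster-ca-cert and the key ca.crt are assumptions for illustration, not values from this reference.

tls:
  trustedCertificates:
    - secretName: my-cluster-cluster-ca-cert   # assumed Secret that stores the CA certificate
      certificate: ca.crt                      # file name (key) of the certificate inside the Secret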
Chapter 10. Changing the MTU for the cluster network | Chapter 10. Changing the MTU for the cluster network As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change. You can change the MTU only for clusters using the OVN-Kubernetes or OpenShift SDN cluster network providers. 10.1. About the cluster MTU During installation the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You do not normally need to override the detected MTU. You might want to change the MTU of the cluster network for several reasons: The MTU detected during cluster installation is not correct for your infrastructure Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance You can change the cluster MTU for only the OVN-Kubernetes and OpenShift SDN cluster network providers. 10.1.1. Service interruption considerations When you initiate an MTU change on your cluster the following effects might impact service availability: At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart. Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change. 10.1.2. MTU value selection When planning your MTU migration there are two related but distinct MTU values to consider. Hardware MTU : This MTU value is set based on the specifics of your network infrastructure. Cluster network MTU : This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your cluster network provider: OVN-Kubernetes : 100 bytes OpenShift SDN : 50 bytes If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your cluster network provider from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . 10.1.3. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 10.1. Live migration of the cluster MTU User-initiated steps OpenShift Container Platform activity Set the following values in the Cluster Network Operator configuration: spec.migration.mtu.machine.to spec.migration.mtu.network.from spec.migration.mtu.network.to Cluster Network Operator (CNO) : Confirms that each field is set to a valid value. The mtu.machine.to must be set to either the new hardware MTU or to the current hardware MTU if the MTU for the hardware is not changing. This value is transient and is used as part of the migration process. Separately, if you specify a hardware MTU that is different from your existing hardware MTU value, you must manually configure the MTU to persist by other means, such as with a machine config, DHCP setting, or a Linux kernel command line. The mtu.network.from field must equal the network.status.clusterNetworkMTU field, which is the current MTU of the cluster network. 
The mtu.network.to field must be set to the target cluster network MTU and must be lower than the hardware MTU to allow for the overlay overhead of the cluster network provider. For OVN-Kubernetes, the overhead is 100 bytes and for OpenShift SDN the overhead is 50 bytes. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster. Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use a variety of methods to accomplish this, including: Deploying a new NetworkManager connection profile with the MTU change Changing the MTU through a DHCP server setting Changing the MTU through boot parameters N/A Set the mtu value in the CNO configuration for the cluster network provider and set spec.migration to null . Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster with the new MTU configuration. 10.2. Changing the cluster MTU As a cluster administrator, you can change the maximum transmission unit (MTU) for your cluster. The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update rolls out. The following procedure describes how to change the cluster MTU by using either machine configs, DHCP, or an ISO. If you use the DHCP or ISO approach, you must refer to configuration artifacts that you kept after installing your cluster to complete the procedure. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You identified the target MTU for your cluster. The correct MTU varies depending on the cluster network provider that your cluster uses: OVN-Kubernetes : The cluster MTU must be set to 100 less than the lowest hardware MTU value in your cluster. OpenShift SDN : The cluster MTU must be set to 50 less than the lowest hardware MTU value in your cluster. Procedure To increase or decrease the MTU for the cluster network complete the following procedure. To obtain the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Example output ... Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OpenShiftSDN Service Network: 10.217.4.0/23 ... Prepare your configuration for the hardware MTU: If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration: dhcp-option-force=26,<mtu> where: <mtu> Specifies the hardware MTU for the DHCP server to advertise. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for OpenShift Container Platform if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified. Find the primary network interface: If you are using the OpenShift SDN cluster network provider, enter the following command: USD oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }' where: <node_name> Specifies the name of a node in your cluster. 
If you are using the OVN-Kubernetes cluster network provider, enter the following command: USD oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 where: <node_name> Specifies the name of a node in your cluster. Create the following NetworkManager configuration in the <interface>-mtu.conf file: Example NetworkManager connection configuration [connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu> where: <mtu> Specifies the new hardware MTU value. <interface> Specifies the primary network interface name. Create two MachineConfig objects, one for the control plane nodes and another for the worker nodes in your cluster: Create the following Butane config in the control-plane-interface.bu file: variant: openshift version: 4.11.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the step. Create the following Butane config in the worker-interface.bu file: variant: openshift version: 4.11.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the step. Create MachineConfig objects from the Butane configs by running the following command: USD for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change. USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' where: <overlay_from> Specifies the current cluster network MTU value. <overlay_to> Specifies the target MTU for the cluster network. This value is set relative to the value for <machine_to> and for OVN-Kubernetes must be 100 less and for OpenShift SDN must be 50 less. <machine_to> Specifies the MTU for the primary network interface on the underlying host network. Example that increases the cluster MTU USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }' As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. 
Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/mtu-migration.sh Update the underlying network interface MTU value: If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster. USD for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure. As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep path: where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. If the machine config is successfully deployed, the output contains the /etc/NetworkManager/system-connections/<connection_name> file path. The machine config must not contain the ExecStart=/usr/local/bin/mtu-migration.sh line. 
To finalize the MTU migration, enter one of the following commands: If you are using the OVN-Kubernetes cluster network provider: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}' where: <mtu> Specifies the new cluster network MTU that you specified with <overlay_to> . If you are using the OpenShift SDN cluster network provider: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "openshiftSDNConfig": { "mtu": <mtu> }}}}' where: <mtu> Specifies the new cluster network MTU that you specified with <overlay_to> . After finalizing the MTU migration, each MCP node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Verification You can verify that a node in your cluster uses an MTU that you specified in the procedure. To get the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Get the current MTU for the primary network interface of a node. To list the nodes in your cluster, enter the following command: USD oc get nodes To obtain the current MTU setting for the primary network interface on a node, enter the following command: USD oc debug node/<node> -- chroot /host ip address show <interface> where: <node> Specifies a node from the output from the step. <interface> Specifies the primary network interface name for the node. Example output ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051 10.3. Additional resources Using advanced networking options for PXE and ISO installations Manually creating NetworkManager profiles in key file format Configuring a dynamic Ethernet connection using nmcli | [
"oc describe network.config cluster",
"Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OpenShiftSDN Service Network: 10.217.4.0/23",
"dhcp-option-force=26,<mtu>",
"oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }'",
"oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0",
"[connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu>",
"variant: openshift version: 4.11.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"variant: openshift version: 4.11.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 9000 } , \"machine\": { \"to\" : 9100} } } } }'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/mtu-migration.sh",
"for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep path:",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'",
"oc get mcp",
"oc describe network.config cluster",
"oc get nodes",
"oc debug node/<node> -- chroot /host ip address show <interface>",
"ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/changing-cluster-network-mtu |
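To extend the per-node verification shown above to the whole cluster at once, a short loop over the node list can print each node's interface MTU. This is a sketch to adapt, and it assumes the primary interface (ens3 in the example output) has the same name on every node:
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do
    echo "== ${node} =="
    oc debug node/"${node}" -- chroot /host ip -o link show ens3 | grep -o 'mtu [0-9]*'
done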
Chapter 5. Reviewing the pre-conversion analysis report using Insights | Chapter 5. Reviewing the pre-conversion analysis report using Insights To assess whether your CentOS Linux systems can be converted to RHEL, run the Pre-conversion analysis for converting to RHEL task. The pre-conversion analysis generates a report that summarizes potential problems and suggests recommended solutions. The report also helps you decide whether it is possible or advisable to proceed with the conversion to RHEL. Prerequisites You have completed the steps listed in Preparing for a RHEL conversion using Insights . Procedure Log in to the Red Hat Hybrid Cloud Console and go to Red Hat Enterprise Linux > Insights for RHEL > Automation toolkit > Tasks . Locate the Pre-conversion analysis for converting to RHEL task and click Select systems . Alternatively, log in to the Red Hat Hybrid Cloud Console , go to Red Hat Enterprise Linux > Insights for RHEL > Inventory > Systems , choose a system you want to convert and click Convert system to RHEL label. In the Task name field type the name of the task and select the CentOS Linux 7 systems that you want to analyze for conversion. Click . Configure the pre-conversion analysis task with the following settings: Do not use the ELS subscription Choose this option if you plan to upgrade your RHEL system to version 8 or higher. Allow kernel modules outside of RHEL repositories on the system Choose this option to allow the pre-conversion analysis to ignore kernel modules that are not part of RHEL repositories. Allow outdated kernel on the system Choose this option to allow the pre-conversion analysis to ignore when your system is booted from an outdated kernel. Allow outdated packages on the system Choose this option to allow the pre-conversion analysis to ignore all outdated packages on the system. Click Run task . The pre-conversion analysis can take up to an hour to complete. The pre-conversion analysis utility generates a new report in the Activity tab. Select the report to view a summary of issues found in each system. You can also review further by selecting a system to view each issue and, when applicable, a potential remediation in detail. Each issue is assigned a severity level: Inhibitor Would cause the conversion to fail because it is very likely to result in a deteriorated system state. You must resolve this issue before converting. Overridable inhibitor Would cause the conversion to fail because it is very likely to result in a deteriorated system state. You must resolve or manually override this issue before converting. For more details about which inhibitors you can override, see the step about configuring the settings of the pre-conversion analysis in this procedure. Skipped Could not run this test because of a prerequisite test failing. Could cause the conversion to fail. Warning Would not cause the conversion to fail. System and application issues might occur after the conversion. Info Informational with no expected impact to the system or applications. After reviewing the report and resolving all reported issues, click Run task again to rerun the analysis and confirm that there are no issues outstanding. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/converting_from_a_linux_distribution_to_rhel_using_the_convert2rhel_utility_in_red_hat_insights/reviewing-the-pre-conversion-analysis-report-using-insights_converting-from-a-linux-distribution-to-rhel-in-insights |
Chapter 1. Getting started overview Use Red Hat AMQ Streams to create and set up Kafka clusters, then connect your applications and services to those clusters. This guide describes how to install and start using AMQ Streams on OpenShift Container Platform. You can install the AMQ Streams operator directly from the OperatorHub in the OpenShift web console. The AMQ Streams operator understands how to install and manage Kafka components. Installing from the OperatorHub provides a standard configuration of AMQ Streams that allows you to take advantage of automatic updates. When the AMQ Streams operator is installed, it provides the resources to install instances of Kafka components. After installing a Kafka cluster, you can start producing and consuming messages. Note If you require more flexibility with your deployment, you can use the installation artifacts provided with AMQ Streams. For more information on using the installation artifacts, see Deploying and Upgrading AMQ Streams on OpenShift . 1.1. Prerequisites The following prerequisites are required for getting started with AMQ Streams. You have a Red Hat account. JDK 11 or later is installed. An OpenShift 4.12 or later cluster is available. The OpenShift oc command-line tool is installed and configured to connect to the running cluster. The steps to get started are based on using the OperatorHub in the OpenShift web console, but you'll also use the OpenShift oc CLI tool to perform certain operations. You'll need to connect to your OpenShift cluster using the oc tool. You can install the oc CLI tool from the web console by clicking the '?' help menu, then Command Line Tools . You can copy the required oc login details from the web console by clicking your profile name, then Copy login command . 1.2. Additional resources Strimzi Overview Deploying and Upgrading AMQ Streams on OpenShift | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/getting_started_with_amq_streams_on_openshift/getting_started_overview
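The prerequisites above assume the oc tool is already connected to your cluster. For illustration only, the login command you copy from the web console typically has the following shape; the token and server URL here are placeholders, not real values:
oc login --token=<api_token> --server=https://api.<cluster_domain>:6443
oc whoami    # confirm which user the CLI is now authenticated as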
Chapter 15. The Football Quickstart Endpoint Examples The Football application is a simple example to illustrate the use of Red Hat JBoss Data Grid endpoints, namely Hot Rod, REST, and Memcached. Each example shows one of these protocols used to connect to JBoss Data Grid to remotely store, retrieve, and remove data from caches. Each application is a variation of a simple football team manager utility as a console application. Features The following features are available with the example Football Manager application: Add a team Add players Remove all entities (teams and players) Listing all teams and players Location JBoss Data Grid's Football quickstart can be found at the following locations: jboss-datagrid-{VERSION}-quickstarts/rest-endpoint jboss-datagrid-{VERSION}-quickstarts/hotrod-endpoint jboss-datagrid-{VERSION}-quickstarts/memcached-endpoint 15.1. Quickstart Prerequisites The prerequisites for this quickstart are as follows: Java 6.0 (Java SDK 1.6) or better JBoss Enterprise Application Platform 6.x or JBoss Enterprise Web Server 2.x Maven 3.0 or better Configure the Maven Repository. For details, see Chapter 3, Install and Use the Maven Repositories | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-the_football_quickstart_endpoint_examples
Chapter 39. JMS - IBM MQ Kamelet Sink | Chapter 39. JMS - IBM MQ Kamelet Sink A Kamelet that can produce events to an IBM MQ message queue using JMS. 39.1. Configuration Options The following table summarizes the configuration options available for the jms-ibm-mq-sink Kamelet: Property Name Description Type Default Example channel * IBM MQ Channel Name of the IBM MQ Channel string destinationName * Destination Name The destination name string password * Password Password to authenticate to IBM MQ server string queueManager * IBM MQ Queue Manager Name of the IBM MQ Queue Manager string serverName * IBM MQ Server name IBM MQ Server name or address string serverPort * IBM MQ Server Port IBM MQ Server port integer 1414 username * Username Username to authenticate to IBM MQ server string clientId IBM MQ Client ID Name of the IBM MQ Client ID string destinationType Destination Type The JMS destination type (queue or topic) string "queue" Note Fields marked with an asterisk (*) are mandatory. 39.2. Dependencies At runtime, the jms-ibm-mq-sink Kamelet relies upon the presence of the following dependencies: camel:jms camel:kamelet mvn:com.ibm.mq:com.ibm.mq.allclient:9.2.5.0 39.3. Usage This section describes how you can use the jms-ibm-mq-sink . 39.3.1. Knative Sink You can use the jms-ibm-mq-sink Kamelet as a Knative sink by binding it to a Knative object. jms-ibm-mq-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: serverName: "10.103.41.245" serverPort: "1414" destinationType: "queue" destinationName: "DEV.QUEUE.1" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd 39.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 39.3.1.2. Procedure for using the cluster CLI Save the jms-ibm-mq-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jms-ibm-mq-sink-binding.yaml 39.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jms-ibm-mq-sink-binding timer-source?message="Hello IBM MQ!" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' This command creates the KameletBinding in the current namespace on the cluster. 39.3.2. Kafka Sink You can use the jms-ibm-mq-sink Kamelet as a Kafka sink by binding it to a Kafka topic. jms-ibm-mq-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-sink properties: serverName: "10.103.41.245" serverPort: "1414" destinationType: "queue" destinationName: "DEV.QUEUE.1" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd 39.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. 
Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 39.3.2.2. Procedure for using the cluster CLI Save the jms-ibm-mq-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jms-ibm-mq-sink-binding.yaml 39.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jms-ibm-mq-sink-binding timer-source?message="Hello IBM MQ!" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' This command creates the KameletBinding in the current namespace on the cluster. 39.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jms-ibm-mq-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: serverName: \"10.103.41.245\" serverPort: \"1414\" destinationType: \"queue\" destinationName: \"DEV.QUEUE.1\" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd",
"apply -f jms-ibm-mq-sink-binding.yaml",
"kamel bind --name jms-ibm-mq-sink-binding timer-source?message=\"Hello IBM MQ!\" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd'",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-sink properties: serverName: \"10.103.41.245\" serverPort: \"1414\" destinationType: \"queue\" destinationName: \"DEV.QUEUE.1\" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd",
"apply -f jms-ibm-mq-sink-binding.yaml",
"kamel bind --name jms-ibm-mq-sink-binding timer-source?message=\"Hello IBM MQ!\" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd'"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/jms-ibm-mq-sink |
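After applying either binding above, it can help to confirm that it was accepted by the cluster. These checks are optional and not part of the quoted procedure; they assume the binding name used in the examples and that the Camel K operator is installed:
oc get kameletbinding jms-ibm-mq-sink-binding -o yaml    # inspect the status conditions reported by the operator
oc get integration                                       # the binding is materialized as a Camel K Integration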
4.101. iscsi-initiator-utils | 4.101. iscsi-initiator-utils 4.101.1. RHBA-2011:1722 - iscsi-initiator-utils bug fix and enhancement update An updated iscsi-initiator-utils package that fixes one bug and adds various enhancements is now available for Red Hat Enterprise Linux 6. The iscsi package provides the server daemon for the Internet Small Computer System Interface (iSCSI) protocol, as well as the utility programs used to manage it. iSCSI is a protocol for distributed disk access using SCSI commands sent over Internet Protocol networks. Bug Fix BZ# 715434 The iscsiadm utility displayed the discovery2 mode in the help output but did not accept the mode as a valid one. This entry has been replaced with the valid discoverydb mode entry as displayed in the ISCSIADM(8) manual page. Enhancements BZ# 602959 The brcm_iscsiuio daemon did not rotate its log file, /var/log/brcm-iscsi.log. As a consequence, the log file may have filled up the available disk space. The brcm_iscsiuio daemon now supports log rotation, which fixes the problem. BZ# 696808 The brcm_iscsiuio daemon has been updated to provide enhanced support for IPv6 (Internet Protocol version 6), VLAN (Virtual Local Area Network), and Broadcom iSCSI Offload Engine Technology. The daemon has been renamed to iscsiuio with this update. BZ# 749051 The bnx2i driver can now be used for install or boot. To install or boot to targets using this driver, turn on the HBA (Host Bus Adapter) mode in the card's BIOS boot setup screen. In addition, the iSCSI tools can now set up networking and manage sessions for QLogic iSCSI adapters that use the qla4xxx driver. For more information, see section 5.1.2 of the README file which is located in the /usr/share/doc/iscsi-initiator-utils-6.2.0.872 directory. Users are advised to upgrade to this updated iscsi-initiator-utils package, which fixes this bug and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/iscsi-initiator-utils |
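For reference, the discoverydb mode mentioned in the iscsiadm fix above is typically exercised as shown below; the portal address is a placeholder and the full option list is described in the ISCSIADM(8) manual page:
iscsiadm -m discoverydb -t sendtargets -p 192.168.0.10:3260 --discover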
7.120. libsoup | 7.120. libsoup 7.120.1. RHBA-2013:0313 - libsoup bug fix update Updated libsoup packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The libsoup packages provide an HTTP client and server library for GNOME. Bug Fixes BZ# 657622 Prior to this update, the clock-applet did not handle canceled requests during a DNS lookup correctly and accessed already freed memory. As a consequence, the weather view of the clock-applet could, under certain circumstances, abort with a segmentation fault when updating the weather if the hostname of the weather server needed more than 30 seconds, for example due to network problems. This update modifies the underlying code to allow requests that take too long to be canceled. BZ#746587 Prior to this update, the weather view of the clock-applet tried to connect to the weather server indefinitely as fast as it could if the weather server (or an HTTP proxy) closed the connection without responding. This update modifies the underlying code to retry a request only if the server unexpectedly closes a previously-used connection, not a new connection. Now, libsoup returns a "Connection terminated unexpectedly" error, so the clock-applet does not update the weather display, and tries again later. All users of libsoup are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libsoup |
Chapter 13. Determining whether manual activation of the subscriptions service is necessary The subscriptions service must be activated to begin tracking usage for the Red Hat account for your organization. The activation process can be automatic or manual. Procedure Review the following tasks that activate the subscriptions service automatically. If someone in your organization has completed one or more of these tasks, manual activation of the subscriptions service is not needed. Purchasing a pay-as-you-go On-Demand subscription for Red Hat OpenShift Container Platform or Red Hat OpenShift Dedicated through Red Hat Marketplace. As the pay-as-you-go clusters begin reporting usage through OpenShift Cluster Manager and the monitoring stack, the subscriptions service activates automatically for the organization. Purchasing a Red Hat OpenShift pay-as-you-go On-Demand subscription through a cloud provider marketplace, such as Red Hat Marketplace or Amazon Web Services (AWS). Examples of these types of products include Red Hat OpenShift AI or Red Hat Advanced Cluster Security for Kubernetes. As these products begin reporting usage through the monitoring stack, the subscriptions service activates automatically for the organization. Creating an Amazon Web Services integration through the integrations service in the Hybrid Cloud Console with the RHEL management bundle selected. The process of creating the integration also activates the subscriptions service. Creating a Microsoft Azure integration through the integrations service in the Hybrid Cloud Console with the RHEL management bundle selected. The process of creating the integration also activates the subscriptions service. Note The integrations service was formerly known as the Sources service in the Hybrid Cloud Console. These tasks, especially purchasing tasks, might be performed by a user that has the Organization Administrator (org admin) role in the Red Hat organization for your company. The integration creation tasks must be performed by a user with the Cloud administrator role in the role-based access control (RBAC) system for the Hybrid Cloud Console. | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/proc-determining-whether-manual-activation-subscriptionwatch-necessary_assembly-activating-opening-subscriptionwatch-ctxt
Chapter 14. Managing container storage interface (CSI) component placements | Chapter 14. Managing container storage interface (CSI) component placements Each cluster consists of a number of dedicated nodes such as infra and storage nodes. However, an infra node with a custom taint will not be able to use OpenShift Data Foundation Persistent Volume Claims (PVCs) on the node. So, if you want to use such nodes, you can set tolerations to bring up csi-plugins on the nodes. Procedure Edit the configmap to add the toleration for the custom taint. Remember to save before exiting the editor. Display the configmap to check the added toleration. Example output of the added toleration for the taint, nodetype=infra:NoSchedule : Note Ensure that all non-string values in the Tolerations value field has double quotation marks. For example, the values true which is of type boolean, and 1 which is of type int must be input as "true" and "1". Restart the rook-ceph-operator if the csi-cephfsplugin- * and csi-rbdplugin- * pods fail to come up on their own on the infra nodes. Example : Verification step Verify that the csi-cephfsplugin- * and csi-rbdplugin- * pods are running on the infra nodes. | [
"oc edit configmap rook-ceph-operator-config -n openshift-storage",
"oc get configmap rook-ceph-operator-config -n openshift-storage -o yaml",
"apiVersion: v1 data: [...] CSI_PLUGIN_TOLERATIONS: | - key: nodetype operator: Equal value: infra effect: NoSchedule - key: node.ocs.openshift.io/storage operator: Equal value: \"true\" effect: NoSchedule [...] kind: ConfigMap metadata: [...]",
"oc delete -n openshift-storage pod <name of the rook_ceph_operator pod>",
"oc delete -n openshift-storage pod rook-ceph-operator-5446f9b95b-jrn2j pod \"rook-ceph-operator-5446f9b95b-jrn2j\" deleted"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_and_allocating_storage_resources/managing-container-storage-interface-component-placements_rhodf |
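The final verification step above can be made concrete with a node-aware pod listing. This is an illustrative command that assumes the default openshift-storage namespace used throughout the procedure; the NODE column in the wide output shows whether the plugin pods landed on the infra nodes:
oc get pods -n openshift-storage -o wide | grep -E 'csi-(cephfsplugin|rbdplugin)'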
Chapter 9. TokenRequest [authentication.k8s.io/v1] | Chapter 9. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TokenRequestSpec contains client provided parameters of a token request. status object TokenRequestStatus is the result of a token request. 9.1.1. .spec Description TokenRequestSpec contains client provided parameters of a token request. Type object Required audiences Property Type Description audiences array (string) Audiences are the intendend audiences of the token. A recipient of a token must identify themself with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences. boundObjectRef object BoundObjectReference is a reference to an object that a token is bound to. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the request. The token issuer may return a token with a different validity duration so a client needs to check the 'expiration' field in a response. 9.1.2. .spec.boundObjectRef Description BoundObjectReference is a reference to an object that a token is bound to. Type object Property Type Description apiVersion string API version of the referent. kind string Kind of the referent. Valid kinds are 'Pod' and 'Secret'. name string Name of the referent. uid string UID of the referent. 9.1.3. .status Description TokenRequestStatus is the result of a token request. Type object Required token expirationTimestamp Property Type Description expirationTimestamp Time ExpirationTimestamp is the time of expiration of the returned token. token string Token is the opaque bearer token. 9.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token POST : create token of a ServiceAccount 9.2.1. /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token Table 9.1. Global path parameters Parameter Type Description name string name of the TokenRequest Table 9.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create token of a ServiceAccount Table 9.3. Body parameters Parameter Type Description body TokenRequest schema Table 9.4. HTTP responses HTTP code Reponse body 200 - OK TokenRequest schema 201 - Created TokenRequest schema 202 - Accepted TokenRequest schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authorization_apis/tokenrequest-authentication-k8s-io-v1 |
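For illustration, the endpoint documented above is the same one the oc client drives when it mints a short-lived service account token; the names below are placeholders rather than values taken from the original text:
oc create token <service_account_name> -n <namespace> --duration=1h    # POSTs a TokenRequest to the serviceaccounts/<service_account_name>/token subresource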
32.9. Making the Kickstart File Available | 32.9. Making the Kickstart File Available A kickstart file must be placed in one of the following locations: On removable media, such as a floppy disk, optical disk, or USB flash drive On a hard drive On a network Normally a kickstart file is copied to the removable media or hard drive, or made available on the network. The network-based approach is most commonly used, as most kickstart installations tend to be performed on networked computers. The following section provides a more in-depth look at where the kickstart file may be placed. 32.9.1. Creating Kickstart Boot Media If you want to modify boot media provided by Red Hat to include a Kickstart file and automatically load it during boot, follow the procedure below. Note that this procedure will only work on AMD and Intel systems ( x86 and x86_64 ). Additionally, this procedure requires the genisoimage and isomd5sum packages; these packages are available on Red Hat Enterprise Linux, but if you use a different system, you may need to adjust the commands used. Note Diskette-based booting is no longer supported in Red Hat Enterprise Linux. Installations must use CD-ROM or flash memory products for booting. However, the kickstart file may still reside on a diskette's top-level directory, and must be named ks.cfg . Separate boot media will be required. Procedure 32.1. Including a Kickstart File on Boot Media Before you start the procedure, make sure you have downloaded a boot ISO image (boot.iso or binary DVD) as described in Chapter 1, Obtaining Red Hat Enterprise Linux , and that you have created a working Kickstart file. Mount the ISO image you have downloaded: Extract the ISO image into a working directory somewhere in your system: Unmount the mounted image: The contents of the image is now placed in the iso/ directory in your working directory. Add your Kickstart file ( ks.cfg ) into the iso/ directory: Open the isolinux/isolinux.cfg configuration file inside the iso/ directory. This file determines all the menu options which appear in the boot menu. A single menu entry is defined as the following: Add the ks= boot option to the line beginning with append . The exact syntax depends on how you plan to boot the ISO image; for example, if you plan on booting from a CD or DVD, use ks=cdrom:/ks.cfg . A list of possible sources and the syntax used to configure them is available in Section 28.4, "Automating the Installation with Kickstart" . Use genisoimage in the iso/ directory to create a new bootable ISO image with your changes included: This comand will create a file named NEWISO.iso in your working directory (one directory above the iso/ directory). Important If you use a disk label to refer to any device in your isolinux.cfg (e.g. ks=hd:LABEL=RHEL-6.9/ks.cfg , make sure that the label matches the label of the new ISO you are creating. Also note that in boot loader configuration, spaces in labels must be replaced with \x20 . Implant a md5 checksum into the new ISO image: After you finish the above procedure, your new image is ready to be turned into boot media. Refer to Chapter 2, Making Media for instructions. To perform a pen-based flash memory kickstart installation, the kickstart file must be named ks.cfg and must be located in the flash memory's top-level directory. The kickstart file should be on a separate flash memory drive to the boot media. 
To start the Kickstart installation, boot the system using the boot media you created, and use the ks= boot option to specify which device contains the USB drive. See Section 28.4, "Automating the Installation with Kickstart" for details about the ks= boot option. See Section 2.2, "Making Minimal Boot Media" for instructions on creating boot USB media using the rhel- variant - version - architecture -boot.iso image file that you can download from the Software & Download Center of the Red Hat customer portal. Note Creation of USB flashdrives for booting is possible, but is heavily dependent on system hardware BIOS settings. Refer to your hardware manufacturer to see if your system supports booting to alternate devices. | [
"mount /path/to/image.iso /mnt/iso",
"cp -pRf /mnt/iso /tmp/workdir",
"umount /mnt/iso",
"cp /path/to/ks.cfg /tmp/workdir/iso",
"label linux menu label ^Install or upgrade an existing system menu default kernel vmlinuz append initrd=initrd.img",
"genisoimage -U -r -v -T -J -joliet-long -V \"RHEL-6.9\" -volset \"RHEL-6.9\" -A \"RHEL-6.9\" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -o ../NEWISO.iso .",
"implantisomd5 ../NEWISO.iso"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-kickstart2-putkickstarthere |
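Before copying ks.cfg onto the boot media as described above, checking it for syntax errors can save a rebuild of the ISO image. This is a convenience check, not part of the documented procedure, and it assumes the pykickstart package, which provides the ksvalidator utility, is installed:
ksvalidator /path/to/ks.cfg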
Chapter 9. Configuring image streams and image registries | Chapter 9. Configuring image streams and image registries You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images than the registry used during installation. For more information, see Using image pull secrets . For information about images and configuring image streams or image registries, see the following documentation: Overview of images Image Registry Operator in OpenShift Container Platform Configuring image registry settings 9.1. Configuring image streams for a disconnected cluster After installing OpenShift Container Platform in a disconnected environment, configure the image streams for the Cluster Samples Operator and the must-gather image stream. 9.1.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: Github, Maven Central, npm, RubyGems, PyPi and others. There might be additional steps to take that allow the cluster samples operators's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. 9.1.2. Using Cluster Samples Operator image streams with alternate or mirrored registries Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io . Note The cli , installer , must-gather , and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure. Important The Cluster Samples Operator must be set to Managed in a disconnected environment. To install the image streams, you have a mirrored registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Create a pull secret for your mirror registry. 
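Before starting the procedure below, the imagestreamtag-to-image config map described earlier can also be dumped wholesale to build a mirroring worklist. This is an illustrative sketch; it assumes every value in the config map's data field is an image reference, which is how the config map is described above:
oc get configmap imagestreamtag-to-image -n openshift-cluster-samples-operator -o json \
    | jq -r '.data | to_entries[] | .value' | sort -u > images-to-mirror.txt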
Procedure Access the images of a specific image stream to mirror, for example: USD oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io Mirror images from registry.redhat.io associated with any image streams you need USD oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest Create the cluster's image configuration object: USD oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config Add the required trusted CAs for the mirror in the cluster's image configuration object: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration: USD oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator Note This is required because the image stream import process does not use the mirror or search mechanism at this time. Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object. Note The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them. Many of the templates in the openshift namespace reference the image streams. So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams. 9.1.3. Preparing your cluster to gather support data Clusters using a restricted network must import the default must-gather image to gather debugging data for Red Hat support. The must-gather image is not imported by default, and clusters on a restricted network do not have access to the internet to pull the latest image from a remote repository. Procedure If you have not added your mirror registry's trusted CA to your cluster's image configuration object as part of the Cluster Samples Operator configuration, perform the following steps: Create the cluster's image configuration object: USD oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config Add the required trusted CAs for the mirror in the cluster's image configuration object: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Import the default must-gather image from your installation payload: USD oc import-image is/must-gather -n openshift When running the oc adm must-gather command, use the --image flag and point to the payload image, as in the following example: USD oc adm must-gather --image=USD(oc adm release info --image-for must-gather) 9.2. Configuring periodic importing of Cluster Sample Operator image stream tags You can ensure that you always have access to the latest versions of the Cluster Sample Operator images by periodically importing the image stream tags when new versions become available. 
Procedure Fetch all the imagestreams in the openshift namespace by running the following command: oc get imagestreams -nopenshift Fetch the tags for every imagestream in the openshift namespace by running the following command: USD oc get is <image-stream-name> -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift For example: USD oc get is ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift Example output 1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12 Schedule periodic importing of images for each tag present in the image stream by running the following command: USD oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift For example: USD oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift USD oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift This command causes OpenShift Container Platform to periodically update this particular image stream tag. This period is a cluster-wide setting set to 15 minutes by default. Verify the scheduling status of the periodic import by running the following command: oc get imagestream <image-stream-name> -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -nopenshift For example: oc get imagestream ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -nopenshift Example output Tag: 1.11 Scheduled: true Tag: 1.12 Scheduled: true | [
"oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io",
"oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest",
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator",
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather --image=USD(oc adm release info --image-for must-gather)",
"get imagestreams -nopenshift",
"oc get is <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift",
"oc get is ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift",
"1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12",
"oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift",
"oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift",
"get imagestream <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift",
"get imagestream ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift",
"Tag: 1.11 Scheduled: true Tag: 1.12 Scheduled: true"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/post-installation_configuration/post-install-image-config |
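To apply the periodic-import pattern above to every image stream in the openshift namespace rather than one tag at a time, a nested loop can issue the same oc tag command. Treat this as a sketch to adapt: it assumes each tag's from.name is an external image reference and blindly schedules every tag it can read, including any you may prefer to leave alone:
for stream in $(oc get imagestreams -n openshift -o jsonpath='{.items[*].metadata.name}'); do
    for entry in $(oc get is "$stream" -n openshift -o jsonpath='{range .spec.tags[*]}{.name}={.from.name}{"\n"}{end}'); do
        tag=${entry%%=*}     # tag name, taken from before the first '='
        image=${entry#*=}    # source image reference, taken from after the first '='
        [ -n "$image" ] && oc tag "$image" "${stream}:${tag}" --scheduled -n openshift
    done
done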
Chapter 1. Understanding OpenShift updates | Chapter 1. Understanding OpenShift updates 1.1. Introduction to OpenShift updates With OpenShift Container Platform 4, you can update an OpenShift Container Platform cluster with a single operation by using the web console or the OpenShift CLI ( oc ). Platform administrators can view new update options either by going to Administration Cluster Settings in the web console or by looking at the output of the oc adm upgrade command. Red Hat hosts a public OpenShift Update Service (OSUS), which serves a graph of update possibilities based on the OpenShift Container Platform release images in the official registry. The graph contains update information for any public OCP release. OpenShift Container Platform clusters are configured to connect to the OSUS by default, and the OSUS responds to clusters with information about known update targets. An update begins when either a cluster administrator or an automatic update controller edits the custom resource (CR) of the Cluster Version Operator (CVO) with a new version. To reconcile the cluster with the newly specified version, the CVO retrieves the target release image from an image registry and begins to apply changes to the cluster. Note Operators previously installed through Operator Lifecycle Manager (OLM) follow a different process for updates. See Updating installed Operators for more information. The target release image contains manifest files for all cluster components that form a specific OCP version. When updating the cluster to a new version, the CVO applies manifests in separate stages called Runlevels. Most, but not all, manifests support one of the cluster Operators. As the CVO applies a manifest to a cluster Operator, the Operator might perform update tasks to reconcile itself with its new specified version. The CVO monitors the state of each applied resource and the states reported by all cluster Operators. The CVO only proceeds with the update when all manifests and cluster Operators in the active Runlevel reach a stable condition. After the CVO updates the entire control plane through this process, the Machine Config Operator (MCO) updates the operating system and configuration of every node in the cluster. 1.1.1. Common questions about update availability There are several factors that affect if and when an update is made available to an OpenShift Container Platform cluster. The following list provides common questions regarding the availability of an update: What are the differences between each of the update channels? A new release is initially added to the candidate channel. After successful final testing, a release on the candidate channel is promoted to the fast channel, an errata is published, and the release is now fully supported. After a delay, a release on the fast channel is finally promoted to the stable channel. This delay represents the only difference between the fast and stable channels. Note For the latest z-stream releases, this delay may generally be a week or two. However, the delay for initial updates to the latest minor version may take much longer, generally 45-90 days. Releases promoted to the stable channel are simultaneously promoted to the eus channel. The primary purpose of the eus channel is to serve as a convenience for clusters performing a Control Plane Only update. Is a release on the stable channel safer or more supported than a release on the fast channel? 
If a regression is identified for a release on a fast channel, it will be resolved and managed to the same extent as if that regression was identified for a release on the stable channel. The only difference between releases on the fast and stable channels is that a release only appears on the stable channel after it has been on the fast channel for some time, which provides more time for new update risks to be discovered. A release that is available on the fast channel always becomes available on the stable channel after this delay. What does it mean if an update has known issues? Red Hat continuously evaluates data from multiple sources to determine whether updates from one version to another have any declared issues. Identified issues are typically documented in the version's release notes. Even if the update path has known issues, customers are still supported if they perform the update. Red Hat does not block users from updating to a certain version. Red Hat may declare conditional update risks, which may or may not apply to a particular cluster. Declared risks provide cluster administrators more context about a supported update. Cluster administrators can still accept the risk and update to that particular target version. What if I see that an update to a particular release is no longer recommended? If Red Hat removes update recommendations from any supported release due to a regression, a superseding update recommendation will be provided to a future version that corrects the regression. There may be a delay while the defect is corrected, tested, and promoted to your selected channel. How long until the z-stream release is made available on the fast and stable channels? While the specific cadence can vary based on a number of factors, new z-stream releases for the latest minor version are typically made available about every week. Older minor versions, which have become more stable over time, may take much longer for new z-stream releases to be made available. Important These are only estimates based on past data about z-stream releases. Red Hat reserves the right to change the release frequency as needed. Any number of issues could cause irregularities and delays in this release cadence. Once a z-stream release is published, it also appears in the fast channel for that minor version. After a delay, the z-stream release may then appear in that minor version's stable channel. Additional resources Understanding update channels and releases 1.1.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images. To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. 
Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue related to the update path, such as incompatibility or availability. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. Reverting or rolling back your cluster to a version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates the number of nodes as specified by the maxUnavailable field on the machine configuration pool at a time. The MCO then applies the new configuration and reboots the machine. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 1.1.3. Understanding cluster Operator condition types The status of cluster Operators includes their condition type, which informs you of the current state of your Operator's health. The following definitions cover a list of some common ClusterOperator condition types. Operators that have additional condition types and use Operator-specific language have been omitted. The Cluster Version Operator (CVO) is responsible for collecting the status conditions from cluster Operators so that cluster administrators can better understand the state of the OpenShift Container Platform cluster. Available: The condition type Available indicates that an Operator is functional and available in the cluster. 
If the status is False , at least one part of the operand is non-functional and the condition requires an administrator to intervene. Progressing: The condition type Progressing indicates that an Operator is actively rolling out new code, propagating configuration changes, or otherwise moving from one steady state to another. Operators do not report the condition type Progressing as True when they are reconciling a known state. If the observed cluster state has changed and the Operator is reacting to it, then the status reports back as True , since it is moving from one steady state to another. Degraded: The condition type Degraded indicates that an Operator has a current state that does not match its required state over a period of time. The period of time can vary by component, but a Degraded status represents persistent observation of an Operator's condition. As a result, an Operator does not fluctuate in and out of the Degraded state. There might be a different condition type if the transition from one state to another does not persist over a long enough period to report Degraded . An Operator does not report Degraded during the course of a normal update. An Operator may report Degraded in response to a persistent infrastructure failure that requires eventual administrator intervention. Note This condition type is only an indication that something may need investigation and adjustment. As long as the Operator is available, the Degraded condition does not cause user workload failure or application downtime. Upgradeable: The condition type Upgradeable indicates whether the Operator is safe to update based on the current cluster state. The message field contains a human-readable description of what the administrator needs to do for the cluster to successfully update. The CVO allows updates when this condition is True , Unknown or missing. When the Upgradeable status is False , only minor updates are impacted, and the CVO prevents the cluster from performing impacted updates unless forced. 1.1.4. Understanding cluster version condition types The Cluster Version Operator (CVO) monitors cluster Operators and other components, and is responsible for collecting the status of both the cluster version and its Operators. This status includes the condition type, which informs you of the health and current state of the OpenShift Container Platform cluster. In addition to Available , Progressing , and Upgradeable , there are condition types that affect cluster versions and Operators. Failing: The cluster version condition type Failing indicates that a cluster cannot reach its desired state, is unhealthy, and requires an administrator to intervene. Invalid: The cluster version condition type Invalid indicates that the cluster version has an error that prevents the server from taking action. The CVO only reconciles the current state as long as this condition is set. RetrievedUpdates: The cluster version condition type RetrievedUpdates indicates whether or not available updates have been retrieved from the upstream update server. The condition is Unknown before retrieval, False if the updates either recently failed or could not be retrieved, or True if the availableUpdates field is both recent and accurate. ReleaseAccepted: The cluster version condition type ReleaseAccepted with a True status indicates that the requested release payload was successfully loaded without failure during image verification and precondition checking. 
ImplicitlyEnabledCapabilities: The cluster version condition type ImplicitlyEnabledCapabilities with a True status indicates that there are enabled capabilities that the user is not currently requesting through spec.capabilities . The CVO does not support disabling capabilities if any associated resources were previously managed by the CVO. 1.1.5. Common terms Control plane The control plane , which is composed of control plane machines, manages the OpenShift Container Platform cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. Cluster Version Operator The Cluster Version Operator (CVO) starts the update process for the cluster. It checks with OSUS based on the current cluster version and retrieves the graph which contains available or possible update paths. Machine Config Operator The Machine Config Operator (MCO) is a cluster-level Operator that manages the operating system and machine configurations. Through the MCO, platform administrators can configure and update systemd, CRI-O and Kubelet, the kernel, NetworkManager, and other system features on the worker nodes. OpenShift Update Service The OpenShift Update Service (OSUS) provides over-the-air updates to OpenShift Container Platform, including to Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. Channels Channels declare an update strategy tied to minor versions of OpenShift Container Platform. The OSUS uses this configured strategy to recommend update edges consistent with that strategy. Recommended update edge A recommended update edge is a recommended update between OpenShift Container Platform releases. Whether a given update is recommended can depend on the cluster's configured channel, current version, known bugs, and other information. OSUS communicates the recommended edges to the CVO, which runs in every cluster. Additional resources Machine Config Overview Using the OpenShift Update Service in a disconnected environment Update channels 1.1.6. Additional resources How cluster updates work . 1.2. How cluster updates work The following sections describe each major aspect of the OpenShift Container Platform (OCP) update process in detail. For a general overview of how updates work, see the Introduction to OpenShift updates . 1.2.1. The Cluster Version Operator The Cluster Version Operator (CVO) is the primary component that orchestrates and facilitates the OpenShift Container Platform update process. During installation and standard cluster operation, the CVO is constantly comparing the manifests of managed cluster Operators to in-cluster resources, and reconciling discrepancies to ensure that the actual state of these resources match their desired state. 1.2.1.1. The ClusterVersion object One of the resources that the Cluster Version Operator (CVO) monitors is the ClusterVersion resource. Administrators and OpenShift components can communicate or interact with the CVO through the ClusterVersion object. The desired CVO state is declared through the ClusterVersion object and the current CVO state is reflected in the object's status. Note Do not directly modify the ClusterVersion object. Instead, use interfaces such as the oc CLI or the web console to declare your update target. The CVO continually reconciles the cluster with the target state declared in the spec property of the ClusterVersion resource. 
When the desired release differs from the actual release, that reconciliation updates the cluster. Update availability data The ClusterVersion resource also contains information about updates that are available to the cluster. This includes updates that are available, but not recommended due to a known risk that applies to the cluster. These updates are known as conditional updates. To learn how the CVO maintains this information about available updates in the ClusterVersion resource, see the "Evaluation of update availability" section. You can inspect all available updates with the following command: USD oc adm upgrade --include-not-recommended Note The additional --include-not-recommended parameter includes updates that are available with known issues that apply to the cluster. Example output Cluster version is 4.13.40 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.13, candidate-4.14, eus-4.14, fast-4.13, fast-4.14, stable-4.13, stable-4.14) Recommended updates: VERSION IMAGE 4.14.27 quay.io/openshift-release-dev/ocp-release@sha256:4d30b359aa6600a89ed49ce6a9a5fdab54092bcb821a25480fdfbc47e66af9ec 4.14.26 quay.io/openshift-release-dev/ocp-release@sha256:4fe7d4ccf4d967a309f83118f1a380a656a733d7fcee1dbaf4d51752a6372890 4.14.25 quay.io/openshift-release-dev/ocp-release@sha256:a0ef946ef8ae75aef726af1d9bbaad278559ad8cab2c1ed1088928a0087990b6 4.14.24 quay.io/openshift-release-dev/ocp-release@sha256:0a34eac4b834e67f1bca94493c237e307be2c0eae7b8956d4d8ef1c0c462c7b0 4.14.23 quay.io/openshift-release-dev/ocp-release@sha256:f8465817382128ec7c0bc676174bad0fb43204c353e49c146ddd83a5b3d58d92 4.13.42 quay.io/openshift-release-dev/ocp-release@sha256:dcf5c3ad7384f8bee3c275da8f886b0bc9aea7611d166d695d0cf0fff40a0b55 4.13.41 quay.io/openshift-release-dev/ocp-release@sha256:dbb8aa0cf53dc5ac663514e259ad2768d8c82fd1fe7181a4cfb484e3ffdbd3ba Updates with known issues: Version: 4.14.22 Image: quay.io/openshift-release-dev/ocp-release@sha256:7093fa606debe63820671cc92a1384e14d0b70058d4b4719d666571e1fc62190 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 18.061ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 Version: 4.14.21 Image: quay.io/openshift-release-dev/ocp-release@sha256:6e3fba19a1453e61f8846c6b0ad3abf41436a3550092cbfd364ad4ce194582b7 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 33.991ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. 
https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 The oc adm upgrade command queries the ClusterVersion resource for information about available updates and presents it in a human-readable format. One way to directly inspect the underlying availability data created by the CVO is by querying the ClusterVersion resource with the following command: USD oc get clusterversion version -o json | jq '.status.availableUpdates' Example output [ { "channels": [ "candidate-4.11", "candidate-4.12", "fast-4.11", "fast-4.12" ], "image": "quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775", "url": "https://access.redhat.com/errata/RHBA-2023:3213", "version": "4.11.41" }, ... ] A similar command can be used to check conditional updates: USD oc get clusterversion version -o json | jq '.status.conditionalUpdates' Example output [ { "conditions": [ { "lastTransitionTime": "2023-05-30T16:28:59Z", "message": "The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136", "reason": "PatchesOlderRelease", "status": "False", "type": "Recommended" } ], "release": { "channels": [...], "image": "quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d", "url": "https://access.redhat.com/errata/RHBA-2023:1733", "version": "4.11.36" }, "risks": [...] }, ... ] 1.2.1.2. Evaluation of update availability The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update possibilities. This data is based on the cluster's subscribed channel. The CVO then saves information about update recommendations into either the availableUpdates or conditionalUpdates field of its ClusterVersion resource. The CVO periodically checks the conditional updates for update risks. These risks are conveyed through the data served by the OSUS, which contains information for each version about known issues that might affect a cluster updated to that version. Most risks are limited to clusters with specific characteristics, such as clusters with a certain size or clusters that are deployed in a particular cloud platform. The CVO continuously evaluates its cluster characteristics against the conditional risk information for each conditional update. If the CVO finds that the cluster matches the criteria, the CVO stores this information in the conditionalUpdates field of its ClusterVersion resource. If the CVO finds that the cluster does not match the risks of an update, or that there are no risks associated with the update, it stores the target version in the availableUpdates field of its ClusterVersion resource. The user interface, either the web console or the OpenShift CLI ( oc ), presents this information in sectioned headings to the administrator. 
Each known issue associated with the update path contains a link to further resources about the risk so that the administrator can make an informed decision about the update. Additional resources Update recommendation removals and Conditional Updates 1.2.2. Release images A release image is the delivery mechanism for a specific OpenShift Container Platform (OCP) version. It contains the release metadata, a Cluster Version Operator (CVO) binary matching the release version, every manifest needed to deploy individual OpenShift cluster Operators, and a list of SHA digest-versioned references to all container images that make up this OpenShift version. You can inspect the content of a specific release image by running the following command: USD oc adm release extract <release image> Example output USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z USD ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 ... 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata 1 Manifest for ClusterResourceQuota CRD, to be applied on Runlevel 03 2 Manifest for PrometheusRoleBinding resource for the service-ca-operator , to be applied on Runlevel 90 3 List of SHA digest-versioned references to all required images 1.2.3. Update process workflow The following steps represent a detailed workflow of the OpenShift Container Platform (OCP) update process: The target version is stored in the spec.desiredUpdate.version field of the ClusterVersion resource, which may be managed through the web console or the CLI. The Cluster Version Operator (CVO) detects that the desiredUpdate in the ClusterVersion resource differs from the current cluster version. Using graph data from the OpenShift Update Service, the CVO resolves the desired cluster version to a pull spec for the release image. The CVO validates the integrity and authenticity of the release image. Red Hat publishes cryptographically-signed statements about published release images at predefined locations by using image SHA digests as unique and immutable release image identifiers. The CVO utilizes a list of built-in public keys to validate the presence and signatures of the statement matching the checked release image. The CVO creates a job named version-USDversion-USDhash in the openshift-cluster-version namespace. This job uses containers that are executing the release image, so the cluster downloads the image through the container runtime. The job then extracts the manifests and metadata from the release image to a shared volume that is accessible to the CVO. The CVO validates the extracted manifests and metadata. The CVO checks some preconditions to ensure that no problematic condition is detected in the cluster. Certain conditions can prevent updates from proceeding. These conditions are either determined by the CVO itself, or reported by individual cluster Operators that detect some details about the cluster that the Operator considers problematic for the update. 
The CVO records the accepted release in status.desired and creates a status.history entry about the new update. The CVO begins reconciling the manifests from the release image. Cluster Operators are updated in separate stages called Runlevels, and the CVO ensures that all Operators in a Runlevel finish updating before it proceeds to the next level. Manifests for the CVO itself are applied early in the process. When the CVO deployment is applied, the current CVO pod stops, and a CVO pod that uses the new version starts. The new CVO proceeds to reconcile the remaining manifests. The update proceeds until the entire control plane is updated to the new version. Individual cluster Operators might perform update tasks on their domain of the cluster, and while they do so, they report their state through the Progressing=True condition. The Machine Config Operator (MCO) manifests are applied towards the end of the process. The updated MCO then begins updating the system configuration and operating system of every node. Each node might be drained, updated, and rebooted before it starts to accept workloads again. The cluster reports as updated after the control plane update is finished, usually before all nodes are updated. After the update, the CVO maintains all cluster resources to match the state delivered in the release image. 1.2.4. Understanding how manifests are applied during an update Some manifests supplied in a release image must be applied in a certain order because of the dependencies between them. For example, the CustomResourceDefinition resource must be created before the matching custom resources. Additionally, there is a logical order in which the individual cluster Operators must be updated to minimize disruption in the cluster. The Cluster Version Operator (CVO) implements this logical order through the concept of Runlevels. These dependencies are encoded in the filenames of the manifests in the release image: 0000_<runlevel>_<component>_<manifest-name>.yaml For example: 0000_03_config-operator_01_proxy.crd.yaml The CVO internally builds a dependency graph for the manifests, where the CVO obeys the following rules: During an update, manifests at a lower Runlevel are applied before those at a higher Runlevel. Within one Runlevel, manifests for different components can be applied in parallel. Within one Runlevel, manifests for a single component are applied in lexicographic order. The CVO then applies manifests following the generated dependency graph. Note For some resource types, the CVO monitors the resource after its manifest is applied, and considers it to be successfully updated only after the resource reaches a stable state. Achieving this state can take some time. This is especially true for ClusterOperator resources, while the CVO waits for a cluster Operator to update itself and then update its ClusterOperator status. The CVO waits until all cluster Operators in the Runlevel meet the following conditions before it proceeds to the next Runlevel: The cluster Operators have an Available=True condition. The cluster Operators have a Degraded=False condition. The cluster Operators declare they have achieved the desired version in their ClusterOperator resource. Some actions can take significant time to finish. The CVO waits for the actions to complete in order to ensure the subsequent Runlevels can proceed safely.
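As an illustrative sketch of how an administrator can watch this gating behavior during an update (the Operator name kube-apiserver is only an example), the following commands list the version and conditions that the CVO evaluates for each cluster Operator:
oc get clusteroperators
oc get clusteroperator kube-apiserver -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
A cluster Operator that still reports Progressing=True, or that has not yet listed the target version in its status.versions field, is typically what the CVO is waiting on before it moves past that Runlevel.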
Initially reconciling the new release's manifests is expected to take 60 to 120 minutes in total; see Understanding OpenShift Container Platform update duration for more information about factors that influence update duration. In the example diagram, the CVO is waiting until all work is completed at Runlevel 20. The CVO has applied all manifests to the Operators in the Runlevel, but the kube-apiserver-operator ClusterOperator performs some actions after its new version was deployed. The kube-apiserver-operator ClusterOperator declares this progress through the Progressing=True condition and by not declaring the new version as reconciled in its status.versions . The CVO waits until the ClusterOperator reports an acceptable status, and then it will start reconciling manifests at Runlevel 25. Additional resources Understanding OpenShift Container Platform update duration 1.2.5. Understanding how the Machine Config Operator updates nodes The Machine Config Operator (MCO) applies a new machine configuration to each control plane node and compute node. During the machine configuration update, control plane nodes and compute nodes are organized into their own machine config pools, where the pools of machines are updated in parallel. The .spec.maxUnavailable parameter, which has a default value of 1 , determines how many nodes in a machine config pool can simultaneously undergo the update process. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. When the machine configuration update process begins, the MCO checks the amount of currently unavailable nodes in a pool. If there are fewer unavailable nodes than the value of .spec.maxUnavailable , the MCO initiates the following sequence of actions on available nodes in the pool: Cordon and drain the node Note When a node is cordoned, workloads cannot be scheduled to it. Update the system configuration and operating system (OS) of the node Reboot the node Uncordon the node A node undergoing this process is unavailable until it is uncordoned and workloads can be scheduled to it again. The MCO begins updating nodes until the number of unavailable nodes is equal to the value of .spec.maxUnavailable . As a node completes its update and becomes available, the number of unavailable nodes in the machine config pool is once again fewer than .spec.maxUnavailable . If there are remaining nodes that need to be updated, the MCO initiates the update process on a node until the .spec.maxUnavailable limit is once again reached. This process repeats until each control plane node and compute node has been updated. The following example workflow describes how this process might occur in a machine config pool with 5 nodes, where .spec.maxUnavailable is 3 and all nodes are initially available: The MCO cordons nodes 1, 2, and 3, and begins to drain them. Node 2 finishes draining, reboots, and becomes available again. The MCO cordons node 4 and begins draining it. Node 1 finishes draining, reboots, and becomes available again. The MCO cordons node 5 and begins draining it. Node 3 finishes draining, reboots, and becomes available again. Node 5 finishes draining, reboots, and becomes available again. Node 4 finishes draining, reboots, and becomes available again. 
Because the update process for each node is independent of other nodes, some nodes in the example above finish their update out of the order in which they were cordoned by the MCO. You can check the status of the machine configuration update by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h Additional resources Machine Config Overview 1.3. Understanding update channels and releases Update channels are the mechanism by which users declare the OpenShift Container Platform minor version they intend to update their clusters to. They also allow users to choose the timing and level of support their updates will have through the fast , stable , candidate , and eus channel options. The Cluster Version Operator uses an update graph based on the channel declaration, along with other conditional information, to provide a list of recommended and conditional updates available to the cluster. Update channels correspond to a minor version of OpenShift Container Platform. The version number in the channel represents the target minor version that the cluster will eventually be updated to, even if it is higher than the cluster's current minor version. For instance, OpenShift Container Platform 4.10 update channels provide the following recommendations: Updates within 4.10. Updates within 4.9. Updates from 4.9 to 4.10, allowing all 4.9 clusters to eventually update to 4.10, even if they do not immediately meet the minimum z-stream version requirements. eus-4.10 only: updates within 4.8. eus-4.10 only: updates from 4.8 to 4.9 to 4.10, allowing all 4.8 clusters to eventually update to 4.10. 4.10 update channels do not recommend updates to 4.11 or later releases. This strategy ensures that administrators must explicitly decide to update to the minor version of OpenShift Container Platform. Update channels control only release selection and do not impact the version of the cluster that you install. The openshift-install binary file for a specific version of OpenShift Container Platform always installs that version. OpenShift Container Platform 4.18 offers the following update channels: stable-4.18 eus-4.y (only offered for EUS versions and meant to facilitate updates between EUS versions) fast-4.18 candidate-4.18 If you do not want the Cluster Version Operator to fetch available updates from the update recommendation service, you can use the oc adm upgrade channel command in the OpenShift CLI to configure an empty channel. This configuration can be helpful if, for example, a cluster has restricted network access and there is no local, reachable update recommendation service. Warning Red Hat recommends updating to versions suggested by OpenShift Update Service only. For a minor version update, versions must be contiguous. Red Hat does not test updates to noncontiguous versions and cannot guarantee compatibility with earlier versions. 1.3.1. Update channels 1.3.1.1. fast-4.18 channel The fast-4.18 channel is updated with new versions of OpenShift Container Platform 4.18 as soon as Red Hat declares the version as a general availability (GA) release. As such, these releases are fully supported and purposed to be used in production environments. 1.3.1.2. 
stable-4.18 channel While the fast-4.18 channel contains releases as soon as their errata are published, releases are added to the stable-4.18 channel after a delay. During this delay, data is collected from multiple sources and analyzed for indications of product regressions. Once a significant number of data points have been collected, these releases are added to the stable channel. Note Since the time required to obtain a significant number of data points varies based on many factors, a Service Level Objective (SLO) is not offered for the delay duration between fast and stable channels. For more information, please see "Choosing the correct channel for your cluster". Newly installed clusters default to using stable channels. 1.3.1.3. eus-4.y channel In addition to the stable channel, all even-numbered minor versions of OpenShift Container Platform offer Extended Update Support (EUS). Releases promoted to the stable channel are also simultaneously promoted to the EUS channels. The primary purpose of the EUS channels is to serve as a convenience for clusters performing a Control Plane Only update. Note Both standard and non-EUS subscribers can access all EUS repositories and necessary RPMs ( rhel-*-eus-rpms ) to be able to support critical purposes such as debugging and building drivers. 1.3.1.4. candidate-4.18 channel The candidate-4.18 channel offers unsupported early access to releases as soon as they are built. Releases present only in candidate channels may not contain the full feature set of eventual GA releases or features may be removed prior to GA. Additionally, these releases have not been subject to full Red Hat Quality Assurance and may not offer update paths to later GA releases. Given these caveats, the candidate channel is only suitable for testing purposes where destroying and recreating a cluster is acceptable. 1.3.1.5. Update recommendations in the channel OpenShift Container Platform maintains an update recommendation service that knows your installed OpenShift Container Platform version and the path to take within the channel to get you to the next release. Update paths are also limited to versions relevant to your currently selected channel and its promotion characteristics. You can imagine seeing the following releases in your channel: 4.18.0 4.18.1 4.18.3 4.18.4 The service recommends only updates that have been tested and have no known serious regressions. For example, if your cluster is on 4.18.1 and OpenShift Container Platform suggests 4.18.4, then it is recommended to update from 4.18.1 to 4.18.4. Important Do not rely on consecutive patch numbers. In this example, 4.18.2 is not and never was available in the channel, therefore updates to 4.18.2 are not recommended or supported. 1.3.1.6. Update recommendations and Conditional Updates Red Hat monitors newly released versions and update paths associated with those versions before and after they are added to supported channels. If Red Hat removes update recommendations from any supported release, a superseding update recommendation will be provided to a future version that corrects the regression. There may however be a delay while the defect is corrected, tested, and promoted to your selected channel. Beginning in OpenShift Container Platform 4.10, when update risks are confirmed, they are declared as Conditional Update risks for the relevant updates. Each known risk may apply to all clusters or only clusters matching certain conditions.
Some examples include having the Platform set to None or the CNI provider set to OpenShiftSDN . The Cluster Version Operator (CVO) continually evaluates known risks against the current cluster state. If no risks match, the update is recommended. If the risk matches, those update paths are labeled as updates with known issues , and a reference link to the known issues is provided. The reference link helps the cluster admin decide if they want to accept the risk and continue to update their cluster. When Red Hat chooses to declare Conditional Update risks, that action is taken in all relevant channels simultaneously. Declaration of a Conditional Update risk may happen either before or after the update has been promoted to supported channels. 1.3.1.7. Choosing the correct channel for your cluster Choosing the appropriate channel involves two decisions. First, select the minor version you want for your cluster update. Selecting a channel which matches your current version ensures that you only apply z-stream updates and do not receive feature updates. Selecting an available channel which has a version greater than your current version will ensure that after one or more updates your cluster will have updated to that version. Your cluster will only be offered channels which match its current version, the next version, or the next EUS version. Note Due to the complexity involved in planning updates between versions many minor versions apart, channels that assist in planning updates beyond a single Control Plane Only update are not offered. Second, you should choose your desired rollout strategy. You may choose to update as soon as Red Hat declares a release GA by selecting from fast channels or you may want to wait for Red Hat to promote releases to the stable channel. Update recommendations offered in the fast-4.18 and stable-4.18 are both fully supported and benefit equally from ongoing data analysis. The promotion delay before promoting a release to the stable channel represents the only difference between the two channels. Updates to the latest z-streams are generally promoted to the stable channel within a week or two, however the delay when initially rolling out updates to the latest minor is much longer, generally 45-90 days. Please consider the promotion delay when choosing your desired channel, as waiting for promotion to the stable channel may affect your scheduling plans. Additionally, there are several factors which may lead an organization to move clusters to the fast channel either permanently or temporarily, including: The desire to apply a specific fix known to affect your environment without delay. Application of CVE fixes without delay. CVE fixes may introduce regressions, so promotion delays still apply to z-streams with CVE fixes. Internal testing processes. If it takes your organization several weeks to qualify releases it is best to test concurrently with our promotion process rather than waiting. This also assures that any telemetry signal provided to Red Hat is factored into our rollout, so issues relevant to you can be fixed faster. 1.3.1.8. Restricted network clusters If you manage the container images for your OpenShift Container Platform clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact updates. During an update, the user interface might warn you about switching between these versions, so you must ensure that you selected an appropriate version before you bypass those warnings. 1.3.1.9.
Switching between channels A channel can be switched from the web console or through the adm upgrade channel command: USD oc adm upgrade channel <channel> The web console will display an alert if you switch to a channel that does not include the current release. The web console does not recommend any updates while on a channel without the current release. You can return to the original channel at any point, however. Changing your channel might impact the supportability of your cluster. The following conditions might apply: Your cluster is still supported if you change from the stable-4.18 channel to the fast-4.18 channel. You can switch to the candidate-4.18 channel at any time, but some releases for this channel might be unsupported. You can switch from the candidate-4.18 channel to the fast-4.18 channel if your current release is a general availability release. You can always switch from the fast-4.18 channel to the stable-4.18 channel. There is a possible delay of up to a day for the release to be promoted to stable-4.18 if the current release was recently promoted. Additional resources Updating along a conditional upgrade path Choosing the correct channel for your cluster 1.4. Understanding OpenShift Container Platform update duration OpenShift Container Platform update duration varies based on the deployment topology. This page helps you understand the factors that affect update duration and estimate how long the cluster update takes in your environment. 1.4.1. Factors affecting update duration The following factors can affect your cluster update duration: The reboot of compute nodes to the new machine configuration by Machine Config Operator (MCO) The value of MaxUnavailable in the machine config pool Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. The minimum number or percentages of replicas set in pod disruption budget (PDB) The number of nodes in the cluster The health of the cluster nodes 1.4.2. Cluster update phases In OpenShift Container Platform, the cluster update happens in two phases: Cluster Version Operator (CVO) target update payload deployment Machine Config Operator (MCO) node updates 1.4.2.1. Cluster Version Operator target update payload deployment The Cluster Version Operator (CVO) retrieves the target update release image and applies to the cluster. All components which run as pods are updated during this phase, whereas the host components are updated by the Machine Config Operator (MCO). This process might take 60 to 120 minutes. Note The CVO phase of the update does not restart the nodes. 1.4.2.2. Machine Config Operator node updates The Machine Config Operator (MCO) applies a new machine configuration to each control plane and compute node. During this process, the MCO performs the following sequential actions on each node of the cluster: Cordon and drain all the nodes Update the operating system (OS) Reboot the nodes Uncordon all nodes and schedule workloads on the node Note When a node is cordoned, workloads cannot be scheduled to it. The time to complete this process depends on several factors including the node and infrastructure configuration. This process might take 5 or more minutes to complete per node. 
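To observe this per-node sequence on a live cluster, one possible approach (shown here as a sketch, not a required procedure, and with <node-name> as a placeholder) is to check which nodes are cordoned and what state the machine config daemon reports for a node:
oc get nodes
oc describe node <node-name> | grep machineconfiguration.openshift.io/state
Cordoned nodes appear with SchedulingDisabled in the first command, and the state annotation in the second command typically moves from Done to Working and back to Done as the node is updated and rebooted.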
In addition to MCO, you should consider the impact of the following parameters: The control plane node update duration is predictable and oftentimes shorter than compute nodes, because the control plane workloads are tuned for graceful updates and quick drains. You can update the compute nodes in parallel by setting the maxUnavailable field to greater than 1 in the Machine Config Pool (MCP). The MCO cordons the number of nodes specified in maxUnavailable and marks them unavailable for update. When you increase maxUnavailable on the MCP, it can help the pool to update more quickly. However, if maxUnavailable is set too high, and several nodes are cordoned simultaneously, the pod disruption budget (PDB) guarded workloads could fail to drain because a schedulable node cannot be found to run the replicas. If you increase maxUnavailable for the MCP, ensure that you still have sufficient schedulable nodes to allow PDB guarded workloads to drain. Before you begin the update, you must ensure that all the nodes are available. Any unavailable nodes can significantly impact the update duration because the node unavailability affects the maxUnavailable and pod disruption budgets. To check the status of nodes from the terminal, run the following command: USD oc get node Example Output NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb If the status of the node is NotReady or SchedulingDisabled , then the node is not available and this impacts the update duration. You can check the status of nodes from the Administrator perspective in the web console by expanding Compute Nodes . Additional resources Machine Config Overview Pod disruption budget 1.4.2.3. Example update duration of cluster Operators The diagram shows an example of the time that cluster Operators might take to update to their new versions. The example is based on a three-node AWS OVN cluster, which has a healthy compute MachineConfigPool and no workloads that take long to drain, updating from 4.13 to 4.14. Note The specific update duration of a cluster and its Operators can vary based on several cluster characteristics, such as the target version, the amount of nodes, and the types of workloads scheduled to the nodes. Some Operators, such as the Cluster Version Operator, update themselves in a short amount of time. These Operators have either been omitted from the diagram or are included in the broader group of Operators labeled "Other Operators in parallel". Each cluster Operator has characteristics that affect the time it takes to update itself. For instance, the Kube API Server Operator in this example took more than eleven minutes to update because kube-apiserver provides graceful termination support, meaning that existing, in-flight requests are allowed to complete gracefully. This might result in a longer shutdown of the kube-apiserver . In the case of this Operator, update speed is sacrificed to help prevent and limit disruptions to cluster functionality during an update. Another characteristic that affects the update duration of an Operator is whether the Operator utilizes DaemonSets. 
The Network and DNS Operators utilize full-cluster DaemonSets, which can take time to roll out their version changes, and this is one of several reasons why these Operators might take longer to update themselves. The update duration for some Operators is heavily dependent on characteristics of the cluster itself. For instance, the Machine Config Operator update applies machine configuration changes to each node in the cluster. A cluster with many nodes has a longer update duration for the Machine Config Operator compared to a cluster with fewer nodes. Note Each cluster Operator is assigned a stage during which it can be updated. Operators within the same stage can update simultaneously, and Operators in a given stage cannot begin updating until all previous stages have been completed. For more information, see "Understanding how manifests are applied during an update" in the "Additional resources" section. Additional resources Introduction to OpenShift updates Understanding how manifests are applied during an update 1.4.3. Estimating cluster update time Historical update duration of similar clusters provides you the best estimate for the future cluster updates. However, if the historical data is not available, you can use the following convention to estimate your cluster update time: A node update iteration consists of one or more nodes updated in parallel. The control plane nodes are always updated in parallel with the compute nodes. In addition, one or more compute nodes can be updated in parallel based on the maxUnavailable value. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. For example, to estimate the update time, consider an OpenShift Container Platform cluster with three control plane nodes and six compute nodes, where each host takes about 5 minutes to reboot. Note The time it takes to reboot a particular node varies significantly. In cloud instances, the reboot might take about 1 to 2 minutes, whereas in physical bare metal hosts the reboot might take more than 15 minutes. Scenario-1 When you set maxUnavailable to 1 for both the control plane and compute nodes Machine Config Pool (MCP), then all six compute nodes will update one after another, one node per iteration: Scenario-2 When you set maxUnavailable to 2 for the compute node MCP, then two compute nodes will update in parallel in each iteration. Therefore, it takes a total of three iterations to update all the nodes. Important The default setting for maxUnavailable is 1 for all the MCPs in OpenShift Container Platform. It is recommended that you do not change the maxUnavailable in the control plane MCP. 1.4.4. Red Hat Enterprise Linux (RHEL) compute nodes Red Hat Enterprise Linux (RHEL) compute nodes require an additional usage of openshift-ansible to update node binary components. The actual time spent updating RHEL compute nodes should not be significantly different from Red Hat Enterprise Linux CoreOS (RHCOS) compute nodes. Additional resources Updating RHEL compute machines 1.4.5. Additional resources OpenShift Container Platform architecture OpenShift Container Platform updates | [
"oc adm upgrade --include-not-recommended",
"Cluster version is 4.13.40 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.13, candidate-4.14, eus-4.14, fast-4.13, fast-4.14, stable-4.13, stable-4.14) Recommended updates: VERSION IMAGE 4.14.27 quay.io/openshift-release-dev/ocp-release@sha256:4d30b359aa6600a89ed49ce6a9a5fdab54092bcb821a25480fdfbc47e66af9ec 4.14.26 quay.io/openshift-release-dev/ocp-release@sha256:4fe7d4ccf4d967a309f83118f1a380a656a733d7fcee1dbaf4d51752a6372890 4.14.25 quay.io/openshift-release-dev/ocp-release@sha256:a0ef946ef8ae75aef726af1d9bbaad278559ad8cab2c1ed1088928a0087990b6 4.14.24 quay.io/openshift-release-dev/ocp-release@sha256:0a34eac4b834e67f1bca94493c237e307be2c0eae7b8956d4d8ef1c0c462c7b0 4.14.23 quay.io/openshift-release-dev/ocp-release@sha256:f8465817382128ec7c0bc676174bad0fb43204c353e49c146ddd83a5b3d58d92 4.13.42 quay.io/openshift-release-dev/ocp-release@sha256:dcf5c3ad7384f8bee3c275da8f886b0bc9aea7611d166d695d0cf0fff40a0b55 4.13.41 quay.io/openshift-release-dev/ocp-release@sha256:dbb8aa0cf53dc5ac663514e259ad2768d8c82fd1fe7181a4cfb484e3ffdbd3ba Updates with known issues: Version: 4.14.22 Image: quay.io/openshift-release-dev/ocp-release@sha256:7093fa606debe63820671cc92a1384e14d0b70058d4b4719d666571e1fc62190 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 18.061ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 Version: 4.14.21 Image: quay.io/openshift-release-dev/ocp-release@sha256:6e3fba19a1453e61f8846c6b0ad3abf41436a3550092cbfd364ad4ce194582b7 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 33.991ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689",
"oc get clusterversion version -o json | jq '.status.availableUpdates'",
"[ { \"channels\": [ \"candidate-4.11\", \"candidate-4.12\", \"fast-4.11\", \"fast-4.12\" ], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:3213\", \"version\": \"4.11.41\" }, ]",
"oc get clusterversion version -o json | jq '.status.conditionalUpdates'",
"[ { \"conditions\": [ { \"lastTransitionTime\": \"2023-05-30T16:28:59Z\", \"message\": \"The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136\", \"reason\": \"PatchesOlderRelease\", \"status\": \"False\", \"type\": \"Recommended\" } ], \"release\": { \"channels\": [...], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:1733\", \"version\": \"4.11.36\" }, \"risks\": [...] }, ]",
"oc adm release extract <release image>",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata",
"0000_<runlevel>_<component>_<manifest-name>.yaml",
"0000_03_config-operator_01_proxy.crd.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"oc adm upgrade channel <channel>",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb",
"Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time)",
"Cluster update time = 60 + (6 x 5) = 90 minutes",
"Cluster update time = 60 + (3 x 5) = 75 minutes"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/updating_clusters/understanding-openshift-updates-1 |
10.3. Using the Cache With NFS | 10.3. Using the Cache With NFS NFS will not use the cache unless explicitly instructed. To configure an NFS mount to use FS-Cache, include the -o fsc option to the mount command: All access to files under /mount/point will go through the cache, unless the file is opened for direct I/O or writing (refer to Section 10.3.2, "Cache Limitations With NFS" for more information). NFS indexes cache contents using NFS file handle, not the file name; this means that hard-linked files share the cache correctly. Caching is supported in version 2, 3, and 4 of NFS. However, each version uses different branches for caching. 10.3.1. Cache Sharing There are several potential issues to do with NFS cache sharing. Because the cache is persistent, blocks of data in the cache are indexed on a sequence of four keys: Level 1: Server details Level 2: Some mount options; security type; FSID; uniquifier Level 3: File Handle Level 4: Page number in file To avoid coherency management problems between superblocks, all NFS superblocks that wish to cache data have unique Level 2 keys. Normally, two NFS mounts with same source volume and options will share a superblock, and thus share the caching, even if they mount different directories within that volume. Example 10.1. Cache sharing Take the following two mount commands: mount home0:/disk0/fred /home/fred -o fsc mount home0:/disk0/jim /home/jim -o fsc Here, /home/fred and /home/jim will likely share the superblock as they have the same options, especially if they come from the same volume/partition on the NFS server ( home0 ). Now, consider the two subsequent mount commands: mount home0:/disk0/fred /home/fred -o fsc,rsize=230 mount home0:/disk0/jim /home/jim -o fsc,rsize=231 In this case, /home/fred and /home/jim will not share the superblock as they have different network access parameters, which are part of the Level 2 key. The same goes for the following mount sequence: mount home0:/disk0/fred /home/fred1 -o fsc,rsize=230 mount home0:/disk0/fred /home/fred2 -o fsc,rsize=231 Here, the contents of the two subtrees ( /home/fred1 and /home/fred2 ) will be cached twice . Another way to avoid superblock sharing is to suppress it explicitly with the nosharecache parameter. Using the same example: mount home0:/disk0/fred /home/fred -o nosharecache,fsc mount home0:/disk0/jim /home/jim -o nosharecache,fsc However, in this case only one of the superblocks will be permitted to use cache since there is nothing to distinguish the Level 2 keys of home0:/disk0/fred and home0:/disk0/jim . To address this, add a unique identifier on at least one of the mounts, i.e. fsc= unique-identifier . For example: mount home0:/disk0/fred /home/fred -o nosharecache,fsc mount home0:/disk0/jim /home/jim -o nosharecache,fsc=jim Here, the unique identifier jim will be added to the Level 2 key used in the cache for /home/jim . | [
"mount nfs-share :/ /mount/point -o fsc"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/fscachenfs |
Preface | Preface The Red Hat OpenShift AI Add-on is automatically updated as new releases or versions become available. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/upgrading_openshift_ai_cloud_service/pr01 |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_jlink_to_customize_java_runtime_environment/making-open-source-more-inclusive |
Chapter 138. KafkaMirrorMaker2MirrorSpec schema reference | Chapter 138. KafkaMirrorMaker2MirrorSpec schema reference Used in: KafkaMirrorMaker2Spec Property Description sourceCluster The alias of the source cluster used by the Kafka MirrorMaker 2 connectors. The alias must match a cluster in the list at spec.clusters . string targetCluster The alias of the target cluster used by the Kafka MirrorMaker 2 connectors. The alias must match a cluster in the list at spec.clusters . string sourceConnector The specification of the Kafka MirrorMaker 2 source connector. KafkaMirrorMaker2ConnectorSpec heartbeatConnector The specification of the Kafka MirrorMaker 2 heartbeat connector. KafkaMirrorMaker2ConnectorSpec checkpointConnector The specification of the Kafka MirrorMaker 2 checkpoint connector. KafkaMirrorMaker2ConnectorSpec topicsPattern A regular expression matching the topics to be mirrored, for example, "topic1|topic2|topic3". Comma-separated lists are also supported. string topicsBlacklistPattern The topicsBlacklistPattern property has been deprecated, and should now be configured using .spec.mirrors.topicsExcludePattern . A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. string topicsExcludePattern A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. string groupsPattern A regular expression matching the consumer groups to be mirrored. Comma-separated lists are also supported. string groupsBlacklistPattern The groupsBlacklistPattern property has been deprecated, and should now be configured using .spec.mirrors.groupsExcludePattern . A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported. string groupsExcludePattern A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported. string | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaMirrorMaker2MirrorSpec-reference |
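As a hedged illustration of how these properties fit together (all names, versions, addresses, and replication values below are placeholders rather than a tested configuration), a mirror definition is embedded in a KafkaMirrorMaker2 resource similar to the following:
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  version: 3.5.0
  replicas: 1
  connectCluster: "target"
  clusters:
    - alias: "source"
      bootstrapServers: source-kafka-bootstrap:9092
    - alias: "target"
      bootstrapServers: target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "source"
      targetCluster: "target"
      sourceConnector:
        config:
          replication.factor: -1
      checkpointConnector:
        config:
          checkpoints.topic.replication.factor: -1
      topicsPattern: "topic1|topic2|topic3"
      groupsPattern: ".*"
EOF
Here sourceCluster and targetCluster refer back to the aliases declared under spec.clusters, and topicsPattern and groupsPattern follow the regular-expression rules described above.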
Chapter 9. Remote worker nodes on the network edge | Chapter 9. Remote worker nodes on the network edge 9.1. Using remote worker nodes at the network edge You can configure OpenShift Container Platform clusters with nodes located at your network edge. In this topic, they are called remote worker nodes . A typical cluster with remote worker nodes combines on-premise master and worker nodes with worker nodes in other locations that connect to the cluster. This topic is intended to provide guidance on best practices for using remote worker nodes and does not contain specific configuration details. There are multiple use cases across different industries, such as telecommunications, retail, manufacturing, and government, for using a deployment pattern with remote worker nodes. For example, you can separate and isolate your projects and workloads by combining the remote worker nodes into Kubernetes zones . However, having remote worker nodes can introduce higher latency, intermittent loss of network connectivity, and other issues. Among the challenges in a cluster with remote worker nodes are: Network separation : The OpenShift Container Platform control plane and the remote worker nodes must be able to communicate with each other. Because of the distance between the control plane and the remote worker nodes, network issues could prevent this communication. See Network separation with remote worker nodes for information on how OpenShift Container Platform responds to network separation and for methods to diminish the impact to your cluster. Power outage : Because the control plane and remote worker nodes are in separate locations, a power outage at the remote location or at any point between the two can negatively impact your cluster. See Power loss on remote worker nodes for information on how OpenShift Container Platform responds to a node losing power and for methods to diminish the impact to your cluster. Latency spikes or temporary reduction in throughput : As with any network, any changes in network conditions between your cluster and the remote worker nodes can negatively impact your cluster. OpenShift Container Platform offers multiple worker latency profiles that let you control the reaction of the cluster to latency issues. Note the following limitations when planning a cluster with remote worker nodes: OpenShift Container Platform does not support remote worker nodes that use a different cloud provider than the on-premise cluster uses. Moving workloads from one Kubernetes zone to a different Kubernetes zone can be problematic due to system and environment issues, such as a specific type of memory not being available in a different zone. Proxies and firewalls can present additional limitations that are beyond the scope of this document. See the relevant OpenShift Container Platform documentation for how to address such limitations, such as Configuring your firewall . You are responsible for configuring and maintaining L2/L3-level network connectivity between the control plane and the network-edge nodes. 9.1.1. Adding remote worker nodes Adding remote worker nodes to a cluster involves some additional considerations. You must ensure that a route or a default gateway is in place to route traffic between the control plane and every remote worker node. You must place the Ingress VIP on the control plane. Adding remote worker nodes with user-provisioned infrastructure is identical to adding other worker nodes.
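For example, when worker nodes, including remote ones, join through the user-provisioned flow, they surface certificate signing requests that must be approved before the nodes become Ready; a typical check looks like the following sketch, where the CSR name is specific to your environment:
oc get csr
oc adm certificate approve <csr_name>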
To add remote worker nodes to an installer-provisioned cluster at install time, specify the subnet for each worker node in the install-config.yaml file before installation. There are no additional settings required for the DHCP server. You must use virtual media, because the remote worker nodes will not have access to the local provisioning network. To add remote worker nodes to an installer-provisioned cluster deployed with a provisioning network, ensure that virtualMediaViaExternalNetwork flag is set to true in the install-config.yaml file so that it will add the nodes using virtual media. Remote worker nodes will not have access to the local provisioning network. They must be deployed with virtual media rather than PXE. Additionally, specify each subnet for each group of remote worker nodes and the control plane nodes in the DHCP server. Additional resources Establishing communications between subnets Configuring host network interfaces for subnets Configuring network components to run on the control plane 9.1.2. Network separation with remote worker nodes All nodes send heartbeats to the Kubernetes Controller Manager Operator (kube controller) in the OpenShift Container Platform cluster every 10 seconds. If the cluster does not receive heartbeats from a node, OpenShift Container Platform responds using several default mechanisms. OpenShift Container Platform is designed to be resilient to network partitions and other disruptions. You can mitigate some of the more common disruptions, such as interruptions from software upgrades, network splits, and routing issues. Mitigation strategies include ensuring that pods on remote worker nodes request the correct amount of CPU and memory resources, configuring an appropriate replication policy, using redundancy across zones, and using Pod Disruption Budgets on workloads. If the kube controller loses contact with a node after a configured period, the node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The on-premise node controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules pods on the node for eviction after five minutes, by default. If a workload controller, such as a Deployment object or StatefulSet object, is directing traffic to pods on the unhealthy node and other nodes can reach the cluster, OpenShift Container Platform routes the traffic away from the pods on the node. Nodes that cannot reach the cluster do not get updated with the new traffic routing. As a result, the workloads on those nodes might continue to attempt to reach the unhealthy node. You can mitigate the effects of connection loss by: using daemon sets to create pods that tolerate the taints using static pods that automatically restart if a node goes down using Kubernetes zones to control pod eviction configuring pod tolerations to delay or avoid pod eviction configuring the kubelet to control the timing of when it marks nodes as unhealthy. For more information on using these objects in a cluster with remote worker nodes, see About remote worker node strategies . 9.1.3. Power loss on remote worker nodes If a remote worker node loses power or restarts ungracefully, OpenShift Container Platform responds using several default mechanisms. 
If the Kubernetes Controller Manager Operator (kube controller) loses contact with a node after a configured period, the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The on-premise node controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules pods on the node for eviction after five minutes, by default. On the node, the pods must be restarted when the node recovers power and reconnects with the control plane. Note If you want the pods to restart immediately upon restart, use static pods. After the node restarts, the kubelet also restarts and attempts to restart the pods that were scheduled on the node. If the connection to the control plane takes longer than the default five minutes, the control plane cannot update the node health and remove the node.kubernetes.io/unreachable taint. On the node, the kubelet terminates any running pods. When these conditions are cleared, the scheduler can start scheduling pods to that node. You can mitigate the effects of power loss by: using daemon sets to create pods that tolerate the taints using static pods that automatically restart with a node configuring pod tolerations to delay or avoid pod eviction configuring the kubelet to control the timing of when the node controller marks nodes as unhealthy. For more information on using these objects in a cluster with remote worker nodes, see About remote worker node strategies . 9.1.4. Latency spikes or temporary reduction in throughput to remote workers If the cluster administrator has performed latency tests for platform verification, they can discover the need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator needs to change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only the one parameter provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. The Kubelet evicts pods from the node even though the node is healthy. To avoid this problem, you can use worker latency profiles to adjust the frequency that the Kubelet and the Kubernetes Controller Manager wait for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal.
These worker latency profiles contain three sets of parameters that are predefined with carefully tuned values to control the reaction of the cluster to increased latency. There is no need to experimentally find the best values manually. You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. Additional resources Improving cluster stability in high latency environments using worker latency profiles 9.1.5. Remote worker node strategies If you use remote worker nodes, consider which objects to use to run your applications. It is recommended to use daemon sets or static pods based on the behavior you want in the event of network issues or power loss. In addition, you can use Kubernetes zones and tolerations to control or avoid pod evictions if the control plane cannot reach remote worker nodes. Daemon sets Daemon sets are the best approach to managing pods on remote worker nodes for the following reasons: Daemon sets do not typically need rescheduling behavior. If a node disconnects from the cluster, pods on the node can continue to run. OpenShift Container Platform does not change the state of daemon set pods, and leaves the pods in the state they last reported. For example, if a daemon set pod is in the Running state, when a node stops communicating, the pod keeps running and is assumed to be running by OpenShift Container Platform. Daemon set pods, by default, are created with NoExecute tolerations for the node.kubernetes.io/unreachable and node.kubernetes.io/not-ready taints with no tolerationSeconds value. These default values ensure that daemon set pods are never evicted if the control plane cannot reach a node. For example: Tolerations added to daemon set pods by default tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule Daemon sets can use labels to ensure that a workload runs on a matching worker node. You can use an OpenShift Container Platform service endpoint to load balance daemon set pods. Note Daemon sets do not schedule pods after a reboot of the node if OpenShift Container Platform cannot reach the node. Static pods If you want pods to restart if a node reboots, after a power loss for example, consider static pods . The kubelet on a node automatically restarts static pods as the node restarts. Note Static pods cannot use secrets and config maps. Kubernetes zones Kubernetes zones can slow down the rate or, in some cases, completely stop pod evictions. When the control plane cannot reach a node, the node controller, by default, applies node.kubernetes.io/unreachable taints and evicts pods at a rate of 0.1 nodes per second. However, in a cluster that uses Kubernetes zones, pod eviction behavior is altered. If a zone is fully disrupted, where all nodes in the zone have a Ready condition that is False or Unknown , the control plane does not apply the node.kubernetes.io/unreachable taint to the nodes in that zone. For partially disrupted zones, where more than 55% of the nodes have a False or Unknown condition, the pod eviction rate is reduced to 0.01 nodes per second.
Nodes in smaller clusters, with fewer than 50 nodes, are not tainted. Your cluster must have more than three zones for this behavior to take effect. You assign a node to a specific zone by applying the topology.kubernetes.io/region label in the node specification. Sample node labels for Kubernetes zones kind: Node apiVersion: v1 metadata: labels: topology.kubernetes.io/region=east KubeletConfig objects You can adjust the amount of time that the kubelet checks the state of each node. To set the interval that affects the timing of when the on-premise node controller marks nodes with the Unhealthy or Unreachable condition, create a KubeletConfig object that contains the node-status-update-frequency and node-status-report-frequency parameters. The kubelet on each node determines the node status as defined by the node-status-update-frequency setting and reports that status to the cluster based on the node-status-report-frequency setting. By default, the kubelet determines the node status every 10 seconds and reports the status every minute. However, if the node state changes, the kubelet reports the change to the cluster immediately. OpenShift Container Platform uses the node-status-report-frequency setting only when the Node Lease feature gate is enabled, which is the default state in OpenShift Container Platform clusters. If the Node Lease feature gate is disabled, the node reports its status based on the node-status-update-frequency setting. Example kubelet config apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker 1 kubeletConfig: node-status-update-frequency: 2 - "10s" node-status-report-frequency: 3 - "1m" 1 Specify the type of node to which this KubeletConfig object applies using the label from the MachineConfig object. 2 Specify the frequency that the kubelet checks the status of a node associated with this MachineConfig object. The default value is 10s . If you change this default, the node-status-report-frequency value is changed to the same value. 3 Specify the frequency that the kubelet reports the status of a node associated with this MachineConfig object. The default value is 1m . The node-status-update-frequency parameter works with the node-monitor-grace-period parameter. The node-monitor-grace-period parameter specifies how long OpenShift Container Platform waits after a node associated with a MachineConfig object is marked Unhealthy if the controller manager does not receive the node heartbeat. Workloads on the node continue to run after this time. If the remote worker node rejoins the cluster after node-monitor-grace-period expires, pods continue to run. New pods can be scheduled to that node. The node-monitor-grace-period interval is 40s . The node-status-update-frequency value must be lower than the node-monitor-grace-period value. Note Modifying the node-monitor-grace-period parameter is not supported. Tolerations You can use pod tolerations to mitigate the effects if the on-premise node controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to a node it cannot reach. A taint with the NoExecute effect affects pods that are running on the node in the following ways: Pods that do not tolerate the taint are queued for eviction. Pods that tolerate the taint without specifying a tolerationSeconds value in their toleration specification remain bound forever.
Pods that tolerate the taint with a specified tolerationSeconds value remain bound for the specified amount of time. After the time elapses, the pods are queued for eviction. Note Unless tolerations are explicitly set, Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300 , meaning that pods remain bound for 5 minutes if either of these taints is detected. You can delay or avoid pod eviction by configuring pod tolerations with the NoExecute effect for the node.kubernetes.io/unreachable and node.kubernetes.io/not-ready taints. Example toleration in a pod spec ... tolerations: - key: "node.kubernetes.io/unreachable" operator: "Exists" effect: "NoExecute" 1 - key: "node.kubernetes.io/not-ready" operator: "Exists" effect: "NoExecute" 2 tolerationSeconds: 600 3 ... 1 The NoExecute effect without tolerationSeconds lets pods remain forever if the control plane cannot reach the node. 2 The NoExecute effect with tolerationSeconds : 600 lets pods remain for 10 minutes if the control plane marks the node as Unhealthy . 3 You can specify your own tolerationSeconds value. Other types of OpenShift Container Platform objects You can use replica sets, deployments, and replication controllers. The scheduler can reschedule these pods onto other nodes after the node is disconnected for five minutes. Rescheduling onto other nodes can be beneficial for some workloads, such as REST APIs, where an administrator can guarantee a specific number of pods are running and accessible. Note When working with remote worker nodes, rescheduling pods on different nodes might not be acceptable if remote worker nodes are intended to be reserved for specific functions. Stateful sets do not get restarted when there is an outage. The pods remain in the terminating state until the control plane can acknowledge that the pods are terminated. To avoid scheduling a pod to a node that does not have access to the same type of persistent storage, OpenShift Container Platform cannot migrate pods that require persistent volumes to other zones in the case of network separation. Additional resources For more information on daemon sets, see DaemonSets . For more information on taints and tolerations, see Controlling pod placement using node taints . For more information on configuring KubeletConfig objects, see Creating a KubeletConfig CRD . For more information on replica sets, see ReplicaSets . For more information on deployments, see Deployments . For more information on replication controllers, see Replication controllers . For more information on the controller manager, see Kubernetes Controller Manager Operator . | [
"tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule",
"kind: Node apiVersion: v1 metadata: labels: topology.kubernetes.io/region=east",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker 1 kubeletConfig: node-status-update-frequency: 2 - \"10s\" node-status-report-frequency: 3 - \"1m\"",
"tolerations: - key: \"node.kubernetes.io/unreachable\" operator: \"Exists\" effect: \"NoExecute\" 1 - key: \"node.kubernetes.io/not-ready\" operator: \"Exists\" effect: \"NoExecute\" 2 tolerationSeconds: 600 3"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/nodes/remote-worker-nodes-on-the-network-edge |
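The worker latency profiles discussed in this chapter are applied cluster-wide. As a rough sketch (the exact resource name and profile value should be confirmed against the linked worker latency profile documentation), the profile can be set by patching the cluster-scoped node configuration resource:

oc patch nodes.config.openshift.io cluster --type merge \
  -p '{"spec": {"workerLatencyProfile": "MediumUpdateAverageReaction"}}'

MediumUpdateAverageReaction is one of the predefined profiles; the Default profile restores the standard Kubelet and Kubernetes Controller Manager timings.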
Chapter 163. StrimziPodSet schema reference | Chapter 163. StrimziPodSet schema reference Full list of StrimziPodSet schema properties Important StrimziPodSet is an internal Streams for Apache Kafka resource. Information is provided for reference only. Do not create, modify or delete StrimziPodSet resources as this might cause errors. 163.1. StrimziPodSet schema properties Property Property type Description spec StrimziPodSetSpec The specification of the StrimziPodSet. status StrimziPodSetStatus The status of the StrimziPodSet. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-strimzipodset-reference |
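Because the resource is internal, the only interaction that is safe for users is read-only inspection, for example to confirm which pods a StrimziPodSet manages. A brief sketch, where the namespace and resource names are placeholders:

oc get strimzipodsets -n my-kafka-namespace
oc get strimzipodset my-cluster-kafka -n my-kafka-namespace -o yaml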
10.2. Creating the RAID Devices and Mount Points | 10.2. Creating the RAID Devices and Mount Points Once you have all of your partitions created as software RAID partitions, the following steps create the RAID device and mount point: Select the RAID button on the Disk Druid main partitioning screen (refer to Figure 10.5, "RAID Options" ). Figure 10.5, "RAID Options" appears. Select Create a RAID device . Figure 10.5. RAID Options Next, Figure 10.6, "Making a RAID Device and Assigning a Mount Point" appears, where you can make a RAID device and assign a mount point. Figure 10.6. Making a RAID Device and Assigning a Mount Point Enter a mount point. Choose the file system type for the partition. At this point you can either configure a dynamic LVM file system or a traditional static ext2/ext3 file system. For more information on configuring LVM on a RAID device, select physical volume (LVM) and then refer to Chapter 8, LVM Configuration . If LVM is not required, continue on with the following instructions. Select a device name such as md0 for the RAID device. Choose your RAID level. You can choose from RAID 0 , RAID 1 , and RAID 5 . If you need assistance in determining which RAID level to implement, refer to Chapter 9, Redundant Array of Independent Disks (RAID) . Note If you are making a RAID partition of /boot/ , you must choose RAID level 1, and it must use one of the first two drives (IDE first, SCSI second). If you are not creating a separate RAID partition of /boot/ , and you are making a RAID partition for the root file system ( / ), it must be RAID level 1 and must use one of the first two drives (IDE first, SCSI second). Figure 10.7. The /boot/ Mount Error The RAID partitions created appear in the RAID Members list. Select which of these partitions should be used to create the RAID device. If configuring RAID 1 or RAID 5, specify the number of spare partitions. If a software RAID partition fails, the spare is automatically used as a replacement. For each spare you want to specify, you must create an additional software RAID partition (in addition to the partitions for the RAID device). Select the partitions for the RAID device and the partition(s) for the spare(s). After clicking OK , the RAID device appears in the Drive Summary list. Repeat this chapter's entire process for configuring additional partitions, devices, and mount points, such as the root partition ( / ), /home/ , or swap. After completing the entire configuration, the figure as shown in Figure 10.8, "Final Sample RAID Configuration" resembles the default configuration, except for the use of RAID. Figure 10.8. Final Sample RAID Configuration The figure as shown in Figure 10.9, "Final Sample RAID With LVM Configuration" is an example of a RAID and LVM configuration. Figure 10.9. Final Sample RAID With LVM Configuration You can continue with your installation process. Refer to the Installation Guide for further instructions. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/software_raid_configuration-creating_the_raid_devices_and_mount_points
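After the installation finishes, the software RAID devices created in this procedure can be verified from a shell prompt. This is a general sketch only; the device name /dev/md0 corresponds to the md0 example chosen above:

cat /proc/mdstat
mdadm --detail /dev/md0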
Chapter 2. Managing DNS zones in IdM | Chapter 2. Managing DNS zones in IdM As Identity Management (IdM) administrator, you can manage how IdM DNS zones work. The chapter describes the following topics and procedures: What DNS zone types are supported in IdM How to add primary IdM DNS zones using the IdM Web UI How to add primary IdM DNS zones using the IdM CLI How to remove primary IdM DNS zones using the IdM Web UI How to remove primary IdM DNS zones using the IdM CLI What DNS attributes you can configure in IdM How you can configure these attributes in the IdM Web UI How you can configure these attributes in the IdM CLI How zone transfers work in IdM How you can allow zone transfers in the IdM Web UI How you can allow zone transfers in the IdM CLI Prerequisites DNS service is installed on the IdM server. For more information about how to install an IdM server with integrated DNS, see one of the following links: Installing an IdM server: With integrated DNS, with an integrated CA as the root CA Installing an IdM server: With integrated DNS, with an external CA as the root CA Installing an IdM server: With integrated DNS, without a CA 2.1. Supported DNS zone types Identity Management (IdM) supports two types of DNS zones: primary and forward zones. These two types of zones are described here, including an example scenario for DNS forwarding. Note This guide uses the BIND terminology for zone types which is different from the terminology used for Microsoft Windows DNS. Primary zones in BIND serve the same purpose as forward lookup zones and reverse lookup zones in Microsoft Windows DNS. Forward zones in BIND serve the same purpose as conditional forwarders in Microsoft Windows DNS. Primary DNS zones Primary DNS zones contain authoritative DNS data and can accept dynamic DNS updates. This behavior is equivalent to the type master setting in standard BIND configuration. You can manage primary zones using the ipa dnszone-* commands. In compliance with standard DNS rules, every primary zone must contain start of authority (SOA) and nameserver (NS) records. IdM generates these records automatically when the DNS zone is created, but you must copy the NS records manually to the parent zone to create proper delegation. In accordance with standard BIND behavior, queries for names for which the server is not authoritative are forwarded to other DNS servers. These DNS servers, so-called forwarders, may or may not be authoritative for the query. Example 2.1. Example scenario for DNS forwarding The IdM server contains the test.example. primary zone. This zone contains an NS delegation record for the sub.test.example. name. In addition, the test.example. zone is configured with the 192.0.2.254 forwarder IP address for the sub.test.example subzone. A client querying the name nonexistent.test.example. receives the NXDomain answer, and no forwarding occurs because the IdM server is authoritative for this name. On the other hand, querying for the host1.sub.test.example. name is forwarded to the configured forwarder 192.0.2.254 because the IdM server is not authoritative for this name. Forward DNS zones From the perspective of IdM, forward DNS zones do not contain any authoritative data. In fact, a forward "zone" usually only contains two pieces of information: A domain name The IP address of a DNS server associated with the domain All queries for names belonging to the domain defined are forwarded to the specified IP address.
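As an illustration, a forward zone of the kind described above can be created with the ipa dnsforwardzone-* commands; the zone name and forwarder address below are examples only:

ipa dnsforwardzone-add ad.example.com --forwarder=192.0.2.254 --forward-policy=only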
This forwarding behavior is equivalent to the type forward setting in standard BIND configuration. You can manage forward zones using the ipa dnsforwardzone-* commands. Forward DNS zones are especially useful in the context of IdM-Active Directory (AD) trusts. If the IdM DNS server is authoritative for the idm.example.com zone and the AD DNS server is authoritative for the ad.example.com zone, then ad.example.com is a DNS forward zone for the idm.example.com primary zone. That means that when a query comes from an IdM client for the IP address of somehost.ad.example.com , the query is forwarded to an AD domain controller specified in the ad.example.com IdM DNS forward zone. 2.2. Adding a primary DNS zone in IdM Web UI Follow this procedure to add a primary DNS zone using the Identity Management (IdM) Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Figure 2.1. Managing IdM DNS primary zones Click Add at the top of the list of all zones. Provide the zone name. Figure 2.2. Entering a new IdM primary zone Click Add . 2.3. Adding a primary DNS zone in IdM CLI Follow this procedure to add a primary DNS zone using the Identity Management (IdM) command-line interface (CLI). Prerequisites You are logged in as IdM administrator. Procedure The ipa dnszone-add command adds a new zone to the DNS domain. Adding a new zone requires you to specify the name of the new subdomain. You can pass the subdomain name directly with the command: If you do not pass the name to ipa dnszone-add , the script prompts for it automatically. Additional resources See ipa dnszone-add --help . 2.4. Removing a primary DNS zone in IdM Web UI Follow this procedure to remove a primary DNS zone from Identity Management (IdM) using the IdM Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Select the check box by the zone name and click Delete . Figure 2.3. Removing a primary DNS Zone In the Remove DNS zones dialog window, confirm that you want to delete the selected zone. 2.5. Removing a primary DNS zone in IdM CLI Follow this procedure to remove a primary DNS zone from Identity Management (IdM) using the IdM command-line interface (CLI). Prerequisites You are logged in as IdM administrator. Procedure To remove a primary DNS zone, enter the ipa dnszone-del command, followed by the name of the zone you want to remove. For example: 2.6. DNS configuration priorities You can configure many DNS configuration options on the following levels. Each level has a different priority. Zone-specific configuration The level of configuration specific for a particular zone defined in IdM has the highest priority. You can manage zone-specific configuration by using the ipa dnszone-* and ipa dnsforwardzone-* commands. Per-server configuration You are asked to define per-server forwarders during the installation of an IdM server. You can manage per-server forwarders by using the ipa dnsserver-* commands. If you do not want to set a per-server forwarder when installing a replica, you can use the --no-forwarder option. Global DNS configuration If no zone-specific configuration is defined, IdM uses global DNS configuration stored in LDAP. You can manage global DNS configuration using the ipa dnsconfig-* commands. Settings defined in global DNS configuration are applied to all IdM DNS servers.
Configuration in /etc/named.conf Configuration defined in the /etc/named.conf file on each IdM DNS server has the lowest priority. It is specific for each server and must be edited manually. The /etc/named.conf file is usually only used to specify DNS forwarding to a local DNS cache. Other options are managed using the commands for zone-specific and global DNS configuration mentioned above. You can configure DNS options on multiple levels at the same time. In such cases, configuration with the highest priority takes precedence over configuration defined at lower levels. Additional resources The Priority order of configuration section in Per Server Config in LDAP 2.7. Configuration attributes of primary IdM DNS zones Identity Management (IdM) creates a new zone with certain default configuration, such as the refresh periods, transfer settings, or cache settings. In IdM DNS zone attributes , you can find the attributes of the default zone configuration that you can modify using one of the following options: The dnszone-mod command in the command-line interface (CLI). For more information, see Editing the configuration of a primary DNS zone in IdM CLI . The IdM Web UI. For more information, see Editing the configuration of a primary DNS zone in IdM Web UI . An Ansible playbook that uses the ipadnszone module. For more information, see Managing DNS zones in IdM . Along with setting the actual information for the zone, the settings define how the DNS server handles the start of authority (SOA) record entries and how it updates its records from the DNS name server. Table 2.1. IdM DNS zone attributes Attribute Command-Line Option Description Authoritative name server --name-server Sets the domain name of the primary DNS name server, also known as SOA MNAME. By default, each IdM server advertises itself in the SOA MNAME field. Consequently, the value stored in LDAP using --name-server is ignored. Administrator e-mail address --admin-email Sets the email address to use for the zone administrator. This defaults to the root account on the host. SOA serial --serial Sets a serial number in the SOA record. Note that IdM sets the version number automatically and users are not expected to modify it. SOA refresh --refresh Sets the interval, in seconds, for a secondary DNS server to wait before requesting updates from the primary DNS server. SOA retry --retry Sets the time, in seconds, to wait before retrying a failed refresh operation. SOA expire --expire Sets the time, in seconds, that a secondary DNS server will try to perform a refresh update before ending the operation attempt. SOA minimum --minimum Sets the time to live (TTL) value in seconds for negative caching according to RFC 2308 . SOA time to live --ttl Sets TTL in seconds for records at zone apex. In zone example.com , for example, all records (A, NS, or SOA) under name example.com are configured, but no other domain names, like test.example.com , are affected. Default time to live --default-ttl Sets the default time to live (TTL) value in seconds for negative caching for all values in a zone that never had an individual TTL value set before. Requires a restart of the named-pkcs11 service on all IdM DNS servers after changes to take effect. BIND update policy --update-policy Sets the permissions allowed to clients in the DNS zone. Dynamic update --dynamic-update =TRUE|FALSE Enables dynamic updates to DNS records for clients. Note that if this is set to false, IdM client machines will not be able to add or update their IP address. 
Allow transfer --allow-transfer = string Gives a list of IP addresses or network names which are allowed to transfer the given zone, separated by semicolons (;). Zone transfers are disabled by default. The default --allow-transfer value is none . Allow query --allow-query Gives a list of IP addresses or network names which are allowed to issue DNS queries, separated by semicolons (;). Allow PTR sync --allow-sync-ptr =1|0 Sets whether A or AAAA records (forward records) for the zone will be automatically synchronized with the PTR (reverse) records. Zone forwarders --forwarder = IP_address Specifies a forwarder specifically configured for the DNS zone. This is separate from any global forwarders used in the IdM domain. To specify multiple forwarders, use the option multiple times. Forward policy --forward-policy =none|only|first Specifies the forward policy. For information about the supported policies, see DNS forward policies in IdM . 2.8. Editing the configuration of a primary DNS zone in IdM Web UI Follow this procedure to edit the configuration attributes of a primary Identity Management (IdM) DNS using the IdM Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Figure 2.4. DNS primary zones management In the DNS Zones section, click on the zone name in the list of all zones to open the DNS zone page. Figure 2.5. Editing a primary zone Click Settings . Figure 2.6. The Settings tab in the primary zone edit page Change the zone configuration as required. For information about the available settings, see IdM DNS zone attributes . Click Save to confirm the new configuration. Note If you are changing the default time to live (TTL) of a zone, restart the named-pkcs11 service on all IdM DNS servers to make the changes take effect. All other settings are automatically activated immediately. 2.9. Editing the configuration of a primary DNS zone in IdM CLI Follow this procedure to edit the configuration of a primary DNS zone using the Identity Management (IdM) command-line interface (CLI). Prerequisites You are logged in as IdM administrator. Procedure To modify an existing primary DNS zone, use the ipa dnszone-mod command. For example, to set the time to wait before retrying a failed refresh operation to 1800 seconds: For more information about the available settings and their corresponding CLI options, see IdM DNS zone attributes . If a specific setting does not have a value in the DNS zone entry you are modifying, the ipa dnszone-mod command adds the value. If the setting does not have a value, the command overwrites the current value with the specified value. Note If you are changing the default time to live (TTL) of a zone, restart the named-pkcs11 service on all IdM DNS servers to make the changes take effect. All other settings are automatically activated immediately. Additional resources See ipa dnszone-mod --help . 2.10. Zone transfers in IdM In an Identity Management (IdM) deployment that has integrated DNS, you can use zone transfers to copy all resource records from one name server to another. Name servers maintain authoritative data for their zones. If you make changes to the zone on a DNS server that is authoritative for zone A DNS zone, you must distribute the changes among the other name servers in the IdM DNS domain that are outside zone A . Important The IdM-integrated DNS can be written to by different servers simultaneously. 
The Start of Authority (SOA) serial numbers in IdM zones are not synchronized among the individual IdM DNS servers. For this reason, configure your DNS servers outside the to-be-transferred zone to only use one specific DNS server inside the to-be-transferred zone. This prevents zone transfer failures caused by non-synchronized SOA serial numbers. IdM supports zone transfers according to the RFC 5936 (AXFR) and RFC 1995 (IXFR) standards. Additional resources Enabling zone transfers in IdM Web UI Enabling zone transfers in IdM CLI 2.11. Enabling zone transfers in IdM Web UI Follow this procedure to enable zone transfers in Identity Management (IdM) using the IdM Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Click Settings . Under Allow transfer , specify the name servers to which you want to transfer the zone records. Figure 2.7. Enabling zone transfers Click Save at the top of the DNS zone page to confirm the new configuration. 2.12. Enabling zone transfers in IdM CLI Follow this procedure to enable zone transfers in Identity Management (IdM) using the IdM command-line interface (CLI). Prerequisites You are logged in as IdM administrator. You have root access to the secondary DNS servers. Procedure To enable zone transfers in the BIND service, enter the ipa dnszone-mod command, and specify the list of name servers that are outside the to-be-transferred zone to which the zone records will be transferred using the --allow-transfer option. For example: Verification SSH to one of the DNS servers to which zone transfer has been enabled: Transfer the IdM DNS zone using a tool such as the dig utility: If the command returns no error, you have successfully enabled zone transfer for zone_name . 2.13. Additional resources See Using Ansible playbooks to manage IdM DNS zones . | [
"ipa dnszone-add newzone.idm.example.com",
"ipa dnszone-del idm.example.com",
"ipa dnszone-mod --retry 1800",
"ipa dnszone-mod --allow-transfer=192.0.2.1;198.51.100.1;203.0.113.1 idm.example.com",
"ssh 192.0.2.1",
"dig @ipa-server zone_name AXFR"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/working_with_dns_in_identity_management/managing-dns-zones-in-idm_working-with-dns-in-identity-management |
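To review the zone settings described in this chapter after modifying them, the current attribute values can be displayed from the IdM CLI. A short sketch using the example zone name from this chapter:

ipa dnszone-show idm.example.com --all
ipa dnsforwardzone-find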
2.5. Repeating the Site Survey | 2.5. Repeating the Site Survey There may need to be more than one site survey, particularly if an enterprise has offices in multiple cities or countries. The informational needs might be so complex that several different organizations have to keep information at their local offices rather than at a single, centralized site. In this case, each office that keeps a main copy of information should perform its own site survey. After the site survey process has been completed, the results of each survey should be returned to a central team (probably consisting of representatives from each office) for use in the design of the enterprise-wide data schema model and directory tree. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/performing_a_site_survey-repeating_the_site_survey |
function::task_egid | function::task_egid Name function::task_egid - The effective group identifier of the task Synopsis Arguments task task_struct pointer Description This function returns the effective group id of the given task. | [
"task_egid:long(task:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-egid |
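A small usage sketch for this function; it assumes task_current() is available from the task tapset to obtain the current task_struct pointer, and the probe point is illustrative only:

# print the effective group ID of each process calling open()
stap -e 'probe syscall.open { printf("%s egid=%d\n", execname(), task_egid(task_current())) }'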
Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices | Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Amazon Web Services (AWS) EBS (type, gp2-csi or gp3-csi ) that provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling and disabling key rotation when using KMS Security common practices require periodic encryption of key rotation. You can enable or disable key rotation when using KMS. 2.3.1.1. Enabling key rotation To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims , Namespace , or StorageClass (in the decreasing order of precedence). <value> can be @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.3.1.2. 
Disabling key rotation You can disable key rotation for the following: All the persistent volume claims (PVCs) of storage class A specific PVC Disabling key rotation for all PVCs of a storage class To disable key rotation for all PVCs, update the annotation of the storage class: Disabling key rotation for a specific persistent volume claim Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on: Where <PVC_NAME> is the name of the PVC that you want to disable. Apply the following to the EncryptionKeyRotationCronJob CR from the step to disable the key rotation: Update the csiaddons.openshift.io/state annotation from managed to unmanaged : Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR. Add suspend: true under the spec field: Save and exit. The key rotation will be disabled for the PVC. 2.4. Creating OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . As of OpenShift Data Foundation version 4.12, you can choose gp2-csi or gp3-csi as the storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. 
Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . 
In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 2.5.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server-* (1 pod on any storage node) ocs-client-operator-* (1 pod on any storage node) ocs-client-operator-console-* (1 pod on any storage node) ocs-provider-server-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) ceph-csi-operator ceph-csi-controller-manager-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.5.4.
2.5.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'",
"oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true",
"oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.",
"patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_amazon_web_services/deploy-using-dynamic-storage-devices-aws |
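A command-line equivalent of the storage class verification earlier in this chapter, assuming the default internal-mode names listed above, is:

# The three storage classes created by the OpenShift Data Foundation deployment
$ oc get storageclass ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io

If the names differ in your environment, list all storage classes with oc get storageclass and look for the classes backed by the ocs-storagecluster and noobaa provisioners.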
Chapter 5. User [user.openshift.io/v1] | Chapter 5. User [user.openshift.io/v1] Description Upon log in, every user of the system receives a User and Identity resource. Administrators may directly manipulate the attributes of the users for their own tracking, or set groups via the API. The user name is unique and is chosen based on the value provided by the identity provider - if a user already exists with the incoming name, the user name may have a number appended to it depending on the configuration of the system. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required groups 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources fullName string FullName is the full name of user groups array (string) Groups specifies group names this user is a member of. This field is deprecated and will be removed in a future release. Instead, create a Group object containing the name of this User. identities array (string) Identities are the identities associated with this user kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 5.2. API endpoints The following API endpoints are available: /apis/user.openshift.io/v1/users DELETE : delete collection of User GET : list or watch objects of kind User POST : create an User /apis/user.openshift.io/v1/watch/users GET : watch individual changes to a list of User. deprecated: use the 'watch' parameter with a list operation instead. /apis/user.openshift.io/v1/users/{name} DELETE : delete an User GET : read the specified User PATCH : partially update the specified User PUT : replace the specified User /apis/user.openshift.io/v1/watch/users/{name} GET : watch changes to an object of kind User. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /apis/user.openshift.io/v1/users HTTP method DELETE Description delete collection of User Table 5.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind User Table 5.3. HTTP responses HTTP code Reponse body 200 - OK UserList schema 401 - Unauthorized Empty HTTP method POST Description create an User Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body User schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK User schema 201 - Created User schema 202 - Accepted User schema 401 - Unauthorized Empty 5.2.2. /apis/user.openshift.io/v1/watch/users HTTP method GET Description watch individual changes to a list of User. deprecated: use the 'watch' parameter with a list operation instead. Table 5.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/user.openshift.io/v1/users/{name} Table 5.8. Global path parameters Parameter Type Description name string name of the User HTTP method DELETE Description delete an User Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified User Table 5.11. HTTP responses HTTP code Reponse body 200 - OK User schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified User Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. HTTP responses HTTP code Reponse body 200 - OK User schema 201 - Created User schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified User Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. Body parameters Parameter Type Description body User schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK User schema 201 - Created User schema 401 - Unauthorized Empty 5.2.4. /apis/user.openshift.io/v1/watch/users/{name} Table 5.17. Global path parameters Parameter Type Description name string name of the User HTTP method GET Description watch changes to an object of kind User. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/user_and_group_apis/user-user-openshift-io-v1 |
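The user.openshift.io/v1 endpoints listed above can also be exercised through the oc client, which calls the same API on your behalf. The following is a minimal sketch; it assumes you are logged in with sufficient permissions to read and manage User objects, and <username> is a placeholder:

# List User objects (GET /apis/user.openshift.io/v1/users)
$ oc get users.user.openshift.io

# Read a single User as YAML (GET /apis/user.openshift.io/v1/users/{name})
$ oc get user <username> -o yaml

# Create a User directly, for example for testing (POST /apis/user.openshift.io/v1/users)
$ oc create user <username> --full-name="Example User"

# Delete a User (DELETE /apis/user.openshift.io/v1/users/{name})
$ oc delete user <username>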
Chapter 13. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates | Chapter 13. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates In OpenShift Container Platform version 4.12, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 13.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 13.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 13.3. 
Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 13.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 13.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 13.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 13.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 13.3.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. 
If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 13.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 13.3.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) ARM64 instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 13.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 13.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 13.4. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate. 13.4.1. Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. 
With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. 
External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. 
Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 13.4.2. Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 13.4.3. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 13.3. 
Required EC2 permissions for installation ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:AttachNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 13.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing VPC, your account does not require these permissions for creating network resources. Example 13.5. Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:SetLoadBalancerPoliciesOfListener Example 13.6. Required Elastic Load Balancing permissions (ELBv2) for installation elasticloadbalancing:AddTags elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterTargets Example 13.7. 
Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagRole Note If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 13.8. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 13.9. Required S3 permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketPolicy s3:GetBucketObjectLockConfiguration s3:GetBucketReplication s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketTagging s3:PutEncryptionConfiguration Example 13.10. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 13.11. Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeletePlacementGroup ec2:DeleteNetworkInterface ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 13.12. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 13.13. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 13.14. Additional IAM and S3 permissions that are required to create manifests iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:PutBucketPublicAccessBlock s3:GetBucketPublicAccessBlock s3:PutLifecycleConfiguration s3:ListBucket s3:ListBucketMultipartUploads s3:AbortMultipartUpload Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 13.15. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas 13.5. 
Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific region. If you use the CloudFormation template to deploy your worker nodes, you must update the worker0.type.properties.ImageID parameter with this value. 13.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 13.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. 
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures. do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 13.8. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 13.8.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. 
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 13.8.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 13.8.3. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 13.8.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. 
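For reference, the generated scheduler manifest is typically similar to the following sketch; the exact fields and their order can vary between OpenShift Container Platform versions, so treat this as a representative example rather than the exact file contents:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}

If your generated file shows mastersSchedulable: true, change the value as described in the next step.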
Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. Optional: If you manually created a cloud identity and access management (IAM) role, locate any CredentialsRequest objects with the TechPreviewNoUpgrade annotation in the release image by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=<platform_name> Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-set: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. Delete all CredentialsRequest objects that have the TechPreviewNoUpgrade annotation. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 13.9. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 13.10. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. 
You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 13.10.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 13.16. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. 
(Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC 
PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 13.11. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . 
You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which as a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 13.11.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 13.17. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.8" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.8" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . You can view details about your hosted zones by navigating to the AWS Route 53 console . See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 13.12. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
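Optionally, before you create the stack, you can confirm that the AWS CLI is operating against the intended account and region. The following are standard AWS CLI commands and are not part of the official procedure:

$ aws sts get-caller-identity
$ aws configure get region

The first command returns the account ID and IAM identity in use, and the second returns the default region from your profile. Verify that both match the account and region where you created the VPC and subnets.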
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 13.12.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 13.18. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 13.13. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 13.14. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 13.3. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-073850a7021953a5c ap-east-1 ami-0f8800a05c09be42d ap-northeast-1 ami-0a226dbcc9a561c40 ap-northeast-2 ami-041ae0537e2eddec1 ap-northeast-3 ami-0bb8d9b69dc5b7670 ap-south-1 ami-0e9c18058fc5f94fd ap-southeast-1 ami-03022d358ba2168be ap-southeast-2 ami-09ffdc5be9b973be0 ap-southeast-3 ami-0facf1a0edeb20314 ca-central-1 ami-028cea206c2d03317 eu-central-1 ami-002eb441f329ccb0f eu-north-1 ami-0b1a1fb68b3b9fee7 eu-south-1 ami-0bd0fd41a1d3f799a eu-west-1 ami-04504e8799057980c eu-west-2 ami-0cc9297ddb3bce971 eu-west-3 ami-06f98f607a50937c6 me-south-1 ami-0fe39da7871a5b2a5 sa-east-1 ami-08265cc3226697767 us-east-1 ami-0fe05b1aa8dacfa90 us-east-2 ami-0ff64f495c7e977cf us-gov-east-1 ami-0c99658076c41872a us-gov-west-1 ami-0ca4acd5b8ba1cb1d us-west-1 ami-01dc5d8e6bb6f23f4 us-west-2 ami-0404a109adfd00019 Table 13.4. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-0574bcc5f80b0ad9a ap-east-1 ami-0a65e79822ae2d235 ap-northeast-1 ami-0f7ef19d48e22353b ap-northeast-2 ami-051dc6de359975e3c ap-northeast-3 ami-0fd0b4222595650ac ap-south-1 ami-05f9d14fe4a90ed6f ap-southeast-1 ami-0afdb9133d22fba5f ap-southeast-2 ami-0ef979abe82d07d44 ap-southeast-3 ami-025f9103ac4310e7f ca-central-1 ami-0588cdf59e5c14847 eu-central-1 ami-0ef24c0e18f93fa42 eu-north-1 ami-0439e2a3bf315df1a eu-south-1 ami-0714e7c2e0106cdd3 eu-west-1 ami-0b960e76764ccd0c3 eu-west-2 ami-02621f50de62b3b89 eu-west-3 ami-0933ce7f5e2bfb50e me-south-1 ami-074bde61a2ab740ee sa-east-1 ami-03b4f97cfc8033ae0 us-east-1 ami-02a574449d4f4d280 us-east-2 ami-020e5600ef28c60ae us-gov-east-1 ami-069f60e1dcf766d24 us-gov-west-1 ami-0db3cda4dbaccda02 us-west-1 ami-0c90cabeb5dee3178 us-west-2 ami-0f96437a23aeae53f 13.14.1. AWS regions without a published RHCOS AMI You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs. A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file. 13.14.2. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . 
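Optionally, before you start the import, you can confirm that the RHCOS VMDK object is present in your S3 bucket and that the VM Import/Export service role exists. These checks use standard AWS CLI commands; <s3_bucket_name> is the bucket from the prerequisites, and vmimport is the default name that the VM Import/Export service expects for its service role:

$ aws s3 ls s3://<s3_bucket_name>/
$ aws iam get-role --role-name vmimport

If either command fails, revisit the prerequisites before you continue with the procedure.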
Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.12.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 13.15. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. 
If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 
2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 
4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 13.15.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 13.19. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. 
Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 13.16. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 
"ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64. and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 
36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 13.16.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 13.20. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: 
Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 13.17. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. 
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to start the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the bootstrap Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example m6i.large is a type for AMD64. and m6g.large is a type for ARM64. 
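If you prefer to script these lookups, you can read most of the values from the outputs of the stacks that you created earlier and from the worker.ign file rather than copying them by hand. The following sketch is a non-authoritative example: the stack name cluster-sg-roles and the <installation_directory> path are assumptions, and the jq path mirrors the Ignition structure shown in the CloudFormation templates.

# Sketch: read worker parameter values from earlier stack outputs and worker.ign.
# "cluster-sg-roles" is a placeholder for the name of your security group and roles stack.
stack_output() {
  aws cloudformation describe-stacks --stack-name "$1" \
    --query "Stacks[0].Outputs[?OutputKey=='$2'].OutputValue" --output text
}

WORKER_SG="$(stack_output cluster-sg-roles WorkerSecurityGroupId)"
WORKER_PROFILE="$(stack_output cluster-sg-roles WorkerInstanceProfile)"
CA="$(jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/worker.ign)"

echo "WorkerSecurityGroupId:     ${WORKER_SG}"
echo "WorkerInstanceProfileName: ${WORKER_PROFILE}"
echo "CertificateAuthorities:    ${CA:0:40}..."   # print only a prefix as a sanity check

You can paste the printed values into the parameters JSON, or extend the sketch to write the JSON file directly.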
Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the networking objects and load balancers that your cluster requires. Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 13.17.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 13.21. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 13.18. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. 
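If the command stalls or exits with a FATAL error instead, you can collect diagnostic data from the bootstrap and control plane machines before contacting support; see Gathering bootstrap node diagnostic data in the additional resources below. A minimal sketch, assuming the example stack names cluster-bootstrap and cluster-control-plane that are used elsewhere in this document:

# Sketch: gather bootstrap diagnostics by using the stack outputs noted earlier.
# Substitute your own stack names and installation directory.
BOOTSTRAP_IP="$(aws cloudformation describe-stacks --stack-name cluster-bootstrap \
  --query "Stacks[0].Outputs[?OutputKey=='BootstrapPublicIp'].OutputValue" --output text)"
MASTER_IPS="$(aws cloudformation describe-stacks --stack-name cluster-control-plane \
  --query "Stacks[0].Outputs[?OutputKey=='PrivateIPs'].OutputValue" --output text)"

# Pass --master once per control plane address; only the first address is shown here.
./openshift-install gather bootstrap --dir <installation_directory> \
  --bootstrap "${BOOTSTRAP_IP}" \
  --master "$(echo "${MASTER_IPS}" | cut -d, -f1)"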
Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. You can view details about the running instances that are created by using the AWS EC2 console . 13.19. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 13.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 13.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 13.22. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. 13.22.1. 
Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information. 13.22.1.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 13.22.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 13.23. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 13.24. 
Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 
3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 13.25. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 13.26. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation.
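If you only need the console URL, a quick way to retrieve it from the command line, shown here as an optional convenience rather than as part of the documented procedure, is: USD oc whoami --show-console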
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 13.27. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 13.28. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 13.29. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . | [
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=<platform_name>",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ]",
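"# Optional pre-flight check (an illustrative addition, not an original step): validate the template syntax before creating the stack
aws cloudformation validate-template --template-body file://<template>.yaml",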
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_aws/installing-aws-user-infra |
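Each create-stack call above returns only the stack ARN; the generated values come from a follow-up describe-stacks call. A minimal sketch of waiting for a stack to finish and printing just its Outputs section follows; <name> is the same placeholder used above, and the --query filter is one convenient choice rather than a required one.

aws cloudformation wait stack-create-complete --stack-name <name>    # blocks until the stack reaches CREATE_COMPLETE or fails
aws cloudformation describe-stacks --stack-name <name> \
  --query 'Stacks[0].Outputs' --output table                         # print only the template outputs needed by later steps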
7.3. Diagnosing and Correcting Problems in a Cluster | 7.3. Diagnosing and Correcting Problems in a Cluster For information about diagnosing and correcting problems in a cluster, see Chapter 10, Diagnosing and Correcting Problems in a Cluster . There are a few simple checks that you can perform with the ccs command, however. To verify that all of the nodes specified in the host's cluster configuration file have identical cluster configuration files, execute the following command: If you have created or edited a configuration file on a local node, you can verify that all of the nodes specified in the local file have identical cluster configuration files with the following command: | [
"ccs -h host --checkconf",
"ccs -f file --checkconf"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-admin-problems-ccs-ca |
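As a concrete illustration of the two forms above, the host variant takes the name of any cluster node and the file variant takes a locally stored configuration file; the node name below is hypothetical and /etc/cluster/cluster.conf is the usual location of the cluster configuration file.

ccs -h node01.example.com --checkconf          # compare against the configuration held by a running node
ccs -f /etc/cluster/cluster.conf --checkconf   # compare against a locally created or edited configuration file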
Chapter 2. Deploy using dynamic storage devices | Chapter 2. Deploy using dynamic storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by VMware vSphere (disk format: thin) provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Note Both internal and external OpenShift Data Foundation clusters are supported on VMware vSphere. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. 
Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . For VMs on VMware, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab . For more information, see Installing on vSphere . Optional: If you want to use thick-provisioned storage for flexibility, you must create a storage class with zeroedthick or eagerzeroedthick disk format. For information, see VMware vSphere object definition . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . 
Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to thin . If you have created a storage class with zeroedthick or eagerzeroedthick disk format for thick-provisioned storage, then that storage class is listed in addition to the default, thin storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Spread the worker nodes across three different physical nodes, racks, or failure domains for high availability. Use vCenter anti-affinity to align OpenShift Data Foundation rack labels with physical nodes and racks in the data center to avoid scheduling two worker nodes on the same physical chassis. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of the aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Select the Taint nodes checkbox to make selected nodes dedicated for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). 
StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. 
To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_vmware_vsphere/deploy-using-dynamic-storage-devices-vmware |
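The verification steps above rely on the web console; a rough command-line equivalent is sketched below. It assumes the default openshift-storage namespace and the ocs-storagecluster resources created earlier in this chapter, so adjust the names if your deployment differs.

oc get csv -n openshift-storage                             # the OpenShift Data Foundation CSV should report Succeeded
oc get storagesystem,storagecluster -n openshift-storage    # the StorageCluster should reach the Ready phase
oc get pods -n openshift-storage                            # all pods should be Running or Completed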
Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service | Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service Red Hat Developer Hub 1.4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_microsoft_azure_kubernetes_service/index |
Chapter 4. RHEL System Roles for SAP | Chapter 4. RHEL System Roles for SAP The RHEL System Roles for SAP provide a quick, easy, and consistent method for preparing your local system or any number of remote systems according to applicable SAP notes for SAP software. They include the Ansible roles sap_general_preconfigure , sap_netweaver_preconfigure , and sap_hana_preconfigure and require an Ansible execution system (e.g., Ansible Automation Platform, Ansible Core). 4.1. Installing Ansible Core Ansible Core is available in the RHEL 9 AppStream repository. If you already have an Ansible Automation Platform or Ansible Core package installed, you can skip this step and proceed to Installing RHEL System Roles for SAP . Prerequisites You have system administrator access. Procedure Install the ansible-core package: 4.2. Installing RHEL System Roles for SAP RHEL System Roles for SAP is available in the RHEL for SAP Solutions r epository. It requires certain functionality delivered in the RHEL System Roles, which are available in the AppStream repository. Prerequisites You have system administrator access. You have installed the Ansible Core package or Ansible Automation Platform. Procedure Install RHEL System Roles for SAP and RHEL System Roles: 4.3. System configuration with RHEL System Roles for SAP 4.3.1. Preparing a local system If the Ansible Engine is installed on the same system on which you want to install the SAP software, perform the steps outlined in this procedure to configure your local managed node. Prerequisites You have system administrator access. Procedure Make a backup of the system if you would like to preserve the original configuration of the server. Create a file named sap.yml with the following content: --- - hosts: localhost vars: ansible_connection: local sap_general_preconfigure_max_hostname_length: 64 sap_general_preconfigure_reboot_ok: false sap_general_preconfigure_fail_if_reboot_required: false sap_hana_preconfigure_reboot_ok: false sap_hana_preconfigure_fail_if_reboot_required: false sap_hana_preconfigure_update: true roles: - sap_general_preconfigure - sap_netweaver_preconfigure - sap_hana_preconfigure Important The correct indentation and the use of spaces instead of tabs is essential for YAML files. Note The line sap_general_preconfigure_max_hostname_length: 64 is only required if your hostname ( hostname -s ) is longer than 13 characters and if you are not using this system for an SAP ABAP Platform instance. Without this line, the role sap_general_preconfigure will fail its hostname check because a hostname with more than 13 characters is not allowed for an SAP ABAP Platform instance as per SAP note 611361 . The line sap_netweaver_preconfigure is used to perform specific installation and configuration steps for an SAP ABAP Platform. It can be removed or commented out for an SAP HANA database only system. The line sap_hana_preconfigure is used to perform specific installation and configuration steps for an SAP HANA database. It can be removed or commented out for an SAP ABAP Platform only system. Run the sap.yml Ansible playbook: This will configure this system according to the applicable SAP notes for SAP ABAP Platform and/or SAP HANA on RHEL 9. After the ansible-playbook command has finished successfully, reboot the system: 4.3.2. Preparing one or more remote systems If the Ansible Engine is installed on the same system on which you want to install the SAP software, perform the steps outlined in this procedure to configure your local managed node. 
Prerequisites You have system administrator access. Procedure Make a backup of the remote systems if you would like to preserve the original configuration of the server. Create an inventory file or modify file /etc/ansible/hosts to contain the name of a group of hosts and each system which you intend to configure (=managed node) in a separate line (example for three hosts in a host group named sap_hosts ): [sap_hosts] host01 host02 host03 Verify that you can log in to all three hosts using ssh without a password. Example: Create a YAML file named sap.yml with the following content: --- - hosts: sap_hosts vars: sap_general_preconfigure_max_hostname_length: 64 sap_general_preconfigure_reboot_ok: false sap_general_preconfigure_fail_if_reboot_required: false sap_hana_preconfigure_reboot_ok: true sap_hana_preconfigure_fail_if_reboot_required: false sap_hana_preconfigure_update: true roles: - sap_general_preconfigure - sap_netweaver_preconfigure - sap_hana_preconfigure Note The line sap_general_preconfigure_max_hostname_length: 64 is only required if your hostname ( hostname -s ) is longer than 13 characters and if you are not using this system for an SAP ABAP Platform instance. Without this line, the role sap_general_preconfigure will fail its hostname check because a hostname with more than 13 characters is not allowed for an SAP ABAP Platform instance as per SAP note 611361 . The line sap_netweaver_preconfigure is used to perform specific installation and configuration steps for an SAP ABAP Platform. It can be removed or commented out for an SAP HANA database only system. The line sap_hana_preconfigure is used to perform specific installation and configuration steps for an SAP HANA database. It can be removed or commented out for an SAP ABAP Platform only system. Run the sap.yml Ansible playbook: This will configure all systems that are part of host group sap_hosts according to the applicable SAP notes for SAP ABAP Platform and/or SAP HANA on RHEL 9. Finally, if necessary, the systems are rebooted. Additional resources RHEL System Roles for SAP | [
"dnf install ansible-core",
"dnf install rhel-system-roles-sap rhel-system-roles",
"--- - hosts: localhost vars: ansible_connection: local sap_general_preconfigure_max_hostname_length: 64 sap_general_preconfigure_reboot_ok: false sap_general_preconfigure_fail_if_reboot_required: false sap_hana_preconfigure_reboot_ok: false sap_hana_preconfigure_fail_if_reboot_required: false sap_hana_preconfigure_update: true roles: - sap_general_preconfigure - sap_netweaver_preconfigure - sap_hana_preconfigure",
"ansible-playbook sap.yml -e 'ansible_python_interpreter=/usr/libexec/platform-python'",
"reboot",
"[sap_hosts] host01 host02 host03",
"ssh host01 uname -a ssh host02 hostname ssh host03 echo test",
"--- - hosts: sap_hosts vars: sap_general_preconfigure_max_hostname_length: 64 sap_general_preconfigure_reboot_ok: false sap_general_preconfigure_fail_if_reboot_required: false sap_hana_preconfigure_reboot_ok: true sap_hana_preconfigure_fail_if_reboot_required: false sap_hana_preconfigure_update: true roles: - sap_general_preconfigure - sap_netweaver_preconfigure - sap_hana_preconfigure",
"ansible-playbook sap.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/installing_rhel_9_for_sap_solutions/assembly_rhel-system-roles-for-sap_configuring-rhel-9-for-sap-hana2-installation |
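The remote-systems procedure assumes passwordless SSH from the control node to every managed node. One common way to set that up, assuming a root connection and the host names from the example inventory, is sketched below; adjust the user and host list for your environment.

ssh-keygen -t rsa -b 4096                  # create a key pair on the control node if one does not already exist
for h in host01 host02 host03; do
  ssh-copy-id root@"$h"                    # install the public key on each managed node
done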
3. Common Usage | 3. Common Usage For clarity, this document includes cURL examples for all use cases and includes examples using other frameworks as a courtesy from Red Hat. cURL most clearly illustrates the nature of interacting with RESTful resources. For more information, see the man page for cURL by using the command man curl . | null | https://docs.redhat.com/en/documentation/red_hat_customer_portal/1/html/customer_portal_integration_guide/common_usage |
Chapter 43. Role-based access control for branches in Business Central | Chapter 43. Role-based access control for branches in Business Central Business Central provides the option for users to restrict the access for a target branch for a specific collaborator type. The security check uses both the Security Management screen and contributors sources to grant or deny permissions to spaces and projects. For example, if a user has the security permission to update a project and has write permission on that branch, based on the contributor type, then they are able to create new assets. 43.1. Customizing role-based branch access You can customize contributor role permissions for each branch of a project in Business Central. For example, you can set Read , Write , Delete , and Deploy access for each role assigned to a branch. Procedure In Business Central, go to Menu Design Projects . If needed, add a new contributor: Click the project name and then click the Contributors tab. Click Add Contributor . Enter user name in the text field. Select the Contributor role type from the drop-down list. Click Ok . Customize role-based branch access for the relevant contributor: Click Settings Branch Management . Select the branch name from the drop-down list. In the Role Access section, select or deselect the permissions check boxes to specify role-based branch access for each available role type. Click Save and click Save again to confirm your changes. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/role-based-access |
Chapter 21. Integrating with NIS Domains and Netgroups | Chapter 21. Integrating with NIS Domains and Netgroups 21.1. About NIS and Identity Management In UNIX environments, the network information service (NIS) is a common way to centrally manage identities and authentication. NIS, which was originally named Yellow Pages (YP), centrally manages authentication and identity information such as: Users and passwords Host names and IP addresses POSIX groups. For modern network infrastructures, NIS is considered too insecure because, for example, it neither provides host authentication, nor is data sent encrypted over the network. To work around the problems, NIS is often integrated with other protocols to enhance security. If you use Identity Management (IdM), you can use the NIS server plug-in to connect clients that cannot be fully migrated to IdM. IdM integrates netgroups and other NIS data into the IdM domain. Additionally, you can easily migrate user and host identities from a NIS domain to IdM. NIS in Identity Management NIS objects are integrated and stored in the Directory Server back end in compliance with RFC 2307 . IdM creates NIS objects in the LDAP directory and clients retrieve them through, for example, System Security Services Daemon (SSSD) or nss_ldap using an encrypted LDAP connection. IdM manages netgroups, accounts, groups, hosts, and other data. IdM uses a NIS listener to map passwords, groups, and netgroups to IdM entries. NIS Plug-ins in Identity Management For NIS support, IdM uses the following plug-ins provided in the slapi-nis package: NIS Server Plug-in The NIS Server plug-in enables the IdM-integrated LDAP server to act as a NIS server for clients. In this role, Directory Server dynamically generates and updates NIS maps according to the configuration. Using the plug-in, IdM serves clients using the NIS protocol as an NIS server. For further details, see Section 21.2, "Enabling NIS in Identity Management" . Schema Compatibility Plug-in The Schema Compatibility plug-in enables the Directory Server back end to provide an alternate view of entries stored in part of the directory information tree (DIT). This includes adding, dropping, or renaming attribute values, and optionally retrieving values for attributes from multiple entries in the tree. For further details, see the /usr/share/doc/slapi-nis- version /sch-getting-started.txt file. 21.1.1. NIS Netgroups in Identity Management NIS entities can be stored in netgroups. Compared to UNIX groups, netgroups provide support for: Nested groups (groups as members of other groups). Grouping hosts. A netgroup defines a set of the following information: host, user, and domain. This set is called a triple . These three fields can contain: A value. A dash ( - ), which specifies "no valid value" No value. An empty field specifies a wildcard. When a client requests a NIS netgroup, IdM translates the LDAP entry : to a traditional NIS map and sends it to the client over the NIS protocol by using the NIS plug-in. to an LDAP format that is compliant with RFC 2307 or RFC 2307bis. 21.1.1.1. Displaying NIS Netgroup Entries IdM stores users and groups in the memberUser attribute, and hosts and host groups in memberHost . The following example shows a netgroup entry in Directory Server component of IdM: Example 21.1. A NIS Entry in Directory Server In IdM, you can manage netgroup entries using the ipa netgroup-* commands. For example, to display a netgroup entry: Example 21.2. Displaying a Netgroup Entry | [
"( host.example.com ,, nisdomain.example.com ) (-, user , nisdomain.example.com )",
"dn: ipaUniqueID=d4453480-cc53-11dd-ad8b-0800200c9a66,cn=ng,cn=alt, cn: netgroup1 memberHost: fqdn=host1.example.com,cn=computers,cn=accounts, memberHost: cn=VirtGuests,cn=hostgroups,cn=accounts, memberUser: cn=demo,cn=users,cn=accounts, memberUser: cn=Engineering,cn=groups,cn=accounts, nisDomainName: nisdomain.example.com",
"ipa netgroup-show netgroup1 Netgroup name: netgroup1 Description: my netgroup NIS domain name: nisdomain.example.com Member Host: VirtGuests Member Host: host1.example.com Member User: demo Member User: Engineering"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/nis |
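The netgroup1 entry shown in the examples can be created and populated with the ipa netgroup-* commands. The sketch below mirrors the members from Example 21.2; the option names reflect the IdM command-line interface and should be confirmed against ipa netgroup-add --help on your system.

ipa netgroup-add netgroup1 --desc="my netgroup" --nisdomain=nisdomain.example.com
ipa netgroup-add-member netgroup1 --users=demo --groups=Engineering \
  --hosts=host1.example.com --hostgroups=VirtGuests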
Chapter 6. Configuring the discovery image | Chapter 6. Configuring the discovery image The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can use Ignition to customize the discovery image. Note Modifications to the discovery image will not persist in the system. 6.1. Creating an Ignition configuration file Ignition is a low-level system configuration utility, which is part of the temporary initial root filesystem, the initramfs . When Ignition runs on the first boot, it finds configuration data in the Ignition configuration file and applies it to the host before switch_root is called to pivot to the host's root filesystem. Ignition uses a JSON configuration specification file to represent the set of changes that occur on the first boot. Important Ignition versions newer than 3.2 are not supported, and will raise an error. Procedure Create an Ignition file and specify the configuration specification version: USD vim ~/ignition.conf { "ignition": { "version": "3.1.0" } } Add configuration data to the Ignition file. For example, add a password to the core user. Generate a password hash: USD openssl passwd -6 Add the generated password hash to the core user: { "ignition": { "version": "3.1.0" }, "passwd": { "users": [ { "name": "core", "passwordHash": "USD6USDspamUSDM5LGSMGyVD.9XOboxcwrsnwNdF4irpJdAWy.1Ry55syyUiUssIzIAHaOrUHr2zg6ruD8YNBPW9kW0H8EnKXyc1" } ] } } Save the Ignition file and export it to the IGNITION_FILE variable: USD export IGNITION_FILE=~/ignition.conf 6.2. Modifying the discovery image with Ignition Once you create an Ignition configuration file, you can modify the discovery image by patching the infrastructure environment using the Assisted Installer API. Prerequisites If you used the web console to create the cluster, you have set up the API authentication. You have an infrastructure environment and you have exported the infrastructure environment id to the INFRA_ENV_ID variable. You have a valid Ignition file and have exported the file name as USDIGNITION_FILE . Procedure Create an ignition_config_override JSON object and redirect it to a file: USD jq -n \ --arg IGNITION "USD(jq -c . USDIGNITION_FILE)" \ '{ignition_config_override: USDIGNITION}' \ > discovery_ignition.json Refresh the API token: USD source refresh-token Patch the infrastructure environment: USD curl \ --header "Authorization: Bearer USDAPI_TOKEN" \ --header "Content-Type: application/json" \ -XPATCH \ -d @discovery_ignition.json \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID | jq The ignition_config_override object references the Ignition file. Download the updated discovery image. | [
"vim ~/ignition.conf",
"{ \"ignition\": { \"version\": \"3.1.0\" } }",
"openssl passwd -6",
"{ \"ignition\": { \"version\": \"3.1.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"passwordHash\": \"USD6USDspamUSDM5LGSMGyVD.9XOboxcwrsnwNdF4irpJdAWy.1Ry55syyUiUssIzIAHaOrUHr2zg6ruD8YNBPW9kW0H8EnKXyc1\" } ] } }",
"export IGNITION_FILE=~/ignition.conf",
"jq -n --arg IGNITION \"USD(jq -c . USDIGNITION_FILE)\" '{ignition_config_override: USDIGNITION}' > discovery_ignition.json",
"source refresh-token",
"curl --header \"Authorization: Bearer USDAPI_TOKEN\" --header \"Content-Type: application/json\" -XPATCH -d @discovery_ignition.json https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID | jq"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_configuring-the-discovery-image |
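As a rough end-to-end illustration of the Ignition-override flow described above, the following sketch chains the hash generation, the ignition_config_override wrapper, and the PATCH call into one script. It assumes the API_TOKEN and INFRA_ENV_ID environment variables are already set (for example by the token-refresh helper mentioned above); the password shown is only a placeholder.

#!/usr/bin/env bash
# Sketch: set a core user password in the discovery image via an Ignition override.
set -euo pipefail

IGNITION_FILE=~/ignition.conf
HASH="$(openssl passwd -6 'changeme')"    # placeholder password; replace it

# Write a version 3.1.0 Ignition config containing the generated hash.
cat > "$IGNITION_FILE" <<EOF
{
  "ignition": { "version": "3.1.0" },
  "passwd": {
    "users": [
      { "name": "core", "passwordHash": "$HASH" }
    ]
  }
}
EOF

# Wrap the Ignition config in the ignition_config_override field.
jq -n --arg IGNITION "$(jq -c . "$IGNITION_FILE")" \
  '{ignition_config_override: $IGNITION}' > discovery_ignition.json

# Patch the infrastructure environment with the override; download the
# updated discovery image afterwards as described above.
curl --header "Authorization: Bearer $API_TOKEN" \
  --header "Content-Type: application/json" \
  -XPATCH -d @discovery_ignition.json \
  "https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID" | jq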
3.12. RFKill | 3.12. RFKill Many computer systems contain radio transmitters, including Wi-Fi, Bluetooth, and 3G devices. These devices consume power, which is wasted when the device is not in use. RFKill is a subsystem in the Linux kernel that provides an interface through which radio transmitters in a computer system can be queried, activated, and deactivated. When transmitters are deactivated, they can be placed in a state where software can reactivate them (a soft block ) or where software cannot reactivate them (a hard block ). The RFKill core provides the application programming interface (API) for the subsystem. Kernel drivers that have been designed to support RFKill use this API to register with the kernel, and include methods for enabling and disabling the device. Additionally, the RFKill core provides notifications that user applications can interpret and ways for user applications to query transmitter states. The RFKill interface is located at /dev/rfkill , which contains the current state of all radio transmitters on the system. Each device has its current RFKill state registered in sysfs . Additionally, RFKill issues uevents for each change of state in an RFKill-enabled device. rfkill is a command-line tool with which you can query and change RFKill-enabled devices on the system. To obtain the tool, install the rfkill package. Use the command rfkill list to obtain a list of devices, each of which has an index number associated with it, starting at 0 . You can use this index number to tell rfkill to block or unblock a device, for example: blocks the first RFKill-enabled device on the system. You can also use rfkill to block certain categories of devices, or all RFKill-enabled devices. For example: blocks all Wi-Fi devices on the system. To block all RFKill-enabled devices, run: To unblock devices, run rfkill unblock instead of rfkill block . To obtain a full list of device categories that rfkill can block, run rfkill help . | [
"~]# rfkill block 0",
"~]# rfkill block wifi",
"~]# rfkill block all"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/rfkill |
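As a quick illustration of the commands described above, here is a sketch of a typical rfkill session; the device names, index numbers, and output are examples and will differ between systems.

~]# rfkill list
0: phy0: Wireless LAN
	Soft blocked: no
	Hard blocked: no
1: hci0: Bluetooth
	Soft blocked: no
	Hard blocked: no
~]# rfkill block wifi
~]# rfkill list 0
0: phy0: Wireless LAN
	Soft blocked: yes
	Hard blocked: no
~]# rfkill unblock wifi

A hard block, for example one set by a hardware kill switch, is reported as Hard blocked: yes and cannot be cleared with rfkill unblock.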
Chapter 3. Completing the Service Telemetry Framework configuration | Chapter 3. Completing the Service Telemetry Framework configuration 3.1. Connecting Red Hat OpenStack Platform to Service Telemetry Framework To collect metrics, events, or both, and to send them to the Service Telemetry Framework (STF) storage domain, you must configure the Red Hat OpenStack Platform overcloud to enable data collection and transport. To deploy data collection and transport to STF on Red Hat OpenStack Platform cloud nodes that employ routed L3 domains, such as distributed compute node (DCN) or spine-leaf, see Section 3.2, "Deploying to non-standard network topologies" . 3.2. Deploying to non-standard network topologies If your nodes are on a separate network from the default InternalApi network, you must make configuration adjustments so that AMQ Interconnect can transport data to the Service Telemetry Framework (STF) server instance. This scenario is typical in a spine-leaf or a DCN topology. For more information about DCN configuration, see the Spine Leaf Networking guide. If you use STF with Red Hat OpenStack Platform 16.0 and plan to monitor your Ceph, Block, or Object storage nodes, you must make configuration changes that are similar to the configuration changes that you make to the spine-leaf and DCN network configuration. To monitor Ceph nodes, use the CephStorageExtraConfig parameter to define which network interface to load into the AMQ Interconnect and collectd configuration files. CephStorageExtraConfig: tripleo::profile::base::metrics::collectd::amqp_host: "%{hiera('storage')}" tripleo::profile::base::metrics::qdr::listener_addr: "%{hiera('storage')}" tripleo::profile::base::ceilometer::agent::notification::notifier_host_addr: "%{hiera('storage')}" Similarly, you must specify BlockStorageExtraConfig and ObjectStorageExtraConfig parameters if your environment uses Block and Object storage roles. The deployment of a spine-leaf topology involves creating roles and networks, then assigning those networks to the available roles. When you configure data collection and transport for STF for an Red Hat OpenStack Platform deployment, the default network for roles is InternalApi . For Ceph, Block and Object storage roles, the default network is Storage . Because a spine-leaf configuration can result in different networks being assigned to different Leaf groupings and those names are typically unique, additional configuration is required in the parameter_defaults section of the Red Hat OpenStack Platform environment files. Procedure Document which networks are available for each of the Leaf roles. For examples of network name definitions, see Creating a network data file in the Spine Leaf Networking guide. For more information about the creation of the Leaf groupings (roles) and assignment of the networks to those groupings, see Creating a roles data file in the Spine Leaf Networking guide. Add the following configuration example to the ExtraConfig section for each of the leaf roles. In this example, internal_api_subnet is the value defined in the name_lower parameter of your network definition (with _subnet appended to the name for Leaf 0) , and is the network to which the ComputeLeaf0 leaf role is connected. In this case, the network identification of 0 corresponds to the Compute role for leaf 0, and represents a value that is different from the default internal API network name. 
For the ComputeLeaf0 leaf role, specify extra configuration to perform a hiera lookup to determine which network interface for a particular network to assign to the collectd AMQP host parameter. Perform the same configuration for the AMQ Interconnect listener address parameter. ComputeLeaf0ExtraConfig: tripleo::profile::base::metrics::collectd::amqp_host: "%{hiera('internal_api_subnet')}" tripleo::profile::base::metrics::qdr::listener_addr: "%{hiera('internal_api_subnet')}" Additional leaf roles typically replace _subnet with _leafN where N represents a unique identifier for the leaf. ComputeLeaf1ExtraConfig: tripleo::profile::base::metrics::collectd::amqp_host: "%{hiera('internal_api_leaf1')}" tripleo::profile::base::metrics::qdr::listener_addr: "%{hiera('internal_api_leaf1')}" This example configuration is on a CephStorage leaf role: CephStorageLeaf0ExtraConfig: tripleo::profile::base::metrics::collectd::amqp_host: "%{hiera('storage_subnet')}" tripleo::profile::base::metrics::qdr::listener_addr: "%{hiera('storage_subnet')}"
Replace the host parameter with the value of HOST/PORT that you retrieved in Section 3.3.1, "Retrieving the AMQ Interconnect route address" : parameter_defaults: EventPipelinePublishers: [] CeilometerQdrPublishEvents: true MetricsQdrConnectors: - host: stf-default-interconnect-5671-service-telemetry.apps.infra.watch port: 443 role: edge sslProfile: sslProfile verifyHostname: false Add the following files to your Red Hat OpenStack Platform director deployment to setup collectd and AMQ Interconnect: the stf-connectors.yaml environment file the enable-stf.yaml file that ensures that the environment is being used during the overcloud deployment the ceilometer-write-qdr.yaml file that ensures that Ceilometer telemetry is sent to STF openstack overcloud deploy <other arguments> --templates /usr/share/openstack-tripleo-heat-templates \ --environment-file <...other-environment-files...> \ --environment-file /usr/share/openstack-tripleo-heat-templates/environments/metrics/ceilometer-write-qdr.yaml \ --environment-file /usr/share/openstack-tripleo-heat-templates/environments/enable-stf.yaml \ --environment-file /home/stack/stf-connectors.yaml Deploy the Red Hat OpenStack Platform overcloud. 3.3.3. Validating client-side installation To validate data collection from the STF storage domain, query the data sources for delivered data. To validate individual nodes in the Red Hat OpenStack Platform deployment, connect to the console using SSH. Procedure Log in to an overcloud node, for example, controller-0. Ensure that metrics_qdr container is running on the node: USD sudo podman container inspect --format '{{.State.Status}}' metrics_qdr running Return the internal network address on which AMQ Interconnect is running, for example, 172.17.1.44 listening on port 5666 : USD sudo podman exec -it metrics_qdr cat /etc/qpid-dispatch/qdrouterd.conf listener { host: 172.17.1.44 port: 5666 authenticatePeer: no saslMechanisms: ANONYMOUS } Return a list of connections to the local AMQ Interconnect: USD sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --connections Connections id host container role dir security authentication tenant ============================================================================================================================================================================================================================================================================================ 1 stf-default-interconnect-5671-service-telemetry.apps.infra.watch:443 stf-default-interconnect-7458fd4d69-bgzfb edge out TLSv1.2(DHE-RSA-AES256-GCM-SHA384) anonymous-user 12 172.17.1.44:60290 openstack.org/om/container/controller-0/ceilometer-agent-notification/25/5c02cee550f143ec9ea030db5cccba14 normal in no-security no-auth 16 172.17.1.44:36408 metrics normal in no-security anonymous-user 899 172.17.1.44:39500 10a2e99d-1b8a-4329-b48c-4335e5f75c84 normal in no-security no-auth There are four connections: Outbound connection to STF Inbound connection from collectd Inbound connection from ceilometer Inbound connection from our qdstat client The outbound STF connection is provided to the MetricsQdrConnectors host parameter and is the route for the STF storage domain. The other hosts are internal network addresses of the client connections to this AMQ Interconnect. 
To ensure that messages are being delivered, list the links, and view the _edge address in the deliv column for delivery of messages: USD sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --links Router Links type dir conn id id peer class addr phs cap pri undel unsett deliv presett psdrop acc rej rel mod delay rate =========================================================================================================================================================== endpoint out 1 5 local _edge 250 0 0 0 2979926 2979924 0 0 0 2 0 0 0 endpoint in 1 6 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 7 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 8 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 9 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 10 250 0 0 0 911 911 0 0 0 0 0 911 0 endpoint in 1 11 250 0 0 0 0 911 0 0 0 0 0 0 0 endpoint out 12 32 local temp.lSY6Mcicol4J2Kp 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 16 41 250 0 0 0 2979924 2979924 0 0 0 0 0 0 0 endpoint in 912 1834 mobile USDmanagement 0 250 0 0 0 1 0 0 1 0 0 0 0 0 endpoint out 912 1835 local temp.9Ok2resI9tmt+CT 250 0 0 0 0 0 0 0 0 0 0 0 0 To list the addresses from Red Hat OpenStack Platform nodes to STF, connect to OCP to get the AMQ Interconnect pod name and list the connections. List the available AMQ Interconnect pods: USD oc get pods -l application=stf-default-interconnect NAME READY STATUS RESTARTS AGE stf-default-interconnect-7458fd4d69-bgzfb 1/1 Running 0 6d21h Connect to the pod and run the qdstat --connections command to list the known connections: USD oc exec -it stf-default-interconnect-7458fd4d69-bgzfb -- qdstat --connections 2020-04-21 18:25:47.243852 UTC stf-default-interconnect-7458fd4d69-bgzfb Connections id host container role dir security authentication project last dlv uptime ====================================================================================================================================================================================================== 1 10.129.2.21:43062 rcv[stf-default-collectd-telemetry-smartgateway-79c967c8f7-kq4qv] normal in no-security anonymous-user 000:00:00:00 006:21:50:25 2 10.130.0.52:55754 rcv[stf-default-ceilometer-notification-smartgateway-6675df547mbjk5] normal in no-security anonymous-user 000:21:25:57 006:21:49:36 3 10.130.0.51:43110 rcv[stf-default-collectd-notification-smartgateway-698c87fbb7-f28v6] normal in no-security anonymous-user 000:21:36:53 006:21:49:09 22 10.128.0.1:51948 Router.ceph-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 23 10.128.0.1:51950 Router.compute-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 24 10.128.0.1:52082 Router.controller-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:00 000:22:08:34 27 127.0.0.1:42202 c2f541c1-4c97-4b37-a189-a396c08fb079 normal in no-security no-auth 000:00:00:00 000:00:00:00 In this example, there are three edge connections from the Red Hat OpenStack Platform nodes with connection id 22, 23, and 24. 
To view the number of messages delivered by the network, use each address with the oc exec command: USD oc exec -it stf-default-interconnect-7458fd4d69-bgzfb -- qdstat --address 2020-04-21 18:20:10.293258 UTC stf-default-interconnect-7458fd4d69-bgzfb Router Addresses class addr phs distrib pri local remote in out thru fallback ==================================================================================================================== mobile anycast/ceilometer/event.sample 0 balanced - 1 0 1,553 1,553 0 0 mobile collectd/notify 0 multicast - 1 0 10 10 0 0 mobile collectd/telemetry 0 multicast - 1 0 7,798,049 7,798,049 0 0 | [
"CephStorageExtraConfig: tripleo::profile::base::metrics::collectd::amqp_host: \"%{hiera('storage')}\" tripleo::profile::base::metrics::qdr::listener_addr: \"%{hiera('storage')}\" tripleo::profile::base::ceilometer::agent::notification::notifier_host_addr: \"%{hiera('storage')}\"",
"ComputeLeaf0ExtraConfig: › tripleo::profile::base::metrics::collectd::amqp_host: \"%{hiera('internal_api_subnet')}\" › tripleo::profile::base::metrics::qdr::listener_addr: \"%{hiera('internal_api_subnet')}\"",
"ComputeLeaf1ExtraConfig: › tripleo::profile::base::metrics::collectd::amqp_host: \"%{hiera('internal_api_leaf1')}\" › tripleo::profile::base::metrics::qdr::listener_addr: \"%{hiera('internal_api_leaf1')}\"",
"CephStorageLeaf0ExtraConfig: › tripleo::profile::base::metrics::collectd::amqp_host: \"%{hiera('storage_subnet')}\" › tripleo::profile::base::metrics::qdr::listener_addr: \"%{hiera('storage_subnet')}\"",
"oc get routes -ogo-template='{{ range .items }}{{printf \"%s\\n\" .spec.host }}{{ end }}' | grep \"\\-5671\" stf-default-interconnect-5671-service-telemetry.apps.infra.watch",
"parameter_defaults: EventPipelinePublishers: [] CeilometerQdrPublishEvents: true MetricsQdrConnectors: - host: stf-default-interconnect-5671-service-telemetry.apps.infra.watch port: 443 role: edge sslProfile: sslProfile verifyHostname: false",
"openstack overcloud deploy <other arguments> --templates /usr/share/openstack-tripleo-heat-templates --environment-file <...other-environment-files...> --environment-file /usr/share/openstack-tripleo-heat-templates/environments/metrics/ceilometer-write-qdr.yaml --environment-file /usr/share/openstack-tripleo-heat-templates/environments/enable-stf.yaml --environment-file /home/stack/stf-connectors.yaml",
"sudo podman container inspect --format '{{.State.Status}}' metrics_qdr running",
"sudo podman exec -it metrics_qdr cat /etc/qpid-dispatch/qdrouterd.conf listener { host: 172.17.1.44 port: 5666 authenticatePeer: no saslMechanisms: ANONYMOUS }",
"sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --connections Connections id host container role dir security authentication tenant ============================================================================================================================================================================================================================================================================================ 1 stf-default-interconnect-5671-service-telemetry.apps.infra.watch:443 stf-default-interconnect-7458fd4d69-bgzfb edge out TLSv1.2(DHE-RSA-AES256-GCM-SHA384) anonymous-user 12 172.17.1.44:60290 openstack.org/om/container/controller-0/ceilometer-agent-notification/25/5c02cee550f143ec9ea030db5cccba14 normal in no-security no-auth 16 172.17.1.44:36408 metrics normal in no-security anonymous-user 899 172.17.1.44:39500 10a2e99d-1b8a-4329-b48c-4335e5f75c84 normal in no-security no-auth",
"sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --links Router Links type dir conn id id peer class addr phs cap pri undel unsett deliv presett psdrop acc rej rel mod delay rate =========================================================================================================================================================== endpoint out 1 5 local _edge 250 0 0 0 2979926 2979924 0 0 0 2 0 0 0 endpoint in 1 6 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 7 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 8 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 9 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 10 250 0 0 0 911 911 0 0 0 0 0 911 0 endpoint in 1 11 250 0 0 0 0 911 0 0 0 0 0 0 0 endpoint out 12 32 local temp.lSY6Mcicol4J2Kp 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 16 41 250 0 0 0 2979924 2979924 0 0 0 0 0 0 0 endpoint in 912 1834 mobile USDmanagement 0 250 0 0 0 1 0 0 1 0 0 0 0 0 endpoint out 912 1835 local temp.9Ok2resI9tmt+CT 250 0 0 0 0 0 0 0 0 0 0 0 0",
"oc get pods -l application=stf-default-interconnect NAME READY STATUS RESTARTS AGE stf-default-interconnect-7458fd4d69-bgzfb 1/1 Running 0 6d21h",
"oc exec -it stf-default-interconnect-7458fd4d69-bgzfb -- qdstat --connections 2020-04-21 18:25:47.243852 UTC stf-default-interconnect-7458fd4d69-bgzfb Connections id host container role dir security authentication project last dlv uptime ====================================================================================================================================================================================================== 1 10.129.2.21:43062 rcv[stf-default-collectd-telemetry-smartgateway-79c967c8f7-kq4qv] normal in no-security anonymous-user 000:00:00:00 006:21:50:25 2 10.130.0.52:55754 rcv[stf-default-ceilometer-notification-smartgateway-6675df547mbjk5] normal in no-security anonymous-user 000:21:25:57 006:21:49:36 3 10.130.0.51:43110 rcv[stf-default-collectd-notification-smartgateway-698c87fbb7-f28v6] normal in no-security anonymous-user 000:21:36:53 006:21:49:09 22 10.128.0.1:51948 Router.ceph-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 23 10.128.0.1:51950 Router.compute-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 24 10.128.0.1:52082 Router.controller-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:00 000:22:08:34 27 127.0.0.1:42202 c2f541c1-4c97-4b37-a189-a396c08fb079 normal in no-security no-auth 000:00:00:00 000:00:00:00",
"oc exec -it stf-default-interconnect-7458fd4d69-bgzfb -- qdstat --address 2020-04-21 18:20:10.293258 UTC stf-default-interconnect-7458fd4d69-bgzfb Router Addresses class addr phs distrib pri local remote in out thru fallback ==================================================================================================================== mobile anycast/ceilometer/event.sample 0 balanced - 1 0 1,553 1,553 0 0 mobile collectd/notify 0 multicast - 1 0 10 10 0 0 mobile collectd/telemetry 0 multicast - 1 0 7,798,049 7,798,049 0 0"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/service_telemetry_framework_1.0/completing-the-stf-configuration_installing-the-core-components-of-stf |
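To spot-check the transport path described above without stepping through each command manually, the following sketch can be run on an overcloud node. It assumes the metrics_qdr container is deployed on that node, that qdrouterd.conf lives at the default path shown above, and that the STF route hostname matches your deployment; the config parsing is a best-effort illustration only.

#!/usr/bin/env bash
# Sketch: verify that the local AMQ Interconnect has an outbound edge
# connection to the STF route.
set -euo pipefail

STF_ROUTE="stf-default-interconnect-5671-service-telemetry.apps.infra.watch"   # replace with your route

# 1. The qdrouterd container must be running.
sudo podman container inspect --format '{{.State.Status}}' metrics_qdr

# 2. Read the listener address that qdrouterd binds to (first host: entry).
LISTENER="$(sudo podman exec metrics_qdr awk '/host:/ {print $2; exit}' /etc/qpid-dispatch/qdrouterd.conf)"
echo "Local listener: ${LISTENER}"

# 3. The connection list should include an outbound edge connection to the STF route.
if sudo podman exec metrics_qdr qdstat --bus="${LISTENER}:5666" --connections | grep -q "${STF_ROUTE}"; then
  echo "Outbound connection to ${STF_ROUTE} is established."
else
  echo "No connection to ${STF_ROUTE}; check MetricsQdrConnectors in stf-connectors.yaml." >&2
  exit 1
fi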
3.2. Text-Based Installer | 3.2. Text-Based Installer You can install Red Hat JBoss Data Virtualization using the text-based installer. In this mode, you run the installation steps without stepping through the graphical wizard. The GUI installer runs in text mode automatically if no display server is available. Prerequisites You must have already downloaded the Red Hat JBoss Data Virtualization JAR file from the Customer Portal . Procedure 3.2. Install JBoss Data Virtualization Open a terminal window and navigate to the location where the installer JAR file was downloaded. Start the installation process: Follow the installation prompts displayed on the terminal. You can either install with the default configuration or complete additional configuration steps. Finally, generate the automatic installation script. You can use this script to perform headless installation or identical installations across multiple instances. | [
"java -jar jboss-dv- VERSION -installer.jar -console"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/installing_jboss_data_virtualization_using_text_based_installer |
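A short sketch of how the console installation and the generated script are typically used follows. The automatic-installation file name and the replay syntax are assumptions based on the usual behaviour of IzPack-style Red Hat installers, so verify them against the installation guide for your exact version.

# Run the installer in text (console) mode from the download directory.
java -jar jboss-dv-VERSION-installer.jar -console

# When prompted at the end, save the automatic installation script, for
# example as auto-install.xml. Replaying it on another host performs an
# identical, unattended installation (assumed replay syntax):
java -jar jboss-dv-VERSION-installer.jar auto-install.xml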
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/installing_and_managing_red_hat_openstack_platform_with_director/proc_providing-feedback-on-red-hat-documentation |
Appendix A. Health messages for the Ceph File System | Appendix A. Health messages for the Ceph File System Cluster health checks The Ceph Monitor daemons generate health messages in response to certain states of the Metadata Server (MDS). Below is the list of the health messages and their explanation: mds rank(s) <ranks> have failed One or more MDS ranks are not currently assigned to any MDS daemon. The storage cluster will not recover until a suitable replacement daemon starts. mds rank(s) <ranks> are damaged One or more MDS ranks has encountered severe damage to its stored metadata, and cannot start again until the metadata is repaired. mds cluster is degraded One or more MDS ranks are not currently up and running, clients might pause metadata I/O until this situation is resolved. This includes ranks being failed or damaged, and includes ranks which are running on an MDS but are not in the active state yet - for example, ranks in the replay state. mds <names> are laggy The MDS daemons are supposed to send beacon messages to the monitor in an interval specified by the mds_beacon_interval option, the default is 4 seconds. If an MDS daemon fails to send a message within the time specified by the mds_beacon_grace option, the default is 15 seconds. The Ceph Monitor marks the MDS daemon as laggy and automatically replaces it with a standby daemon if any is available. Daemon-reported health checks The MDS daemons can identify a variety of unwanted conditions, and return them in the output of the ceph status command. These conditions have human readable messages, and also have a unique code starting MDS_HEALTH , which appears in JSON output. Below is the list of the daemon messages, their codes, and explanation. "Behind on trimming... " Code: MDS_HEALTH_TRIM CephFS maintains a metadata journal that is divided into log segments. The length of journal (in number of segments) is controlled by the mds_log_max_segments setting. When the number of segments exceeds that setting, the MDS starts writing back metadata so that it can remove (trim) the oldest segments. If this process is too slow, or a software bug is preventing trimming, then this health message appears. The threshold for this message to appear is for the number of segments to be double mds_log_max_segments . "Client <name> failing to respond to capability release" Code: MDS_HEALTH_CLIENT_LATE_RELEASE, MDS_HEALTH_CLIENT_LATE_RELEASE_MANY CephFS clients are issued capabilities by the MDS. The capabilities work like locks. Sometimes, for example when another client needs access, the MDS requests clients to release their capabilities. If the client is unresponsive, it might fail to do so promptly, or fail to do so at all. This message appears if a client has taken a longer time to comply than the time specified by the mds_revoke_cap_timeout option (default is 60 seconds). "Client <name> failing to respond to cache pressure" Code: MDS_HEALTH_CLIENT_RECALL, MDS_HEALTH_CLIENT_RECALL_MANY Clients maintain a metadata cache. Items, such as inodes, in the client cache are also pinned in the MDS cache. When the MDS needs to shrink its cache to stay within its own cache size limits, the MDS sends messages to clients to shrink their caches too. If a client is unresponsive, it can prevent the MDS from properly staying within its cache size, and the MDS might eventually run out of memory and terminate unexpectedly. 
This message appears if a client has taken more time to comply than the time specified by the mds_recall_state_timeout option (default is 60 seconds). See Understanding MDS Cache Size Limits for details. "Client name failing to advance its oldest client/flush tid" Code: MDS_HEALTH_CLIENT_OLDEST_TID, MDS_HEALTH_CLIENT_OLDEST_TID_MANY The CephFS protocol for communicating between clients and MDS servers uses a field called oldest tid to inform the MDS of which client requests are fully complete so that the MDS can forget about them. If an unresponsive client is failing to advance this field, the MDS might be prevented from properly cleaning up resources used by client requests. This message appears if a client has more requests than the number specified by the max_completed_requests option (default is 100000) that are complete on the MDS side but have not yet been accounted for in the client's oldest tid value. "Metadata damage detected" Code: MDS_HEALTH_DAMAGE Corrupt or missing metadata was encountered when reading from the metadata pool. This message indicates that the damage was sufficiently isolated for the MDS to continue operating, although client accesses to the damaged subtree return I/O errors. Use the damage ls administration socket command to view details on the damage. This message appears as soon as any damage is encountered. "MDS in read-only mode" Code: MDS_HEALTH_READ_ONLY The MDS has entered into read-only mode and will return the EROFS error codes to client operations that attempt to modify any metadata. The MDS enters into read-only mode: If it encounters a write error while writing to the metadata pool. If the administrator forces the MDS to enter into read-only mode by using the force_readonly administration socket command. "<N> slow requests are blocked" Code: MDS_HEALTH_SLOW_REQUEST One or more client requests have not been completed promptly, indicating that the MDS is either running very slowly, or encountering a bug. Use the ops administration socket command to list outstanding metadata operations. This message appears if any client requests have taken more time than the value specified by the mds_op_complaint_time option (default is 30 seconds). "Too many inodes in cache" Code: MDS_HEALTH_CACHE_OVERSIZED The MDS has failed to trim its cache to comply with the limit set by the administrator. If the MDS cache becomes too large, the daemon might exhaust available memory and terminate unexpectedly. By default, this message appears if the MDS cache size is 50% greater than its limit. Additional Resources See the Metadata Server cache size limits section in the Red Hat Ceph Storage File System Guide for details. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/file_system_guide/health-messages-for-the-ceph-file-system_fs |
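The health checks listed above can be inspected from the command line. The sketch below assumes the ceph CLI is configured on the host, that the admin-socket commands are run on the node that hosts the MDS daemon, and that mds.a is only an example daemon name.

# Show overall cluster health, including any MDS_HEALTH_* checks.
ceph status
ceph health detail

# Admin-socket commands referenced above, run on the MDS host.
ceph daemon mds.a damage ls    # details behind "Metadata damage detected"
ceph daemon mds.a ops          # outstanding operations behind "slow requests are blocked"

# Inspect the thresholds mentioned above (option names as given in the text).
ceph config get mds.a mds_log_max_segments
ceph config get mds.a mds_beacon_grace
ceph config get mds.a mds_recall_state_timeout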
Chapter 30. Load balancing on RHOSP | Chapter 30. Load balancing on RHOSP 30.1. Limitations of load balancer services OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) use Octavia to handle load balancer services. As a result of this choice, such clusters have a number of functional limitations. RHOSP Octavia has two supported providers: Amphora and OVN. These providers differ in terms of available features as well as implementation details. These distinctions affect load balancer services that are created on your cluster. 30.1.1. Local external traffic policies You can set the external traffic policy (ETP) parameter, .spec.externalTrafficPolicy , on a load balancer service to preserve the source IP address of incoming traffic when it reaches service endpoint pods. However, if your cluster uses the Amphora Octavia provider, the source IP of the traffic is replaced with the IP address of the Amphora VM. This behavior does not occur if your cluster uses the OVN Octavia provider. Having the ETP option set to Local requires that health monitors be created for the load balancer. Without health monitors, traffic can be routed to a node that doesn't have a functional endpoint, which causes the connection to drop. To force Cloud Provider OpenStack to create health monitors, you must set the value of the create-monitor option in the cloud provider configuration to true . In RHOSP 16.2, the OVN Octavia provider does not support health monitors. Therefore, setting the ETP to local is unsupported. In RHOSP 16.2, the Amphora Octavia provider does not support HTTP monitors on UDP pools. As a result, UDP load balancer services have UDP-CONNECT monitors created instead. Due to implementation details, this configuration only functions properly with the OVN-Kubernetes CNI plugin. When the OpenShift SDN CNI plugin is used, the UDP services alive nodes are detected unreliably. 30.1.2. Load balancer source ranges Use the .spec.loadBalancerSourceRanges property to restrict the traffic that can pass through the load balancer according to source IP. This property is supported for use with the Amphora Octavia provider only. If your cluster uses the OVN Octavia provider, the option is ignored and traffic is unrestricted. 30.2. Using the Octavia OVN load balancer provider driver with Kuryr SDN Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If your OpenShift Container Platform cluster uses Kuryr and was installed on a Red Hat OpenStack Platform (RHOSP) 13 cloud that was later upgraded to RHOSP 16, you can configure it to use the Octavia OVN provider driver. Important Kuryr replaces existing load balancers after you change provider drivers. This process results in some downtime. Prerequisites Install the RHOSP CLI, openstack . Install the OpenShift Container Platform CLI, oc . Verify that the Octavia OVN driver on RHOSP is enabled. Tip To view a list of available Octavia drivers, on a command line, enter openstack loadbalancer provider list . The ovn driver is displayed in the command's output. 
Procedure To change from the Octavia Amphora provider driver to Octavia OVN: Open the kuryr-config ConfigMap. On a command line, enter: USD oc -n openshift-kuryr edit cm kuryr-config In the ConfigMap, delete the line that contains kuryr-octavia-provider: default . For example: ... kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1 ... 1 Delete this line. The cluster will regenerate it with ovn as the value. Wait for the Cluster Network Operator to detect the modification and to redeploy the kuryr-controller and kuryr-cni pods. This process might take several minutes. Verify that the kuryr-config ConfigMap annotation is present with ovn as its value. On a command line, enter: USD oc -n openshift-kuryr edit cm kuryr-config The ovn provider value is displayed in the output: ... kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn ... Verify that RHOSP recreated its load balancers. On a command line, enter: USD openstack loadbalancer list | grep amphora A single Amphora load balancer is displayed. For example: a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora Search for ovn load balancers by entering: USD openstack loadbalancer list | grep ovn The remaining load balancers of the ovn type are displayed. For example: 2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn 30.3. Scaling clusters for application traffic by using Octavia OpenShift Container Platform clusters that run on Red Hat OpenStack Platform (RHOSP) can use the Octavia load balancing service to distribute traffic across multiple virtual machines (VMs) or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create. If your cluster uses Kuryr, the Cluster Network Operator created an internal Octavia load balancer at deployment. You can use this load balancer for application network scaling. If your cluster does not use Kuryr, you must create your own Octavia load balancer to use it for application network scaling. 30.3.1. Scaling clusters by using Octavia If you want to use multiple API load balancers, or if your cluster does not use Kuryr, create an Octavia load balancer and then configure your cluster to use it. Prerequisites Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment. Procedure From a command line, create an Octavia load balancer that uses the Amphora driver: USD openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet> You can use a name of your choice instead of API_OCP_CLUSTER . After the load balancer becomes active, create listeners: USD openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER Note To view the status of the load balancer, enter openstack loadbalancer list . 
Create a pool that uses the round robin algorithm and has session persistence enabled: USD openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS To ensure that control plane machines are available, create a health monitor: USD openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443 Add the control plane machines as members of the load balancer pool: USD for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done Optional: To reuse the cluster API floating IP address, unset it: USD openstack floating ip unset USDAPI_FIP Add either the unset API_FIP or a new address to the created load balancer VIP: USD openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP Your cluster now uses Octavia for load balancing. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 30.3.2. Scaling clusters that use Kuryr by using Octavia Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If your cluster uses Kuryr, associate the API floating IP address of your cluster with the pre-existing Octavia load balancer. Prerequisites Your OpenShift Container Platform cluster uses Kuryr. Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment. Procedure Optional: From a command line, to reuse the cluster API floating IP address, unset it: USD openstack floating ip unset USDAPI_FIP Add either the unset API_FIP or a new address to the created load balancer VIP: USD openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP Your cluster now uses Octavia for load balancing. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 30.4. Scaling for ingress traffic by using RHOSP Octavia Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. You can use Octavia load balancers to scale Ingress controllers on clusters that use Kuryr. Prerequisites Your OpenShift Container Platform cluster uses Kuryr. Octavia is available on your RHOSP deployment. 
Procedure To copy the current internal router service, on a command line, enter: USD oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml In the file external_router.yaml , change the values of metadata.name and spec.type to LoadBalancer . Example router file apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2 1 Ensure that this value is descriptive, like router-external-default . 2 Ensure that this value is LoadBalancer . Note You can delete timestamps and other information that is irrelevant to load balancing. From a command line, create a service from the external_router.yaml file: USD oc apply -f external_router.yaml Verify that the external IP address of the service is the same as the one that is associated with the load balancer: On a command line, retrieve the external IP address of the service: USD oc -n openshift-ingress get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h Retrieve the IP address of the load balancer: USD openstack loadbalancer list | grep router-external Example output | 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia | Verify that the addresses you retrieved in the steps are associated with each other in the floating IP list: USD openstack floating ip list | grep 172.30.235.33 Example output | e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c | You can now use the value of EXTERNAL-IP as the new Ingress address. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 30.5. Services for an external load balancer You can configure an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 30.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 30.2. 
Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 30.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 30.5.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, that runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on port 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. 
The front-end IP address, port 80 and port 443 are be reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable to all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... 
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. 
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private | [
"oc -n openshift-kuryr edit cm kuryr-config",
"kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1",
"oc -n openshift-kuryr edit cm kuryr-config",
"kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn",
"openstack loadbalancer list | grep amphora",
"a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora",
"openstack loadbalancer list | grep ovn",
"2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn",
"openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>",
"openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER",
"openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS",
"openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443",
"for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done",
"openstack floating ip unset USDAPI_FIP",
"openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP",
"openstack floating ip unset USDAPI_FIP",
"openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP",
"oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml",
"apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2",
"oc apply -f external_router.yaml",
"oc -n openshift-ingress get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h",
"openstack loadbalancer list | grep router-external",
"| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |",
"openstack floating ip list | grep 172.30.235.33",
"| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/load-balancing-openstack |
Chapter 5. Using Container Storage Interface (CSI) | Chapter 5. Using Container Storage Interface (CSI) 5.1. Configuring CSI volumes The Container Storage Interface (CSI) allows OpenShift Container Platform to consume storage from storage back ends that implement the CSI interface as persistent storage. Note OpenShift Container Platform 4.11 supports version 1.5.0 of the CSI specification . 5.1.1. CSI Architecture CSI drivers are typically shipped as container images. These containers are not aware of OpenShift Container Platform where they run. To use CSI-compatible storage back end in OpenShift Container Platform, the cluster administrator must deploy several components that serve as a bridge between OpenShift Container Platform and the storage driver. The following diagram provides a high-level overview about the components running in pods in the OpenShift Container Platform cluster. It is possible to run multiple CSI drivers for different storage back ends. Each driver needs its own external controllers deployment and daemon set with the driver and CSI registrar. 5.1.1.1. External CSI controllers External CSI Controllers is a deployment that deploys one or more pods with five containers: The snapshotter container watches VolumeSnapshot and VolumeSnapshotContent objects and is responsible for the creation and deletion of VolumeSnapshotContent object. The resizer container is a sidecar container that watches for PersistentVolumeClaim updates and triggers ControllerExpandVolume operations against a CSI endpoint if you request more storage on PersistentVolumeClaim object. An external CSI attacher container translates attach and detach calls from OpenShift Container Platform to respective ControllerPublish and ControllerUnpublish calls to the CSI driver. An external CSI provisioner container that translates provision and delete calls from OpenShift Container Platform to respective CreateVolume and DeleteVolume calls to the CSI driver. A CSI driver container The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod. Note attach , detach , provision , and delete operations typically require the CSI driver to use credentials to the storage backend. Run the CSI controller pods on infrastructure nodes so the credentials are never leaked to user processes, even in the event of a catastrophic security breach on a compute node. Note The external attacher must also run for CSI drivers that do not support third-party attach or detach operations. The external attacher will not issue any ControllerPublish or ControllerUnpublish operations to the CSI driver. However, it still must run to implement the necessary OpenShift Container Platform attachment API. 5.1.1.2. CSI driver daemon set The CSI driver daemon set runs a pod on every node that allows OpenShift Container Platform to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers: A CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node. A CSI driver. The CSI driver deployed on the node should have as few credentials to the storage back end as possible. 
OpenShift Container Platform will only use the node plugin set of CSI calls such as NodePublish / NodeUnpublish and NodeStage / NodeUnstage , if these calls are implemented. 5.1.2. CSI drivers supported by OpenShift Container Platform OpenShift Container Platform installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plugins. To create CSI-provisioned persistent volumes that mount to these supported storage assets, OpenShift Container Platform installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator. The following table describes the CSI drivers that are installed with OpenShift Container Platform and which CSI features they support, such as volume snapshots, cloning, and resize. Table 5.1. Supported CSI drivers and features in OpenShift Container Platform CSI driver CSI volume snapshots CSI cloning CSI resize AliCloud Disk ✅ - ✅ AWS EBS ✅ - ✅ AWS EFS - - - Google Cloud Platform (GCP) persistent disk (PD) ✅ ✅ ✅ IBM VPC Block - - ✅ Microsoft Azure Disk ✅ ✅ ✅ Microsoft Azure Stack Hub ✅ ✅ ✅ Microsoft Azure File - - ✅ OpenStack Cinder ✅ ✅ ✅ OpenShift Data Foundation ✅ ✅ ✅ OpenStack Manila ✅ - - Red Hat Virtualization (oVirt) - - ✅ VMware vSphere ✅ [1] - ✅ [2] 1. Requires vSphere version 7.0 Update 3 or later for both vCenter Server and ESXi. Does not support fileshare volumes. 2. Offline volume expansion: minimum required vSphere version is 6.7 Update 3 P06 Online volume expansion: minimum required vSphere version is 7.0 Update 2. Important If your CSI driver is not listed in the preceding table, you must follow the installation instructions provided by your CSI storage vendor to use their supported CSI features. 5.1.3. Dynamic provisioning Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage back end. The provider of the CSI driver should document how to create a storage class in OpenShift Container Platform and the parameters available for configuration. The created storage class can be configured to enable dynamic provisioning. Procedure Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver. # oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: "true" provisioner: <provisioner-name> 2 parameters: EOF 1 The name of the storage class that will be created. 2 The name of the CSI driver that has been installed 5.1.4. Example using the CSI driver The following example installs a default MySQL template without any changes to the template. Prerequisites The CSI driver has been deployed. A storage class has been created for dynamic provisioning. Procedure Create the MySQL template: # oc new-app mysql-persistent Example output --> Deploying template "openshift/mysql-persistent" to project default ... # oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s 5.2. CSI inline ephemeral volumes Container Storage Interface (CSI) inline ephemeral volumes allow you to define a Pod spec that creates inline ephemeral volumes when a pod is deployed and delete them when a pod is destroyed. 
This feature is only available with supported Container Storage Interface (CSI) drivers. Important CSI inline ephemeral volumes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.2.1. Overview of CSI inline ephemeral volumes Traditionally, volumes that are backed by Container Storage Interface (CSI) drivers can only be used with a PersistentVolume and PersistentVolumeClaim object combination. This feature allows you to specify CSI volumes directly in the Pod specification, rather than in a PersistentVolume object. Inline volumes are ephemeral and do not persist across pod restarts. 5.2.1.1. Support limitations By default, OpenShift Container Platform supports CSI inline ephemeral volumes with these limitations: Support is only available for CSI drivers. In-tree and FlexVolumes are not supported. The Shared Resource CSI Driver supports inline ephemeral volumes as a Technology Preview feature. Community or storage vendors provide other CSI drivers that support these volumes. Follow the installation instructions provided by the CSI driver provider. CSI drivers might not have implemented the inline volume functionality, including Ephemeral capacity. For details, see the CSI driver documentation. Important Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.2.2. Embedding a CSI inline ephemeral volume in the pod specification You can embed a CSI inline ephemeral volume in the Pod specification in OpenShift Container Platform. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods so that the CSI driver handles all phases of volume operations as pods are created and destroyed. Procedure Create the Pod object definition and save it to a file. Embed the CSI inline ephemeral volume in the file. my-csi-app.yaml kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: "/data" name: my-csi-inline-vol command: [ "sleep", "1000000" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar 1 The name of the volume that is used by pods. Create the object definition file that you saved in the step. USD oc create -f my-csi-app.yaml 5.3. Shared Resource CSI Driver Operator As a cluster administrator, you can use the Shared Resource CSI Driver in OpenShift Container Platform to provision inline ephemeral volumes that contain the contents of Secret or ConfigMap objects. 
This way, pods and other Kubernetes types that expose volume mounts, and OpenShift Container Platform Builds can securely use the contents of those objects across potentially any namespace in the cluster. To accomplish this, there are currently two types of shared resources: a SharedSecret custom resource for Secret objects, and a SharedConfigMap custom resource for ConfigMap objects. Important The Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note To enable the Shared Resource CSI Driver, you must enable features using feature gates 5.3.1. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.3.2. Sharing secrets across namespaces To share a secret across namespaces in a cluster, you create a SharedSecret custom resource (CR) instance for the Secret object that you want to share. Prerequisites You must have permission to perform the following actions: Create instances of the sharedsecrets.sharedresource.openshift.io custom resource definition (CRD) at a cluster-scoped level. Manage roles and role bindings across the namespaces in the cluster to control which users can get, list, and watch those instances. Manage roles and role bindings to control whether the service account specified by a pod can mount a Container Storage Interface (CSI) volume that references the SharedSecret CR instance you want to use. Access the namespaces that contain the Secrets you want to share. Procedure Create a SharedSecret CR instance for the Secret object you want to share across namespaces in the cluster: USD oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedSecret metadata: name: my-share spec: secretRef: name: <name of secret> namespace: <namespace of secret> EOF 5.3.3. Using a SharedSecret instance in a pod To access a SharedSecret custom resource (CR) instance from a pod, you grant a given service account RBAC permissions to use that SharedSecret CR instance. Prerequisites You have created a SharedSecret CR instance for the secret you want to share across namespaces in the cluster. You must have permission to perform the following actions Create build configs and start builds. Discover which SharedSecret CR instances are available by entering the oc get sharedsecrets command and getting a non-empty list back. Determine if the builder service accounts available to you in your namespace are allowed to use the given SharedSecret CR instance. That is, you can run oc adm policy who-can use <identifier of specific SharedSecret> to see if the builder service account in your namespace is listed. 
Note If neither of the last two prerequisites in this list are met, create, or ask someone to create, the necessary role-based access control (RBAC) so that you can discover SharedSecret CR instances and enable service accounts to use SharedSecret CR instances. Procedure Grant a given service account RBAC permissions to use the SharedSecret CR instance in its pod by using oc apply with YAML content: Note Currently, kubectl and oc have hard-coded special case logic restricting the use verb to roles centered around pod security. Therefore, you cannot use oc create role ... to create the role needed for consuming SharedSecret CR instances. USD oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF Create the RoleBinding associated with the role by using the oc command: USD oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder Access the SharedSecret CR instance from a pod: USD oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default # containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedSecret: my-share EOF 5.3.4. Sharing a config map across namespaces To share a config map across namespaces in a cluster, you create a SharedConfigMap custom resource (CR) instance for that config map. Prerequisites You must have permission to perform the following actions: Create instances of the sharedconfigmaps.sharedresource.openshift.io custom resource definition (CRD) at a cluster-scoped level. Manage roles and role bindings across the namespaces in the cluster to control which users can get, list, and watch those instances. Manage roles and role bindings across the namespaces in the cluster to control which service accounts in pods that mount your Container Storage Interface (CSI) volume can use those instances. Access the namespaces that contain the Secrets you want to share. Procedure Create a SharedConfigMap CR instance for the config map that you want to share across namespaces in the cluster: USD oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedConfigMap metadata: name: my-share spec: configMapRef: name: <name of configmap> namespace: <namespace of configmap> EOF 5.3.5. Using a SharedConfigMap instance in a pod steps To access a SharedConfigMap custom resource (CR) instance from a pod, you grant a given service account RBAC permissions to use that SharedConfigMap CR instance. Prerequisites You have created a SharedConfigMap CR instance for the config map that you want to share across namespaces in the cluster. You must have permission to perform the following actions: Create build configs and start builds. Discover which SharedConfigMap CR instances are available by entering the oc get sharedconfigmaps command and getting a non-empty list back. Determine if the builder service accounts available to you in your namespace are allowed to use the given SharedSecret CR instance. That is, you can run oc adm policy who-can use <identifier of specific SharedSecret> to see if the builder service account in your namespace is listed. 
Note If neither of the last two prerequisites in this list are met, create, or ask someone to create, the necessary role-based access control (RBAC) so that you can discover SharedConfigMap CR instances and enable service accounts to use SharedConfigMap CR instances. Procedure Grant a given service account RBAC permissions to use the SharedConfigMap CR instance in its pod by using oc apply with YAML content. Note Currently, kubectl and oc have hard-coded special case logic restricting the use verb to roles centered around pod security. Therefore, you cannot use oc create role ... to create the role needed for consuming a SharedConfigMap CR instance. USD oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedconfigmaps resourceNames: - my-share verbs: - use EOF Create the RoleBinding associated with the role by using the oc command: oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder Access the SharedConfigMap CR instance from a pod: USD oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default # containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedConfigMap: my-share EOF 5.3.6. Additional support limitations for the Shared Resource CSI Driver The Shared Resource CSI Driver has the following noteworthy limitations: The driver is subject to the limitations of Container Storage Interface (CSI) inline ephemeral volumes. The value of the readOnly field must be true . Otherwise, on volume provisioning during pod startup, the driver returns an error to the kubelet. This limitation is in keeping with proposed best practices for the upstream Kubernetes CSI Driver to apply SELinux labels to associated volumes. The driver ignores the FSType field because it only supports tmpfs volumes. The driver ignores the NodePublishSecretRef field. Instead, it uses SubjectAccessReviews with the use verb to evaluate whether a pod can obtain a volume that contains SharedSecret or SharedConfigMap custom resource (CR) instances. 5.3.7. Additional details about VolumeAttributes on shared resource pod volumes The following attributes affect shared resource pod volumes in various ways: The refreshResource attribute in the volumeAttributes properties. The refreshResources attribute in the Shared Resource CSI Driver configuration. The sharedSecret and sharedConfigMap attributes in the volumeAttributes properties. 5.3.7.1. The refreshResource attribute The Shared Resource CSI Driver honors the refreshResource attribute in volumeAttributes properties of the volume. This attribute controls whether updates to the contents of the underlying Secret or ConfigMap object are copied to the volume after the volume is initially provisioned as part of pod startup. The default value of refreshResource is true , which means that the contents are updated. Important If the Shared Resource CSI Driver configuration has disabled the refreshing of both the shared SharedSecret and SharedConfigMap custom resource (CR) instances, then the refreshResource attribute in the volumeAttribute properties has no effect. 
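For example, a volume definition like the following (a minimal sketch that reuses the my-share SharedConfigMap and the csi.sharedresource.openshift.io driver from the earlier examples) opts a single mount out of refresh by setting refreshResource to "false" in its volumeAttributes:
volumes:
  - name: my-csi-volume
    csi:
      readOnly: true
      driver: csi.sharedresource.openshift.io
      volumeAttributes:
        sharedConfigMap: my-share
        refreshResource: "false"
Only this mount skips content updates; volumes that omit the attribute keep the default value of true and continue to receive updates.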
The intent of this attribute is to disable refresh for specific volume mounts when refresh is generally allowed. 5.3.7.2. The refreshResources attribute You can use a global switch to enable or disable refreshing of shared resources. This switch is the refreshResources attribute in the csi-driver-shared-resource-config config map for the Shared Resource CSI Driver, which you can find in the openshift-cluster-csi-drivers namespace. If you set this refreshResources attribute to false , none of the Secret or ConfigMap object-related content stored in the volume is updated after the initial provisioning of the volume. Important Using this Shared Resource CSI Driver configuration to disable refreshing affects all the cluster's volume mounts that use the Shared Resource CSI Driver, regardless of the refreshResource attribute in the volumeAttributes properties of any of those volumes. 5.3.7.3. Validation of volumeAttributes before provisioning a shared resource volume for a pod In the volumeAttributes of a single volume, you must set either a sharedSecret or a sharedConfigMap attribute to the value of a SharedSecret or a SharedConfigMap CS instance. Otherwise, when the volume is provisioned during pod startup, a validation checks the volumeAttributes of that volume and returns an error to the kubelet under the following conditions: Both sharedSecret and sharedConfigMap attributes have specified values. Neither sharedSecret nor sharedConfigMap attributes have specified values. The value of the sharedSecret or sharedConfigMap attribute does not correspond to the name of a SharedSecret or SharedConfigMap CR instance on the cluster. 5.3.8. Integration between shared resources, Insights Operator, and OpenShift Container Platform Builds Integration between shared resources, Insights Operator, and OpenShift Container Platform Builds makes using Red Hat subscriptions (RHEL entitlements) easier in OpenShift Container Platform Builds. Previously, in OpenShift Container Platform 4.9.x and earlier, you manually imported your credentials and copied them to each project or namespace where you were running builds. Now, in OpenShift Container Platform 4.10 and later, OpenShift Container Platform Builds can use Red Hat subscriptions (RHEL entitlements) by referencing shared resources and the simple content access feature provided by Insights Operator: The simple content access feature imports your subscription credentials to a well-known Secret object. See the links in the following "Additional resources" section. The cluster administrator creates a SharedSecret custom resource (CR) instance around that Secret object and grants permission to particular projects or namespaces. In particular, the cluster administrator gives the builder service account permission to use that SharedSecret CR instance. Builds that run within those projects or namespaces can mount a CSI Volume that references the SharedSecret CR instance and its entitled RHEL content. Additional resources Importing simple content access certificates with Insights Operator Adding subscription entitlements as a build secret 5.4. CSI volume snapshots This document describes how to use volume snapshots with supported Container Storage Interface (CSI) drivers to help protect against data loss in OpenShift Container Platform. Familiarity with persistent volumes is suggested. 5.4.1. Overview of CSI volume snapshots A snapshot represents the state of the storage volume in a cluster at a particular point in time. 
Volume snapshots can be used to provision a new volume. OpenShift Container Platform supports Container Storage Interface (CSI) volume snapshots by default. However, a specific CSI driver is required. With CSI volume snapshots, a cluster administrator can: Deploy a third-party CSI driver that supports snapshots. Create a new persistent volume claim (PVC) from an existing volume snapshot. Take a snapshot of an existing PVC. Restore a snapshot as a different PVC. Delete an existing volume snapshot. With CSI volume snapshots, an app developer can: Use volume snapshots as building blocks for developing application- or cluster-level storage backup solutions. Rapidly rollback to a development version. Use storage more efficiently by not having to make a full copy each time. Be aware of the following when using volume snapshots: Support is only available for CSI drivers. In-tree and FlexVolumes are not supported. OpenShift Container Platform only ships with select CSI drivers. For CSI drivers that are not provided by an OpenShift Container Platform Driver Operator, it is recommended to use the CSI drivers provided by community or storage vendors . Follow the installation instructions furnished by the CSI driver provider. CSI drivers may or may not have implemented the volume snapshot functionality. CSI drivers that have provided support for volume snapshots will likely use the csi-external-snapshotter sidecar. See documentation provided by the CSI driver for details. 5.4.2. CSI snapshot controller and sidecar OpenShift Container Platform provides a snapshot controller that is deployed into the control plane. In addition, your CSI driver vendor provides the CSI snapshot sidecar as a helper container that is installed during the CSI driver installation. The CSI snapshot controller and sidecar provide volume snapshotting through the OpenShift Container Platform API. These external components run in the cluster. The external controller is deployed by the CSI Snapshot Controller Operator. 5.4.2.1. External controller The CSI snapshot controller binds VolumeSnapshot and VolumeSnapshotContent objects. The controller manages dynamic provisioning by creating and deleting VolumeSnapshotContent objects. 5.4.2.2. External sidecar Your CSI driver vendor provides the csi-external-snapshotter sidecar. This is a separate helper container that is deployed with the CSI driver. The sidecar manages snapshots by triggering CreateSnapshot and DeleteSnapshot operations. Follow the installation instructions provided by your vendor. 5.4.3. About the CSI Snapshot Controller Operator The CSI Snapshot Controller Operator runs in the openshift-cluster-storage-operator namespace. It is installed by the Cluster Version Operator (CVO) in all clusters by default. The CSI Snapshot Controller Operator installs the CSI snapshot controller, which runs in the openshift-cluster-storage-operator namespace. 5.4.3.1. Volume snapshot CRDs During OpenShift Container Platform installation, the CSI Snapshot Controller Operator creates the following snapshot custom resource definitions (CRDs) in the snapshot.storage.k8s.io/v1 API group: VolumeSnapshotContent A snapshot taken of a volume in the cluster that has been provisioned by a cluster administrator. Similar to the PersistentVolume object, the VolumeSnapshotContent CRD is a cluster resource that points to a real snapshot in the storage back end. For manually pre-provisioned snapshots, a cluster administrator creates a number of VolumeSnapshotContent CRDs. 
These carry the details of the real volume snapshot in the storage system. The VolumeSnapshotContent CRD is not namespaced and is for use by a cluster administrator. VolumeSnapshot Similar to the PersistentVolumeClaim object, the VolumeSnapshot CRD defines a developer request for a snapshot. The CSI Snapshot Controller Operator runs the CSI snapshot controller, which handles the binding of a VolumeSnapshot CRD with an appropriate VolumeSnapshotContent CRD. The binding is a one-to-one mapping. The VolumeSnapshot CRD is namespaced. A developer uses the CRD as a distinct request for a snapshot. VolumeSnapshotClass Allows a cluster administrator to specify different attributes belonging to a VolumeSnapshot object. These attributes may differ among snapshots taken of the same volume on the storage system, in which case they would not be expressed by using the same storage class of a persistent volume claim. The VolumeSnapshotClass CRD defines the parameters for the csi-external-snapshotter sidecar to use when creating a snapshot. This allows the storage back end to know what kind of snapshot to dynamically create if multiple options are supported. Dynamically provisioned snapshots use the VolumeSnapshotClass CRD to specify storage-provider-specific parameters to use when creating a snapshot. The VolumeSnapshotContentClass CRD is not namespaced and is for use by a cluster administrator to enable global configuration options for their storage back end. 5.4.4. Volume snapshot provisioning There are two ways to provision snapshots: dynamically and manually. 5.4.4.1. Dynamic provisioning Instead of using a preexisting snapshot, you can request that a snapshot be taken dynamically from a persistent volume claim. Parameters are specified using a VolumeSnapshotClass CRD. 5.4.4.2. Manual provisioning As a cluster administrator, you can manually pre-provision a number of VolumeSnapshotContent objects. These carry the real volume snapshot details available to cluster users. 5.4.5. Creating a volume snapshot When you create a VolumeSnapshot object, OpenShift Container Platform creates a volume snapshot. Prerequisites Logged in to a running OpenShift Container Platform cluster. A PVC created using a CSI driver that supports VolumeSnapshot objects. A storage class to provision the storage back end. No pods are using the persistent volume claim (PVC) that you want to take a snapshot of. Note Do not create a volume snapshot of a PVC if a pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Be sure to first tear down a running pod to ensure consistent snapshots. Procedure To dynamically create a volume snapshot: Create a file with the VolumeSnapshotClass object described by the following YAML: volumesnapshotclass.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete 1 The name of the CSI driver that is used to create snapshots of this VolumeSnapshotClass object. The name must be the same as the Provisioner field of the storage class that is responsible for the PVC that is being snapshotted. Note Depending on the driver that you used to configure persistent storage, additional parameters might be required. You can also use an existing VolumeSnapshotClass object. 
Create the object you saved in the step by entering the following command: USD oc create -f volumesnapshotclass.yaml Create a VolumeSnapshot object: volumesnapshot-dynamic.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2 1 The request for a particular class by the volume snapshot. If the volumeSnapshotClassName setting is absent and there is a default volume snapshot class, a snapshot is created with the default volume snapshot class name. But if the field is absent and no default volume snapshot class exists, then no snapshot is created. 2 The name of the PersistentVolumeClaim object bound to a persistent volume. This defines what you want to create a snapshot of. Required for dynamically provisioning a snapshot. Create the object you saved in the step by entering the following command: USD oc create -f volumesnapshot-dynamic.yaml To manually provision a snapshot: Provide a value for the volumeSnapshotContentName parameter as the source for the snapshot, in addition to defining volume snapshot class as shown above. volumesnapshot-manual.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1 1 The volumeSnapshotContentName parameter is required for pre-provisioned snapshots. Create the object you saved in the step by entering the following command: USD oc create -f volumesnapshot-manual.yaml Verification After the snapshot has been created in the cluster, additional details about the snapshot are available. To display details about the volume snapshot that was created, enter the following command: USD oc describe volumesnapshot mysnap The following example displays details about the mysnap volume snapshot: volumesnapshot.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: "2020-01-29T12:24:30Z" 2 readyToUse: true 3 restoreSize: 500Mi 1 The pointer to the actual storage content that was created by the controller. 2 The time when the snapshot was created. The snapshot contains the volume content that was available at this indicated time. 3 If the value is set to true , the snapshot can be used to restore as a new PVC. If the value is set to false , the snapshot was created. However, the storage back end needs to perform additional tasks to make the snapshot usable so that it can be restored as a new volume. For example, Amazon Elastic Block Store data might be moved to a different, less expensive location, which can take several minutes. To verify that the volume snapshot was created, enter the following command: USD oc get volumesnapshotcontent The pointer to the actual content is displayed. If the boundVolumeSnapshotContentName field is populated, a VolumeSnapshotContent object exists and the snapshot was created. To verify that the snapshot is ready, confirm that the VolumeSnapshot object has readyToUse: true . 5.4.6. Deleting a volume snapshot You can configure how OpenShift Container Platform deletes volume snapshots. 
Procedure Specify the deletion policy that you require in the VolumeSnapshotClass object, as shown in the following example: volumesnapshotclass.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1 1 When deleting the volume snapshot, if the Delete value is set, the underlying snapshot is deleted along with the VolumeSnapshotContent object. If the Retain value is set, both the underlying snapshot and VolumeSnapshotContent object remain. If the Retain value is set and the VolumeSnapshot object is deleted without deleting the corresponding VolumeSnapshotContent object, the content remains. The snapshot itself is also retained in the storage back end. Delete the volume snapshot by entering the following command: USD oc delete volumesnapshot <volumesnapshot_name> Example output volumesnapshot.snapshot.storage.k8s.io "mysnapshot" deleted If the deletion policy is set to Retain , delete the volume snapshot content by entering the following command: USD oc delete volumesnapshotcontent <volumesnapshotcontent_name> Optional: If the VolumeSnapshot object is not successfully deleted, enter the following command to remove any finalizers for the leftover resource so that the delete operation can continue: Important Only remove the finalizers if you are confident that there are no existing references from either persistent volume claims or volume snapshot contents to the VolumeSnapshot object. Even with the --force option, the delete operation does not delete snapshot objects until all finalizers are removed. USD oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{"metadata": {"finalizers":null}}' Example output volumesnapshotclass.snapshot.storage.k8s.io "csi-ocs-rbd-snapclass" deleted The finalizers are removed and the volume snapshot is deleted. 5.4.7. Restoring a volume snapshot The VolumeSnapshot CRD content can be used to restore the existing volume to a previous state. After your VolumeSnapshot CRD is bound and the readyToUse value is set to true , you can use that resource to provision a new volume that is pre-populated with data from the snapshot. Prerequisites Logged in to a running OpenShift Container Platform cluster. A persistent volume claim (PVC) created using a Container Storage Interface (CSI) driver that supports volume snapshots. A storage class to provision the storage back end. A volume snapshot has been created and is ready to use. Procedure Specify a VolumeSnapshot data source on a PVC as shown in the following: pvc-restore.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi 1 Name of the VolumeSnapshot object representing the snapshot to use as source. 2 Must be set to the VolumeSnapshot value. 3 Must be set to the snapshot.storage.k8s.io value. Create a PVC by entering the following command: USD oc create -f pvc-restore.yaml Verify that the restored PVC has been created by entering the following command: USD oc get pvc A new PVC such as myclaim-restore is displayed. 5.5. CSI volume cloning Volume cloning duplicates an existing persistent volume to help protect against data loss in OpenShift Container Platform. This feature is only available with supported Container Storage Interface (CSI) drivers.
You should be familiar with persistent volumes before you provision a CSI volume clone. 5.5.1. Overview of CSI volume cloning A Container Storage Interface (CSI) volume clone is a duplicate of an existing persistent volume at a particular point in time. Volume cloning is similar to volume snapshots, although it is more efficient. For example, a cluster administrator can duplicate a cluster volume by creating another instance of the existing cluster volume. Cloning creates an exact duplicate of the specified volume on the back-end device, rather than creating a new empty volume. After dynamic provisioning, you can use a volume clone just as you would use any standard volume. No new API objects are required for cloning. The existing dataSource field in the PersistentVolumeClaim object is expanded so that it can accept the name of an existing PersistentVolumeClaim in the same namespace. 5.5.1.1. Support limitations By default, OpenShift Container Platform supports CSI volume cloning with these limitations: The destination persistent volume claim (PVC) must exist in the same namespace as the source PVC. Cloning is supported with a different Storage Class. Destination volume can be the same for a different storage class as the source. You can use the default storage class and omit storageClassName in the spec . Support is only available for CSI drivers. In-tree and FlexVolumes are not supported. CSI drivers might not have implemented the volume cloning functionality. For details, see the CSI driver documentation. 5.5.2. Provisioning a CSI volume clone When you create a cloned persistent volume claim (PVC) API object, you trigger the provisioning of a CSI volume clone. The clone pre-populates with the contents of another PVC, adhering to the same rules as any other persistent volume. The one exception is that you must add a dataSource that references an existing PVC in the same namespace. Prerequisites You are logged in to a running OpenShift Container Platform cluster. Your PVC is created using a CSI driver that supports volume cloning. Your storage back end is configured for dynamic provisioning. Cloning support is not available for static provisioners. Procedure To clone a PVC from an existing PVC: Create and save a file with the PersistentVolumeClaim object described by the following YAML: pvc-clone.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1 1 The name of the storage class that provisions the storage back end. The default storage class can be used and storageClassName can be omitted in the spec. Create the object you saved in the step by running the following command: USD oc create -f pvc-clone.yaml A new PVC pvc-1-clone is created. Verify that the volume clone was created and is ready by running the following command: USD oc get pvc pvc-1-clone The pvc-1-clone shows that it is Bound . You are now ready to use the newly cloned PVC to configure a pod. Create and save a file with the Pod object described by the YAML. For example: kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: "/var/www/html" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1 1 The cloned PVC created during the CSI volume cloning operation. 
The created Pod object is now ready to consume, clone, snapshot, or delete your cloned PVC independently of its original dataSource PVC. 5.6. CSI automatic migration In-tree storage drivers that are traditionally shipped with OpenShift Container Platform are being deprecated and replaced by their equivalent Container Storage Interface (CSI) drivers. OpenShift Container Platform provides automatic migration for certain supported in-tree volume plugins to their equivalent CSI drivers. 5.6.1. Overview Volumes that are provisioned by using in-tree storage plugins, and that are supported by this feature, are migrated to their counterpart Container Storage Interface (CSI) drivers. This process does not perform any data migration; OpenShift Container Platform only translates the persistent volume object in memory. As a result, the translated persistent volume object is not stored on disk, nor is its contents changed. The following in-tree to CSI drivers are supported: Table 5.2. CSI automatic migration feature supported in-tree/CSI drivers In-tree/CSI drivers Support level CSI auto migration enabled automatically? Azure Disk OpenStack Cinder Generally available (GA) Yes. For more information, see " Automatic migration of in-tree volumes to CSI ". Amazon Web Services (AWS) Elastic Block Storage (EBS) Azure File Google Compute Engine Persistent Disk (in-tree) and Google Cloud Platform Persistent Disk (CSI) VMware vSphere Technology Preview (TP) No. To enable, see " Manually enabling CSI automatic migration ". CSI automatic migration should be seamless. This feature does not change how you use all existing API objects: for example, PersistentVolumes , PersistentVolumeClaims , and StorageClasses . Enabling CSI automatic migration for in-tree persistent volumes (PVs) or persistent volume claims (PVCs) does not enable any new CSI driver features, such as snapshots or expansion, if the original in-tree storage plugin did not support it. Additional resources Automatic migration of in-tree volumes to CSI Manually enabling CSI automatic migration 5.6.2. Automatic migration of in-tree volumes to CSI OpenShift Container Platform supports automatic and seamless migration for the following in-tree volume types to their Container Storage Interface (CSI) driver counterpart: Azure Disk OpenStack Cinder CSI migration for these volume types is considered generally available (GA), and requires no manual intervention. For new OpenShift Container Platform 4.11, and later, installations, the default storage class is the CSI storage class. All volumes provisioned using this storage class are CSI persistent volumes (PVs). For clusters upgraded from 4.10, and earlier, to 4.11, and later, the CSI storage class is created, and is set as the default if no default storage class was set prior to the upgrade. In the very unlikely case that there is a storage class with the same name, the existing storage class remains unchanged. Any existing in-tree storage classes remain, and might be necessary for certain features, such as volume expansion to work for existing in-tree PVs. While storage class referencing to the in-tree storage plugin will continue working, we recommend that you switch the default storage class to the CSI storage class. 5.6.3. 
Manually enabling CSI automatic migration If you want to test Container Storage Interface (CSI) migration in development or staging OpenShift Container Platform clusters, you must manually enable in-tree to CSI migration for the following in-tree volume types: AWS Elastic Block Storage (EBS) Google Compute Engine Persistent Disk (GCE-PD) VMware vSphere Disk Azure File Important CSI automatic migration for the preceding in-tree volume plugins and CSI driver pairs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After migration, the default storage class remains the in-tree storage class. CSI automatic migration will be enabled by default for all storage in-tree plugins in a future OpenShift Container Platform release, so it is highly recommended that you test it now and report any issues. Note Enabling CSI automatic migration drains, and then restarts, all nodes in the cluster in sequence. This might take some time. Procedure Enable feature gates (see Nodes Working with clusters Enabling features using feature gates ). Important After turning on Technology Preview features using feature gates, they cannot be turned off. As a result, cluster upgrades are prevented. The following configuration example enables CSI automatic migration for all CSI drivers supported by this feature that are currently in Technology Preview (TP) status: apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1 ... 1 Enables automatic migration for AWS EBS, GCP, Azure File, and VMware vSphere. You can specify CSI automatic migration for a selected CSI driver by setting CustomNoUpgrade featureSet and for featuregates to one of the following: CSIMigrationAWS CSIMigrationAzureFile CSIMigrationGCE CSIMigrationvSphere The following configuration example enables automatic migration to the AWS EBS CSI driver only: apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: CustomNoUpgrade customNoUpgrade: enabled: - CSIMigrationAWS 1 ... 1 Enables automatic migration for AWS EBS only. Additional resources Enabling features using feature gates 5.7. AliCloud Disk CSI Driver Operator 5.7.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Alibaba AliCloud Disk Storage. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to AliCloud Disk storage assets, OpenShift Container Platform installs the AliCloud Disk CSI Driver Operator and the AliCloud Disk CSI driver, by default, in the openshift-cluster-csi-drivers namespace. The AliCloud Disk CSI Driver Operator provides a storage class ( alicloud-disk ) that you can use to create persistent volume claims (PVCs). The AliCloud Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. 
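For example, a claim similar to the following (a minimal sketch in which the claim name and requested size are illustrative values, while alicloud-disk is the storage class provided by the Operator, as described above) is enough for the driver to provision a disk on demand:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-alicloud-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: alicloud-disk
  resources:
    requests:
      storage: 20Gi
Creating the claim and mounting it in a pod then follows the same pattern shown for the other CSI drivers in this chapter.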
The AliCloud Disk CSI driver enables you to create and mount AliCloud Disk PVs. 5.7.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Additional resources Configuring CSI volumes 5.8. AWS Elastic Block Store CSI Driver Operator 5.8.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic Block Store (EBS). Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to AWS EBS storage assets, OpenShift Container Platform installs the AWS EBS CSI Driver Operator and the AWS EBS CSI driver by default in the openshift-cluster-csi-drivers namespace. The AWS EBS CSI Driver Operator provides a StorageClass by default that you can use to create PVCs. You also have the option to create the AWS EBS StorageClass as described in Persistent storage using AWS Elastic Block Store . The AWS EBS CSI driver enables you to create and mount AWS EBS PVs. Note If you installed the AWS EBS CSI Operator and driver on an OpenShift Container Platform 4.5 cluster, you must uninstall the 4.5 Operator and driver before you update to OpenShift Container Platform 4.11. 5.8.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision AWS EBS storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. For information about dynamically provisioning AWS EBS persistent volumes in OpenShift Container Platform, see Persistent storage using AWS Elastic Block Store . Additional resources Persistent storage using AWS Elastic Block Store Configuring CSI volumes 5.9. AWS Elastic File Service CSI Driver Operator 5.9.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File Service (EFS). Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. 
After installing the AWS EFS CSI Driver Operator, OpenShift Container Platform installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the openshift-cluster-csi-drivers namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets. The AWS EFS CSI Driver Operator , after being installed, does not create a storage class by default to use to create persistent volume claims (PVCs). However, you can manually create the AWS EFS StorageClass . The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand. This eliminates the need for cluster administrators to pre-provision storage. The AWS EFS CSI driver enables you to create and mount AWS EFS PVs. Note AWS EFS only supports regional volumes, not zonal volumes. 5.9.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.9.3. Installing the AWS EFS CSI Driver Operator The AWS EFS CSI Driver Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure the AWS EFS CSI Driver Operator in your cluster. Prerequisites Access to the OpenShift Container Platform web console. Procedure To install the AWS EFS CSI Driver Operator from the web console: Log in to the web console. Install the AWS EFS CSI Operator: Click Operators OperatorHub . Locate the AWS EFS CSI Operator by typing AWS EFS CSI in the filter box. Click the AWS EFS CSI Driver Operator button. Important Be sure to select the AWS EFS CSI Driver Operator and not the AWS EFS Operator . The AWS EFS Operator is a community Operator and is not supported by Red Hat. On the AWS EFS CSI Driver Operator page, click Install . On the Install Operator page, ensure that: All namespaces on the cluster (default) is selected. Installed Namespace is set to openshift-cluster-csi-drivers . Click Install . After the installation finishes, the AWS EFS CSI Operator is listed in the Installed Operators section of the web console. If you are using AWS EFS with AWS Security Token Service (STS), you must configure the AWS EFS CSI Driver with STS. For more information, see "Configuring AWS EFS CSI Driver with STS". Install the AWS EFS CSI Driver: Click administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, click Create ClusterCSIDriver . Use the following YAML file: apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed Click Create . Wait for the following Conditions to change to a "true" status: AWSEFSDriverCredentialsRequestControllerAvailable AWSEFSDriverNodeServiceControllerAvailable AWSEFSDriverControllerServiceControllerAvailable Additional resources Configuring AWS EFS CSI Driver with STS 5.9.4. Configuring AWS EFS CSI Driver Operator with Security Token Service This procedure explains how to configure the AWS EFS CSI Driver Operator with OpenShift Container Platform on AWS Security Token Service (STS). 
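If you are not sure whether your cluster uses STS, one quick check (a general hint, not part of the official procedure) is to inspect the cluster Authentication resource. A non-empty serviceAccountIssuer value typically indicates an STS-style configuration:

oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}'

If the command prints an issuer URL, continue with the STS configuration described here; if the output is empty, the cluster most likely does not use STS and you can skip this procedure.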
Perform this procedure after installing the AWS EFS CSI Operator, but before installing the AWS EFS CSI driver as part of Installing the AWS EFS CSI Driver Operator procedure. If you perform this procedure after installing the driver and creating volumes, your volumes will fail to mount into pods. Prerequisites AWS account credentials Procedure To configure the AWS EFS CSI Driver Operator with STS: Extract the CCO utility ( ccoctl ) binary from the OpenShift Container Platform release image, which you used to install the cluster with STS. For more information, see "Configuring the Cloud Credential Operator utility". Create and save an EFS CredentialsRequest YAML file, such as shown in the following example, and then place it in the credrequests directory: Example apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: openshift-aws-efs-csi-driver namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - elasticfilesystem:* effect: Allow resource: '*' secretRef: name: aws-efs-cloud-credentials namespace: openshift-cluster-csi-drivers serviceAccountNames: - aws-efs-csi-driver-operator - aws-efs-csi-driver-controller-sa Run the ccoctl tool to generate a new IAM role in AWS, and create a YAML file for it in the local file system ( <path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml ). USD ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com name=<name> is the name used to tag any cloud resources that are created for tracking. region=<aws_region> is the AWS region where cloud resources are created. dir=<path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the EFS CredentialsRequest file in step. <aws_account_id> is the AWS account ID. Example USD ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com Example output 2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs -openshift-cluster-csi-drivers-aws-efs-cloud- created 2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml 2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud- Create the AWS EFS cloud credentials and secret: USD oc create -f <path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml Example USD oc create -f /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml Example output secret/aws-efs-cloud-credentials created Additional resources Installing the AWS EFS CSI Driver Operator Configuring the Cloud Credential Operator utility 5.9.5. Creating the AWS EFS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. The AWS EFS CSI Driver Operator , after being installed, does not create a storage class by default. However, you can manually create the AWS EFS storage class. 5.9.5.1. 
Creating the AWS EFS storage class using the console Procedure In the OpenShift Container Platform console, click Storage StorageClasses . On the StorageClasses page, click Create StorageClass . On the StorageClass page, perform the following steps: Enter a name to reference the storage class. Optional: Enter the description. Select the reclaim policy. Select efs.csi.aws.com from the Provisioner drop-down list. Optional: Set the configuration parameters for the selected provisioner. Click Create . 5.9.5.2. Creating the AWS EFS storage class using the CLI Procedure Create a StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: "700" 3 gidRangeStart: "1000" 4 gidRangeEnd: "2000" 5 basePath: "/dynamic_provisioning" 6 1 provisioningMode must be efs-ap to enable dynamic provisioning. 2 fileSystemId must be the ID of the EFS volume created manually. 3 directoryPerms is the default permission of the root directory of the volume. In this example, the volume is accessible only by the owner. 4 5 gidRangeStart and gidRangeEnd set the range of POSIX Group IDs (GIDs) that are used to set the GID of the AWS access point. If not specified, the default range is 50000-7000000. Each provisioned volume, and thus AWS access point, is assigned a unique GID from this range. 6 basePath is the directory on the EFS volume that is used to create dynamically provisioned volumes. In this case, a PV is provisioned as "/dynamic_provisioning/<random uuid>" on the EFS volume. Only the subdirectory is mounted to pods that use the PV. Note A cluster admin can create several StorageClass objects, each using a different EFS volume. 5.9.6. Creating and configuring access to EFS volumes in AWS This procedure explains how to create and configure EFS volumes in AWS so that you can use them in OpenShift Container Platform. Prerequisites AWS account credentials Procedure To create and configure access to an EFS volume in AWS: On the AWS console, open https://console.aws.amazon.com/efs . Click Create file system : Enter a name for the file system. For Virtual Private Cloud (VPC) , select your OpenShift Container Platform's' virtual private cloud (VPC). Accept default settings for all other selections. Wait for the volume and mount targets to finish being fully created: Go to https://console.aws.amazon.com/efs#/file-systems . Click your volume, and on the Network tab wait for all mount targets to become available (~1-2 minutes). On the Network tab, copy the Security Group ID (you will need this in the step). Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups , and find the Security Group used by the EFS volume. On the Inbound rules tab, click Edit inbound rules , and then add a new rule with the following settings to allow OpenShift Container Platform nodes to access EFS volumes : Type : NFS Protocol : TCP Port range : 2049 Source : Custom/IP address range of your nodes (for example: "10.0.0.0/16") This step allows OpenShift Container Platform to use NFS ports from the cluster. Save the rule. 5.9.7. Dynamic provisioning for AWS EFS The AWS EFS CSI Driver supports a different form of dynamic provisioning than other CSI drivers. It provisions new PVs as subdirectories of a pre-existing EFS volume. The PVs are independent of each other. However, they all share the same EFS volume. When the volume is deleted, all PVs provisioned out of it are deleted too. 
The EFS CSI driver creates an AWS Access Point for each such subdirectory. Due to AWS AccessPoint limits, you can only dynamically provision 1000 PVs from a single StorageClass /EFS volume. Important Note that PVC.spec.resources is not enforced by EFS. In the example below, you request 5 GiB of space. However, the created PV is limitless and can store any amount of data (like petabytes). A broken application, or even a rogue application, can cause significant expenses when it stores too much data on the volume. Using monitoring of EFS volume sizes in AWS is strongly recommended. Prerequisites You have created AWS EFS volumes. You have created the AWS EFS storage class. Procedure To enable dynamic provisioning: Create a PVC (or StatefulSet or Template) as usual, referring to the StorageClass created above. apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi If you have problems setting up dynamic provisioning, see AWS EFS troubleshooting . Additional resources Creating and configuring access to AWS EFS volume(s) Creating the AWS EFS storage class 5.9.8. Creating static PVs with AWS EFS It is possible to use an AWS EFS volume as a single PV without any dynamic provisioning. The whole volume is mounted to pods. Prerequisites You have created AWS EFS volumes. Procedure Create the PV using the following YAML file: apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: "false" 3 1 spec.capacity does not have any meaning and is ignored by the CSI driver. It is used only when binding to a PVC. Applications can store any amount of data to the volume. 2 volumeHandle must be the same ID as the EFS volume you created in AWS. If you are providing your own access point, volumeHandle should be <EFS volume ID>::<access point ID> . For example: fs-6e633ada::fsap-081a1d293f0004630 . 3 If desired, you can disable encryption in transit. Encryption is enabled by default. If you have problems setting up static PVs, see AWS EFS troubleshooting . 5.9.9. AWS EFS security The following information is important for AWS EFS security. When using access points, for example, by using dynamic provisioning as described earlier, Amazon automatically replaces GIDs on files with the GID of the access point. In addition, EFS considers the user ID, group ID, and secondary group IDs of the access point when evaluating file system permissions. EFS ignores the NFS client's IDs. For more information about access points, see https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html . As a consequence, EFS volumes silently ignore FSGroup; OpenShift Container Platform is not able to replace the GIDs of files on the volume with FSGroup. Any pod that can access a mounted EFS access point can access any file on it. Unrelated to this, encryption in transit is enabled by default. For more information, see https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html . 5.9.10. AWS EFS troubleshooting The following information provides guidance on how to troubleshoot issues with AWS EFS: The AWS EFS Operator and CSI driver run in namespace openshift-cluster-csi-drivers . 
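As a quick health check, you can list the pods in that namespace and confirm that the Operator, controller, and node plugin pods are running. The grep filter is only a convenience; exact pod names vary from cluster to cluster:

oc get pods -n openshift-cluster-csi-drivers | grep -i efs

You should typically see the operator pod, the controller pods, and one node plugin pod per node, all in the Running state.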
To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command: USD oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created To show AWS EFS Operator errors, view the ClusterCSIDriver status: USD oc get clustercsidriver efs.csi.aws.com -o yaml If a volume cannot be mounted to a pod (as shown in the output of the following command): USD oc describe pod ... Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume "pvc-d7c097e6-67ec-4fae-b968-7e7056796449" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition 1 Warning message indicating volume not mounted. This error is frequently caused by AWS dropping packets between an OpenShift Container Platform node and AWS EFS. Check that the following are correct: AWS firewall and Security Groups Networking: port number and IP addresses 5.9.11. Uninstalling the AWS EFS CSI Driver Operator All EFS PVs are inaccessible after uninstalling the AWS EFS CSI Driver Operator. Prerequisites Access to the OpenShift Container Platform web console. Procedure To uninstall the AWS EFS CSI Driver Operator from the web console: Log in to the web console. Stop all applications that use AWS EFS PVs. Delete all AWS EFS PVs: Click Storage PersistentVolumeClaims . Select each PVC that is in use by the AWS EFS CSI Driver Operator, click the drop-down menu on the far right of the PVC, and then click Delete PersistentVolumeClaims . Uninstall the AWS EFS CSI Driver: Note Before you can uninstall the Operator, you must remove the CSI driver first. Click administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, for efs.csi.aws.com , on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver . When prompted, click Delete . Uninstall the AWS EFS CSI Operator: Click Operators Installed Operators . On the Installed Operators page, scroll or type AWS EFS CSI into the Search by name box to find the Operator, and then click it. On the upper, right of the Installed Operators > Operator details page, click Actions Uninstall Operator . When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually. After uninstalling, the AWS EFS CSI Driver Operator is no longer listed in the Installed Operators section of the web console. Note Before you can destroy a cluster ( openshift-install destroy cluster ), you must delete the EFS volume in AWS. An OpenShift Container Platform cluster cannot be destroyed when there is an EFS volume that uses the cluster's VPC. Amazon does not allow deletion of such a VPC. 5.9.12. 
Additional resources Configuring CSI volumes 5.10. Azure Disk CSI Driver Operator 5.10.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Microsoft Azure Disk Storage. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to Azure Disk storage assets, OpenShift Container Platform installs the Azure Disk CSI Driver Operator and the Azure Disk CSI driver by default in the openshift-cluster-csi-drivers namespace. The Azure Disk CSI Driver Operator provides a storage class named managed-csi that you can use to create persistent volume claims (PVCs). The Azure Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. The Azure Disk CSI driver enables you to create and mount Azure Disk PVs. 5.10.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Azure Disk storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in later versions of OpenShift Container Platform. 5.10.3. Creating a storage class with storage account type Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, you can obtain dynamically provisioned persistent volumes. When creating a storage class, you can designate the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Standard_LRS , Premium_LRS , StandardSSD_LRS , UltraSSD_LRS , Premium_ZRS , and StandardSSD_ZRS . For information about finding your Azure SKU tier, see SKU Types . ZRS has some region limitations. For information about these limitations, see ZRS limitations . Prerequisites Access to an OpenShift Container Platform cluster with administrator rights Procedure Use the following steps to create a storage class with a storage account type. Create a storage class designating the storage account type using a YAML file similar to the following: USD oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 provisioner: disk.csi.azure.com parameters: skuName: <storage-class-account-type> 2 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true EOF 1 Storage class name. 2 Storage account type. This corresponds to your Azure storage account SKU tier:`Standard_LRS`, Premium_LRS , StandardSSD_LRS , UltraSSD_LRS , Premium_ZRS , StandardSSD_ZRS . 
Ensure that the storage class was created by listing the storage classes: USD oc get storageclass Example output USD oc get storageclass NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE azurefile-csi file.csi.azure.com Delete Immediate true 68m managed-csi (default) disk.csi.azure.com Delete WaitForFirstConsumer true 68m sc-prem-zrs disk.csi.azure.com Delete WaitForFirstConsumer true 4m25s 1 1 New storage class with storage account type. 5.10.4. Machine sets that deploy machines with ultra disks using PVCs You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC. Additional resources Microsoft Azure ultra disks documentation Machine sets that deploy machines on ultra disks using in-tree PVCs Machine sets that deploy machines on ultra disks as data disks 5.10.4.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the machine set that you want to provision machines with ultra disks. Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: ... spec: metadata: ... labels: ... disk: ultrassd 1 ... providerSpec: value: ... ultraSSDCapability: Enabled 2 ... 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 These lines enable the use of ultra disks. Create a machine set using the updated configuration by running the following command: USD oc create -f <machine-set-name>.yaml Create a storage class that contains the following YAML definition: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: "2000" 2 diskMbpsReadWrite: "320" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5 1 Specify the name of the storage class. This procedure uses ultra-disk-sc for this value. 2 Specify the number of IOPS for the storage class. 3 Specify the throughput in MBps for the storage class. 4 For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com . For earlier versions of AKS, use kubernetes.io/azure-disk . 5 Optional: Specify this parameter to wait for the creation of the pod that will use the disk. Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3 1 Specify the name of the PVC. This procedure uses ultra-disk for this value. 2 This PVC references the ultra-disk-sc storage class. 3 Specify the size for the storage class. The minimum value is 4Gi . 
Create a pod that contains the following YAML definition: apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - "sleep" - "infinity" volumeMounts: - mountPath: "/mnt/azure" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2 1 Specify the label of the machine set that enables the use of ultra disks. This procedure uses disk.ultrassd for this value. 2 This pod references the ultra-disk PVC. Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk from within a pod, create workload that uses the mount point. Create a YAML file similar to the following example: apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - name: lun0p1 mountPath: "/tmp" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd 5.10.4.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 5.10.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered. For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, describe the pod by running the following command: USD oc -n <stuck_pod_namespace> describe pod <stuck_pod_name> 5.10.5. Additional resources Persistent storage using Azure Disk Configuring CSI volumes 5.11. Azure File CSI Driver Operator 5.11.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Microsoft Azure File Storage. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to Azure File storage assets, OpenShift Container Platform installs the Azure File CSI Driver Operator and the Azure File CSI driver by default in the openshift-cluster-csi-drivers namespace. The Azure File CSI Driver Operator provides a storage class that is named azurefile-csi that you can use to create persistent volume claims (PVCs). The Azure File CSI driver enables you to create and mount Azure File PVs. The Azure File CSI driver supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. 
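As an illustration of that on-demand provisioning, the following claim requests a shared volume from the azurefile-csi storage class. Azure File supports the ReadWriteMany access mode, so the same volume can be mounted by pods on different nodes; the claim name and size are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-shared-pvc     # illustrative name
spec:
  accessModes:
    - ReadWriteMany              # shared access across nodes
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 10Gi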
Azure File CSI Driver Operator does not support: Virtual hard disks (VHD) Network File System (NFS): OpenShift Container Platform does not deploy a NFS-backed storage class. Running on nodes with FIPS mode enabled. For more information about supported features, see Supported CSI drivers and features . 5.11.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Additional resources Persistent storage using Azure File Configuring CSI volumes 5.12. Azure Stack Hub CSI Driver Operator 5.12.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Azure Stack Hub Storage. Azure Stack Hub, which is part of the Azure Stack portfolio, allows you to run apps in an on-premises environment and deliver Azure services in your datacenter. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to Azure Stack Hub storage assets, OpenShift Container Platform installs the Azure Stack Hub CSI Driver Operator and the Azure Stack Hub CSI driver by default in the openshift-cluster-csi-drivers namespace. The Azure Stack Hub CSI Driver Operator provides a storage class ( managed-csi ), with "Standard_LRS" as the default storage account type, that you can use to create persistent volume claims (PVCs). The Azure Stack Hub CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. The Azure Stack Hub CSI driver enables you to create and mount Azure Stack Hub PVs. 5.12.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.12.3. Additional resources Configuring CSI volumes 5.13. GCP PD CSI Driver Operator 5.13.1. Overview OpenShift Container Platform can provision persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) persistent disk (PD) storage. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage assets, OpenShift Container Platform installs the GCP PD CSI Driver Operator and the GCP PD CSI driver by default in the openshift-cluster-csi-drivers namespace. GCP PD CSI Driver Operator : By default, the Operator provides a storage class that you can use to create PVCs. You also have the option to create the GCP PD storage class as described in Persistent storage using GCE Persistent Disk . GCP PD driver : The driver enables you to create and mount GCP PD PVs. 
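If the default storage class does not fit your needs, you can define your own. The following sketch is one possible configuration, built from the storage class parameters documented later in this section; the class name is illustrative, and choices such as regional replication assume a multi-zone setup that you should verify for your environment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd-ssd-regional  # illustrative name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd                   # SSD-backed persistent disks
  replication-type: regional-pd  # replicate the disk across two zones in the region
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer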
Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision GCP PD storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. 5.13.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.13.3. GCP PD CSI driver storage class parameters The Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) driver uses the CSI external-provisioner sidecar as a controller. This is a separate helper container that is deployed with the CSI driver. The sidecar manages persistent volumes (PVs) by triggering the CreateVolume operation. The GCP PD CSI driver uses the csi.storage.k8s.io/fstype parameter key to support dynamic provisioning. The following table describes all the GCP PD CSI storage class parameters that are supported by OpenShift Container Platform. Table 5.3. CreateVolume Parameters Parameter Values Default Description type pd-ssd or pd-standard pd-standard Allows you to choose between standard PVs or solid-state-drive PVs. replication-type none or regional-pd none Allows you to choose between zonal or regional PVs. disk-encryption-kms-key Fully qualified resource identifier for the key to use to encrypt new disks. Empty string Uses customer-managed encryption keys (CMEK) to encrypt new disks. 5.13.4. Creating a custom-encrypted persistent volume When you create a PersistentVolumeClaim object, OpenShift Container Platform provisions a new persistent volume (PV) and creates a PersistentVolume object. You can add a custom encryption key in Google Cloud Platform (GCP) to protect a PV in your cluster by encrypting the newly created PV. For encryption, the newly attached PV that you create uses customer-managed encryption keys (CMEK) on a cluster by using a new or existing Google Cloud Key Management Service (KMS) key. Prerequisites You are logged in to a running OpenShift Container Platform cluster. You have created a Cloud KMS key ring and key version. For more information about CMEK and Cloud KMS resources, see Using customer-managed encryption keys (CMEK) . Procedure To create a custom-encrypted PV, complete the following steps: Create a storage class with the Cloud KMS key. The following example enables dynamic provisioning of encrypted volumes: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: "WaitForFirstConsumer" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1 1 This field must be the resource identifier for the key that will be used to encrypt new disks. 
Values are case-sensitive. For more information about providing key ID values, see Retrieving a resource's ID and Getting a Cloud KMS resource ID . Note You cannot add the disk-encryption-kms-key parameter to an existing storage class. However, you can delete the storage class and recreate it with the same name and a different set of parameters. If you do this, the provisioner of the existing class must be pd.csi.storage.gke.io . Deploy the storage class on your OpenShift Container Platform cluster using the oc command: USD oc describe storageclass csi-gce-pd-cmek Example output Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none Create a file named pvc.yaml that matches the name of your storage class object that you created in the step: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi Note If you marked the new storage class as default, you can omit the storageClassName field. Apply the PVC on your cluster: USD oc apply -f pvc.yaml Get the status of your PVC and verify that it is created and bound to a newly provisioned PV: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s Note If your storage class has the volumeBindingMode field set to WaitForFirstConsumer , you must create a pod to use the PVC before you can verify it. Your CMEK-protected PV is now ready to use with your OpenShift Container Platform cluster. Additional resources Persistent storage using GCE Persistent Disk Configuring CSI volumes 5.14. IBM VPC Block CSI Driver Operator 5.14.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for IBM Virtual Private Cloud (VPC) Block Storage. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to IBM VPC Block storage assets, OpenShift Container Platform installs the IBM VPC Block CSI Driver Operator and the IBM VPC Block CSI driver by default in the openshift-cluster-csi-drivers namespace. The IBM VPC Block CSI Driver Operator provides three storage classes named ibmc-vpc-block-10iops-tier (default), ibmc-vpc-block-5iops-tier , and ibmc-vpc-block-custom for different tiers that you can use to create persistent volume claims (PVCs). The IBM VPC Block CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. The IBM VPC Block CSI driver enables you to create and mount IBM VPC Block PVs. 5.14.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 
Additional resources Configuring CSI volumes 5.15. OpenStack Cinder CSI Driver Operator 5.15.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for OpenStack Cinder. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to OpenStack Cinder storage assets, OpenShift Container Platform installs the OpenStack Cinder CSI Driver Operator and the OpenStack Cinder CSI driver in the openshift-cluster-csi-drivers namespace. The OpenStack Cinder CSI Driver Operator provides a CSI storage class that you can use to create PVCs. The OpenStack Cinder CSI driver enables you to create and mount OpenStack Cinder PVs. In OpenShift Container Platform, automatic migration from the OpenStack Cinder in-tree plugin to the CSI driver is generally available (GA). With migration enabled, volumes provisioned using the existing in-tree plugin are automatically migrated to use the OpenStack Cinder CSI driver. For more information, see CSI automatic migration feature . 5.15.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Cinder storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. 5.15.3. Making OpenStack Cinder CSI the default storage class The OpenStack Cinder CSI driver uses the cinder.csi.openstack.org parameter key to support dynamic provisioning. To enable OpenStack Cinder CSI provisioning in OpenShift Container Platform, it is recommended that you overwrite the default in-tree storage class with standard-csi . Alternatively, you can create the persistent volume claim (PVC) and specify the storage class as "standard-csi". In OpenShift Container Platform, the default storage class references the in-tree Cinder driver. However, with CSI automatic migration enabled, volumes created using the default storage class actually use the CSI driver. Procedure Use the following steps to apply the standard-csi storage class by overwriting the default in-tree storage class.
List the storage class: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h standard-csi kubernetes.io/cinder Delete WaitForFirstConsumer true 46h Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default storage class, as shown in the following example: USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Make another storage class the default by adding or modifying the annotation as storageclass.kubernetes.io/is-default-class=true . USD oc patch storageclass standard-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Verify that the PVC is now referencing the CSI storage class by default: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h Optional: You can define a new PVC without having to specify the storage class: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi A PVC that does not specify a specific storage class is automatically provisioned by using the default storage class. Optional: After the new file has been configured, create it in your cluster: USD oc create -f cinder-claim.yaml Additional resources Configuring CSI volumes 5.16. OpenStack Manila CSI Driver Operator 5.16.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for the OpenStack Manila shared file system service. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to Manila storage assets, OpenShift Container Platform installs the Manila CSI Driver Operator and the Manila CSI driver by default on any OpenStack cluster that has the Manila service enabled. The Manila CSI Driver Operator creates the required storage class that is needed to create PVCs for all available Manila share types. The Operator is installed in the openshift-cluster-csi-drivers namespace. The Manila CSI driver enables you to create and mount Manila PVs. The driver is installed in the openshift-manila-csi-driver namespace. 5.16.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.16.3. Manila CSI Driver Operator limitations The following limitations apply to the Manila Container Storage Interface (CSI) Driver Operator: Only NFS is supported OpenStack Manila supports many network-attached storage protocols, such as NFS, CIFS, and CEPHFS, and these can be selectively enabled in the OpenStack cloud. The Manila CSI Driver Operator in OpenShift Container Platform only supports using the NFS protocol. 
If NFS is not available and enabled in the underlying OpenStack cloud, you cannot use the Manila CSI Driver Operator to provision storage for OpenShift Container Platform. Snapshots are not supported if the back end is CephFS-NFS To take snapshots of persistent volumes (PVs) and revert volumes to snapshots, you must ensure that the Manila share type that you are using supports these features. A Red Hat OpenStack administrator must enable support for snapshots ( share type extra-spec snapshot_support ) and for creating shares from snapshots ( share type extra-spec create_share_from_snapshot_support ) in the share type associated with the storage class you intend to use. FSGroups are not supported Since Manila CSI provides shared file systems for access by multiple readers and multiple writers, it does not support the use of FSGroups. This is true even for persistent volumes created with the ReadWriteOnce access mode. It is therefore important not to specify the fsType attribute in any storage class that you manually create for use with Manila CSI Driver. Important In Red Hat OpenStack Platform 16.x and 17.x, the Shared File Systems service (Manila) with CephFS through NFS fully supports serving shares to OpenShift Container Platform through the Manila CSI. However, this solution is not intended for massive scale. Be sure to review important recommendations in CephFS NFS Manila-CSI Workload Recommendations for Red Hat OpenStack Platform . 5.16.4. Dynamically provisioning Manila CSI volumes OpenShift Container Platform installs a storage class for each available Manila share type. The YAML files that are created are completely decoupled from Manila and from its Container Storage Interface (CSI) plugin. As an application developer, you can dynamically provision ReadWriteMany (RWX) storage and deploy pods with applications that safely consume the storage using YAML manifests. You can use the same pod and persistent volume claim (PVC) definitions on-premise that you use with OpenShift Container Platform on AWS, GCP, Azure, and other platforms, with the exception of the storage class reference in the PVC definition. Note Manila service is optional. If the service is not enabled in Red Hat OpenStack Platform (RHOSP), the Manila CSI driver is not installed and the storage classes for Manila are not created. Prerequisites RHOSP is deployed with appropriate Manila share infrastructure so that it can be used to dynamically provision and mount volumes in OpenShift Container Platform. Procedure (UI) To dynamically create a Manila CSI volume using the web console: In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the appropriate storage class. Enter a unique name for the storage claim. Select the access mode to specify read and write access for the PVC you are creating. Important Use RWX if you want the persistent volume (PV) that fulfills this PVC to be mounted to multiple pods on multiple nodes in the cluster. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 
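Whichever procedure you use, it can help to first check which share types are available in your cloud. The Operator creates one storage class per share type, and the grep filter below is only a convenience based on the csi-manila- naming prefix described in the CLI procedure:

oc get storageclass | grep csi-manila

Pick the storage class that matches the share type you want to consume, for example csi-manila-gold in the sample PVC that follows.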
Procedure (CLI) To dynamically create a Manila CSI volume using the command-line interface (CLI): Create and save a file with the PersistentVolumeClaim object described by the following YAML: pvc-manila.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi storageClassName: csi-manila-gold 2 1 Use RWX if you want the persistent volume (PV) that fulfills this PVC to be mounted to multiple pods on multiple nodes in the cluster. 2 The name of the storage class that provisions the storage back end. Manila storage classes are provisioned by the Operator and have the csi-manila- prefix. Create the object you saved in the step by running the following command: USD oc create -f pvc-manila.yaml A new PVC is created. To verify that the volume was created and is ready, run the following command: USD oc get pvc pvc-manila The pvc-manila shows that it is Bound . You can now use the new PVC to configure a pod. Additional resources Configuring CSI volumes 5.17. Red Hat Virtualization CSI Driver Operator 5.17.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Red Hat Virtualization (RHV). Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to RHV storage assets, OpenShift Container Platform installs the oVirt CSI Driver Operator and the oVirt CSI driver by default in the openshift-cluster-csi-drivers namespace. The oVirt CSI Driver Operator provides a default StorageClass object that you can use to create Persistent Volume Claims (PVCs). The oVirt CSI driver enables you to create and mount oVirt PVs. 5.17.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Note The oVirt CSI driver does not support snapshots. 5.17.3. Red Hat Virtualization (RHV) CSI driver storage class OpenShift Container Platform creates a default object of type StorageClass named ovirt-csi-sc which is used for creating dynamically provisioned persistent volumes. To create additional storage classes for different configurations, create and save a file with the StorageClass object described by the following sample YAML: ovirt-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: "<boolean>" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: "<boolean>" 7 csi.storage.k8s.io/fstype: <file_system_type> 8 1 Name of the storage class. 2 Set to false if the storage class is the default storage class in the cluster. If set to true , the existing default storage class must be edited and set to false . 3 true enables dynamic volume expansion, false prevents it. true is recommended. 
4 Dynamically provisioned persistent volumes of this storage class are created with this reclaim policy. The default policy is Delete . 5 Indicates how to provision and bind PersistentVolumeClaims . When not set, VolumeBindingImmediate is used. This field is only applied by servers that enable the VolumeScheduling feature. 6 The RHV storage domain name to use. 7 If true , the disk is thin provisioned. If false , the disk is preallocated. Thin provisioning is recommended. 8 Optional: File system type to be created. Possible values: ext4 (default) or xfs . 5.17.4. Creating a persistent volume on RHV When you create a PersistentVolumeClaim (PVC) object, OpenShift Container Platform provisions a new persistent volume (PV) and creates a PersistentVolume object. Prerequisites You are logged in to a running OpenShift Container Platform cluster. You provided the correct RHV credentials in the ovirt-credentials secret. You have installed the oVirt CSI driver. You have defined at least one storage class. Procedure If you are using the web console to dynamically create a persistent volume on RHV: In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the appropriate StorageClass object, which is ovirt-csi-sc by default. Enter a unique name for the storage claim. Select the access mode. Currently, RWO (ReadWriteOnce) is the only supported access mode. Define the size of the storage claim. Select the Volume Mode: Filesystem : Mounted into pods as a directory. This mode is the default. Block : Block device, without any file system on it. Click Create to create the PersistentVolumeClaim object and generate a PersistentVolume object. If you are using the command-line interface (CLI) to dynamically create an RHV CSI volume: Create and save a file with the PersistentVolumeClaim object described by the following sample YAML: pvc-ovirt.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-ovirt spec: storageClassName: ovirt-csi-sc 1 accessModes: - ReadWriteOnce resources: requests: storage: <volume size> 2 volumeMode: <volume mode> 3 1 Name of the required storage class. 2 Volume size in GiB. 3 Supported options: Filesystem : Mounted into pods as a directory. This mode is the default. Block : Block device, without any file system on it. Create the object you saved in the previous step by running the following command: USD oc create -f pvc-ovirt.yaml To verify that the volume was created and is ready, run the following command: USD oc get pvc pvc-ovirt The output shows that pvc-ovirt is Bound . Additional resources Configuring CSI volumes Dynamic Provisioning 5.18. VMware vSphere CSI Driver Operator 5.18.1. Overview OpenShift Container Platform can provision persistent volumes (PVs) using the Container Storage Interface (CSI) VMware vSphere driver for Virtual Machine Disk (VMDK) volumes. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned persistent volumes (PVs) that mount to vSphere storage assets, OpenShift Container Platform installs the vSphere CSI Driver Operator and the vSphere CSI driver by default in the openshift-cluster-csi-drivers namespace. vSphere CSI Driver Operator : The Operator provides a storage class, called thin-csi , that you can use to create persistent volume claims (PVCs).
The vSphere CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. vSphere CSI driver : The driver enables you to create and mount vSphere PVs. In OpenShift Container Platform 4.11, the driver version is 2.5.1. The vSphere CSI driver supports all of the file systems supported by the underlying Red Hat Core OS release, including XFS and Ext4. For more information about supported file systems, see Overview of available file systems . Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision vSphere storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Note The vSphere CSI Driver supports dynamic and static provisioning. When using static provisioning in the PV specifications, do not use the key storage.kubernetes.io/csiProvisionerIdentity in csi.volumeAttributes because this key indicates dynamically provisioned PVs. 5.18.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.18.3. vSphere storage policy The vSphere CSI Driver Operator storage class uses vSphere's storage policy. OpenShift Container Platform automatically creates a storage policy that targets datastore configured in cloud configuration: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: thin-csi provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: "USDopenshift-storage-policy-xxxx" volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: false reclaimPolicy: Delete 5.18.4. ReadWriteMany vSphere volume support If the underlying vSphere environment supports the vSAN file service, then vSphere Container Storage Interface (CSI) Driver Operator installed by OpenShift Container Platform supports provisioning of ReadWriteMany (RWX) volumes. If vSAN file service is not configured, then ReadWriteOnce (RWO) is the only access mode available. If you do not have vSAN file service configured, and you request RWX, the volume fails to get created and an error is logged. For more information about configuring the vSAN file service in your environment, see vSAN File Service . You can request RWX volumes by making the following persistent volume claim (PVC): kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim spec: resources: requests: storage: 1Gi accessModes: - ReadWriteMany storageClassName: thin-csi Requesting a PVC of the RWX volume type should result in provisioning of persistent volumes (PVs) backed by the vSAN file service. 5.18.5. 
VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version 7.0 Update 1 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster Important If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the major version of OpenShift Container Platform, the oc CLI prompts you with the following message: VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver The message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver . 5.18.6. Removing a third-party vSphere CSI Driver Operator OpenShift Container Platform 4.11 includes a built-in version of the vSphere Container Storage Interface (CSI) Operator Driver that is supported by Red Hat. If you have installed a vSphere CSI driver provided by the community or another vendor, which is considered a third-party vSphere CSI driver, and you continue with upgrading to the major version of OpenShift Container Platform, the oc CLI prompts you with the following message: VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver The message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. The instructions outlined in the procedure show how to uninstall a third-party vSphere CSI Driver. Consult the vendor's or community provider's uninstall guide for more detailed instructions on removing the driver and its components. Important When removing a third-party vSphere CSI driver, you do not need to delete the associated persistent volume (PV) objects. Data loss typically does not occur, but Red Hat does not take any responsibility if data loss does occur. After you have removed the third-party vSphere CSI Driver from the OpenShift Container Platform cluster, installation of Red Hat's vSphere CSI Operator Driver automatically resumes. If you had existing vSphere CSI PV objects, their lifecycle is now managed by Red Hat's vSphere CSI Operator Driver. Procedure Delete the third-party vSphere CSI Driver (VMware vSphere Container Storage Plugin) Deployment and Daemonset objects. Delete the configmap and secret objects that were installed previously with the third-party vSphere CSI Driver. Delete the third-party vSphere CSI driver CSIDriver object: ~ USD oc delete CSIDriver csi.vsphere.vmware.com csidriver.storage.k8s.io "csi.vsphere.vmware.com" deleted After you have removed the third-party vSphere CSI Driver from the OpenShift Container Platform cluster, installation of Red Hat's vSphere CSI Driver Operator automatically resumes, and any conditions that could block upgrades to OpenShift Container Platform 4.11, or later, are automatically removed. If you had existing vSphere CSI PV objects, their lifecycle is now managed by Red Hat's vSphere CSI Driver Operator. 5.18.7. Additional resources Configuring CSI volumes | [
"oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF",
"oc new-app mysql-persistent",
"--> Deploying template \"openshift/mysql-persistent\" to project default",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s",
"kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: \"/data\" name: my-csi-inline-vol command: [ \"sleep\", \"1000000\" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar",
"oc create -f my-csi-app.yaml",
"oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedSecret metadata: name: my-share spec: secretRef: name: <name of secret> namespace: <namespace of secret> EOF",
"oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF",
"oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder",
"oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedSecret: my-share EOF",
"oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedConfigMap metadata: name: my-share spec: configMapRef: name: <name of configmap> namespace: <namespace of configmap> EOF",
"oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedconfigmaps resourceNames: - my-share verbs: - use EOF",
"create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder",
"oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedConfigMap: my-share EOF",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete",
"oc create -f volumesnapshotclass.yaml",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2",
"oc create -f volumesnapshot-dynamic.yaml",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1",
"oc create -f volumesnapshot-manual.yaml",
"oc describe volumesnapshot mysnap",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: \"2020-01-29T12:24:30Z\" 2 readyToUse: true 3 restoreSize: 500Mi",
"oc get volumesnapshotcontent",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1",
"oc delete volumesnapshot <volumesnapshot_name>",
"volumesnapshot.snapshot.storage.k8s.io \"mysnapshot\" deleted",
"oc delete volumesnapshotcontent <volumesnapshotcontent_name>",
"oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{\"metadata\": {\"finalizers\":null}}'",
"volumesnapshotclass.snapshot.storage.k8s.io \"csi-ocs-rbd-snapclass\" deleted",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"oc create -f pvc-restore.yaml",
"oc get pvc",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1",
"oc create -f pvc-clone.yaml",
"oc get pvc pvc-1-clone",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: CustomNoUpgrade customNoUpgrade: enabled: - CSIMigrationAWS 1",
"apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: openshift-aws-efs-csi-driver namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - elasticfilesystem:* effect: Allow resource: '*' secretRef: name: aws-efs-cloud-credentials namespace: openshift-cluster-csi-drivers serviceAccountNames: - aws-efs-csi-driver-operator - aws-efs-csi-driver-controller-sa",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com",
"2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs -openshift-cluster-csi-drivers-aws-efs-cloud- created 2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml 2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud-",
"oc create -f <path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml",
"oc create -f /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml",
"secret/aws-efs-cloud-credentials created",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: \"700\" 3 gidRangeStart: \"1000\" 4 gidRangeEnd: \"2000\" 5 basePath: \"/dynamic_provisioning\" 6",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi",
"apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: \"false\" 3",
"oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created",
"oc get clustercsidriver efs.csi.aws.com -o yaml",
"oc describe pod Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume \"pvc-d7c097e6-67ec-4fae-b968-7e7056796449\" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition",
"oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 provisioner: disk.csi.azure.com parameters: skuName: <storage-class-account-type> 2 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true EOF",
"oc get storageclass",
"oc get storageclass NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE azurefile-csi file.csi.azure.com Delete Immediate true 68m managed-csi (default) disk.csi.azure.com Delete WaitForFirstConsumer true 68m sc-prem-zrs disk.csi.azure.com Delete WaitForFirstConsumer true 4m25s 1",
"oc edit machineset <machine-set-name>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2",
"oc create -f <machine-set-name>.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: \"2000\" 2 diskMbpsReadWrite: \"320\" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3",
"apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - \"sleep\" - \"infinity\" volumeMounts: - mountPath: \"/mnt/azure\" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2",
"oc get machines",
"oc debug node/<node-name> -- chroot /host lsblk",
"apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd",
"StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.",
"oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: \"WaitForFirstConsumer\" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1",
"oc describe storageclass csi-gce-pd-cmek",
"Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi",
"oc apply -f pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h standard-csi kubernetes.io/cinder Delete WaitForFirstConsumer true 46h",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc patch storageclass standard-csi -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"oc create -f cinder-claim.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi storageClassName: csi-manila-gold 2",
"oc create -f pvc-manila.yaml",
"oc get pvc pvc-manila",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: \"<boolean>\" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: \"<boolean>\" 7 csi.storage.k8s.io/fstype: <file_system_type> 8",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-ovirt spec: storageClassName: ovirt-csi-sc 1 accessModes: - ReadWriteOnce resources: requests: storage: <volume size> 2 volumeMode: <volume mode> 3",
"oc create -f pvc-ovirt.yaml",
"oc get pvc pvc-ovirt",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: thin-csi provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: \"USDopenshift-storage-policy-xxxx\" volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: false reclaimPolicy: Delete",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim spec: resources: requests: storage: 1Gi accessModes: - ReadWriteMany storageClassName: thin-csi",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"~ USD oc delete CSIDriver csi.vsphere.vmware.com",
"csidriver.storage.k8s.io \"csi.vsphere.vmware.com\" deleted"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/storage/using-container-storage-interface-csi |
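The Manila procedure ends by noting that the new PVC can be used to configure a pod, but no pod definition is shown there. The following is a minimal sketch of a pod that mounts the pvc-manila claim created above; the pod name, container image, and mount path are illustrative assumptions rather than values taken from the procedure:

apiVersion: v1
kind: Pod
metadata:
  name: manila-demo-pod
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: [ "sleep", "infinity" ]
    volumeMounts:
    - mountPath: /mnt/shared
      name: shared-data
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: pvc-manila

Because the claim requests the ReadWriteMany access mode, the same volume can be mounted by additional pods on other nodes if the workload needs shared access.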
Chapter 54. Authentication and Interoperability | Chapter 54. Authentication and Interoperability A crash is reported after an unsuccessful lightweight CA key retrieval When using Identity Management (IdM), if retrieving the lightweight certificate authority (CA) key fails for some reason, the operation terminates unexpectedly with an uncaught exception. The exception results in a crash report. (BZ# 1478366 ) OpenLDAP causes programs to fail immediately in case of incorrect configuration Previously, the Mozilla implementation of Network Security Services (Mozilla NSS) silently ignored certain misconfigurations in the OpenLDAP suite, which caused programs to fail only on connection establishment. With this update, OpenLDAP has switched from Mozilla NSS to OpenSSL (see the release note for BZ# 1400578 ). With OpenSSL, the TLS context is established immediately, and therefore programs fail immediately. This behavior prevents potential security risks, such as keeping non-working TLS ports open. To work around this problem, verify and fix your OpenLDAP configuration. (BZ#1515833) OpenLDAP reports failures when CACertFile or CACertDir point to an invalid location Previously, if the CACertFile or CACertDir options pointed to an unreadable or otherwise unloadable location, the Mozilla implementation of Network Security Services (Mozilla NSS) did not necessarily consider it a misconfiguration. With this update, the OpenLDAP suite has switched from Mozilla NSS to OpenSSL (see the release note for BZ# 1400578 ). With OpenSSL, if CACertFile or CACertDir point to such an invalid location, the problem is no longer silently ignored. To avoid the failures, remove the misconfigured option, or make sure it points to a loadable location. Additionally, OpenLDAP now applies stricter rules for the contents of the directory to which CACertDir points. If you experience errors when using certificates in this directory, it is possible the directory is in an inconsistent state. To fix this problem, run the cacertdir_rehash command on the folder. For details on CACertFile and CACertDir, see these man pages: ldap.conf(5), slapd.conf(5), slapd-config(5), and ldap_set_option(3). (BZ# 1515918 , BZ#1515839) OpenLDAP does not update TLS configuration after inconsistent changes in cn=config With this update, OpenLDAP has switched from the Mozilla implementation of Network Security Services (Mozilla NSS) to OpenSSL (see the release note for BZ# 1400578 ). With OpenSSL, inconsistent changes of the TLS configuration in the cn=config database break the TLS protocol on the server, and configuration is not updated as expected. To avoid this problem, use only one change record to update the TLS configuration in cn=config . See the ldif(5) man page for a definition of a change record. (BZ#1524193) Identity Management terminates connections unexpectedly Due to a bug in Directory Server, Identity Management (IdM) terminates connections unexpectedly after a certain amount of time, and authentication fails with the following error: The problem occurs if you installed IdM on Red Hat Enterprise Linux 7.5 from an offline media. To work around the problem, run yum update to receive the updated 389-ds-base package which fixes the problem. (BZ# 1544477 ) Directory Server can terminate unexpectedly during shutdown Directory Server uses the nunc-stans framework to manage connection events. If a connection is closed when shutting down the server, a nunc-stans job can access a freed connection structure. 
As a consequence, Directory Server can terminate unexpectedly. Because this situation occurs in a late state of the shutdown process, data is not corrupted or lost. Currently, no workaround is available. (BZ# 1517383 ) | [
"kinit: Generic error (see e-text) while getting initial credentials"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/known_issues_authentication_and_interoperability |
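The OpenLDAP guidance above recommends updating the TLS settings in cn=config with a single change record. The following LDIF is a minimal sketch of that approach; the attribute set and certificate paths are assumptions and must be adjusted to your own deployment:

dn: cn=config
changetype: modify
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/openldap/certs/ca.crt
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/openldap/certs/server.crt
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/openldap/certs/server.key

Applying the file in one operation, for example with ldapmodify -Y EXTERNAL -H ldapi:/// -f tls.ldif, keeps all three attribute changes inside the same change record. If CACertDir points to a directory of certificates instead, run cacertdir_rehash on that directory after adding or replacing files, as noted above.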
7.109. libcgroup | 7.109.1. RHBA-2013:0452 - libcgroup bug fix and enhancement update Updated libcgroup packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The libcgroup packages provide tools and libraries to control and monitor control groups. Bug Fixes BZ#773544 Previously, the cgrulesengd daemon ignored the "--sticky" option of the cgexec command and, as a consequence, moved a process to another cgroup when the process called the setuid() or setgid() functions even if the process had to be stuck to the current cgroup. This bug is now fixed and the cgrulesengd daemon now checks whether the process is "sticky" or not when the process calls setuid or setgid. BZ# 819137 Previously, the lscgroup command dropped the first character of a path unless prefixed with a slash, which led to lscgroup generating invalid paths. This bug is now fixed and the generated paths are now correct. BZ#849757 Previously, adding a cgroup after the cgrulesengd daemon had started did not work. As a consequence, if a directory was created after cgrulesengd was already started, any /etc/cgrules.conf configuration for that directory would not be processed. With this update, a routine has been added that scans the cgrules.conf file and moves matching running tasks in the /proc/pid/ directory into configured cgroups; this routine is called at init time and also after inotify events on cgroups. BZ# 869990 Previously, the cgconfig service was not working properly with read-only file systems. As a consequence, cgconfig was not able to start with the default configuration on a Red Hat Enterprise Virtualization Hypervisor system. This update adds a check for read-only file systems to the cgconfig service, and it now works as expected with the default configuration on Red Hat Enterprise Virtualization Hypervisor systems. Enhancement BZ#738737 This update improves the logging facility and error messages generated by libcgroup. Users of libcgroup are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libcgroup
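As a brief illustration of the tools covered by these fixes, the following sketch shows a cgrules.conf rule that cgrulesengd processes and a cgexec invocation that uses the "--sticky" option; the user names, controllers, and cgroup paths are placeholder assumptions:

# /etc/cgrules.conf   (user or @group   controllers   destination cgroup)
alice        cpu,memory      batchjobs/
@students    cpuacct         restricted/

# Pin a command and its children to a cgroup so that cgrulesengd does not
# move them, even after setuid() or setgid() calls:
cgexec --sticky -g cpu,memory:batchjobs/ ./long_running_task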
Chapter 14. Using bound service account tokens | Chapter 14. Using bound service account tokens You can use bound service account tokens, which improves the ability to integrate with cloud provider identity access management (IAM) services, such as AWS IAM. 14.1. About bound service account tokens You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. You can request bound service account tokens by using volume projection and the TokenRequest API. 14.2. Configuring bound service account tokens using volume projection You can configure pods to request bound service account tokens by using volume projection. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have created a service account. This procedure assumes that the service account is named build-robot . Procedure Optionally, set the service account issuer. This step is typically not required if the bound tokens are used only within the cluster. Warning If you update the serviceAccountIssuer field and there are bound tokens already in use, all bound tokens with the issuer value will be invalidated. Unless the holder of a bound token has explicit support for a change in issuer, the holder will not request a new bound token until pods have been restarted. If necessary, you can manually restart all pods in the cluster so that the holder will request a new bound token. Before doing this, wait for a new revision of the Kubernetes API server pods to roll out with your service account issuer changes. Edit the cluster Authentication object: USD oc edit authentications cluster Set the spec.serviceAccountIssuer field to the desired service account issuer value: spec: serviceAccountIssuer: https://test.default.svc 1 1 This value should be a URL from which the recipient of a bound token can source the public keys necessary to verify the signature of the token. The default is https://kubernetes.default.svc . Save the file to apply the changes. Optional: Manually restart all pods in the cluster so that the holder will request a new bound token. Wait for a new revision of the Kubernetes API server pods to roll out. It can take several minutes for all nodes to update to the new revision. Run the following command: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 Manually restart all pods in the cluster: Warning Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. 
USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Configure a pod to use a bound service account token by using volume projection. Create a file called pod-projected-svc-token.yaml with the following contents: apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4 1 A reference to an existing service account. 2 The path relative to the mount point of the file to project the token into. 3 Optionally set the expiration of the service account token, in seconds. The default is 3600 seconds (1 hour) and must be at least 600 seconds (10 minutes). The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. 4 Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server. Create the pod: USD oc create -f pod-projected-svc-token.yaml The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration. The application that uses the bound token must handle reloading the token when it rotates. The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours. | [
"oc edit authentications cluster",
"spec: serviceAccountIssuer: https://test.default.svc 1",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4",
"oc create -f pod-projected-svc-token.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/authentication_and_authorization/bound-service-account-tokens |
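The procedure above notes that an application using a bound token must handle reloading it when the kubelet rotates the file. The following shell sketch illustrates that pattern for the vault-token path used in the example; the target URL is a placeholder assumption for whatever audience-aware service validates the token:

#!/bin/sh
TOKEN_FILE=/var/run/secrets/tokens/vault-token
# Read the file on every request instead of caching the value once,
# because the kubelet rewrites the file as the token approaches expiration.
TOKEN="$(cat "${TOKEN_FILE}")"
curl --silent --header "Authorization: Bearer ${TOKEN}" https://service.example.com/api/resource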
Chapter 3. Alternative provisioning network methods | Chapter 3. Alternative provisioning network methods This section contains information about other methods that you can use to configure the provisioning network to accommodate routed spine-leaf with composable networks. 3.1. VLAN Provisioning network In this example, the director deploys new overcloud nodes through the provisioning network and uses a VLAN tunnel across the L3 topology. For more information, see Figure 3.1, "VLAN provisioning network topology" . If you use a VLAN provisioning network, the director DHCP servers can send DHCPOFFER broadcasts to any leaf. To establish this tunnel, trunk a VLAN between the Top-of-Rack (ToR) leaf switches. In the following diagram, the StorageLeaf networks are presented to the Ceph storage and Compute nodes; the NetworkLeaf represents an example of any network that you want to compose. Figure 3.1. VLAN provisioning network topology 3.2. VXLAN Provisioning network In this example, the director deploys new overcloud nodes through the provisioning network and uses a VXLAN tunnel to span across the layer 3 topology. For more information, see Figure 3.2, "VXLAN provisioning network topology" . If you use a VXLAN provisioning network, the director DHCP servers can send DHCPOFFER broadcasts to any leaf. To establish this tunnel, configure VXLAN endpoints on the Top-of-Rack (ToR) leaf switches. Figure 3.2. VXLAN provisioning network topology | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/spine_leaf_networking/assembly_alternative-provisioning-network-methods |
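Configuration of the Top-of-Rack switches themselves is vendor-specific and outside the scope of this section. As a conceptual sketch only, the following Linux iproute2 commands show the kind of VXLAN endpoint and bridge that can carry a provisioning segment to a peer leaf; the VNI, interface names, and IP addresses are placeholder assumptions:

# Create a VXLAN endpoint (VNI 30) towards the peer leaf
ip link add vxlan30 type vxlan id 30 dstport 4789 local 172.20.1.1 remote 172.20.2.1 dev eth0
# Bridge the tunnel with the port that faces the provisioning NICs on this leaf
ip link add br-prov type bridge
ip link set vxlan30 master br-prov
ip link set eth1 master br-prov
ip link set vxlan30 up
ip link set br-prov up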
8.255. virt-who | 8.255.1. RHBA-2014:1513 - virt-who bug fix and enhancement update An updated virt-who package that fixes several bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The virt-who package provides an agent that collects information about virtual guests present in the system and reports them to the Red Hat Subscription Manager tool. Note The virt-who package has been upgraded to upstream version 0.10, which provides a number of bug fixes and enhancements over the previous version. This update includes support for multiple vCenter servers, fixed querying by cluster in large ESX environments, corrected communication with the Red Hat Satellite server when ESXi has no host, fixed unregistering from the Subscription Asset Manager (SAM) server, a fixed bug in Virtual Desktop and Server Management (VDSM) mode, support for encrypted credentials, and a fixed error when creating new VMs. (BZ# 1002640 , BZ# 994575 , BZ# 1002447 , BZ# 1009230 , BZ# 1011877 , BZ# 1017056 , BZ# 1081286 , BZ# 1082416 ) This update also fixes the following bugs: Bug Fixes BZ# 1098019 Previously, the virt-who daemon did not report guest attributes to the server, which disabled the virt_guest_limit feature. With this update, virt-who has been modified to correctly report guest attributes. As a result, virt_guest_limit is now supported by virt-who. BZ# 1113938 Prior to this update, every call to the Libvirtd.listDomains() function from the /usr/share/virt-who/virt/libvirtd/libvirtd.py script opened a new connection to the libvirtd daemon but did not close it. Consequently, after several iterations, virt-who consumed all connections allowed for any client of libvirtd. With this update, Libvirtd.listDomains() has been modified to properly close the libvirtd connections, thus fixing this bug. Users of virt-who are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/virt-who
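To illustrate the multiple vCenter server and encrypted credential support mentioned above, the following sketch shows two ESX sections in a virt-who configuration file; the host names, owner, and environment values are placeholders, the exact option set depends on the subscription management backend, and the encrypted password is generated separately with the virt-who-password utility:

# /etc/virt-who.d/vcenters.conf
[vcenter-east]
type=esx
server=vcenter-east.example.com
username=administrator@vsphere.local
encrypted_password=<hash produced by virt-who-password>
owner=<organization>
env=<environment>

[vcenter-west]
type=esx
server=vcenter-west.example.com
username=administrator@vsphere.local
encrypted_password=<hash produced by virt-who-password>
owner=<organization>
env=<environment>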
Chapter 1. Configuring identity stores | Chapter 1. Configuring identity stores 1.1. Creating an aggregate realm 1.1.1. Aggregate realm in Elytron With an aggregate realm, aggregate-realm , you can use one security realm for authentication and another security realm, or an aggregation of multiple security realms, for authorization in Elytron. For example, you can configure an aggregate-realm to use an ldap-realm for authentication and aggregation of a filesystem-realm and an ldap-realm for authorization. An identity is created in an aggregate realm configured with multiple authorization realms as follows: Attribute values from each authorization realm are loaded. If an attribute is defined in more than one authorization realm, the value of the first occurrence of the attribute is used. The following example illustrates how identity is created when multiple authorization realms contain definitions for the same identity attribute. Example aggregate realm configuration In the example, the configured aggregate-realm references two existing security realms: "exampleLDAPRealm", which is an LDAP realm, and "exampleFilesystemRealm", which is a filesystem realm. Attribute values obtained from the LDAP realm: Attribute values obtained from the filesystem realm: Resulting identity obtained from the aggregate realm: The example aggregate-realm uses the value for the attribute mail defined in the LDAP realm because the LDAP realm is referenced before the filesystem realm. Additional resources Creating an aggregate-realm in Elytron 1.1.2. Examples of creating security realms required for an aggregate realm The following examples illustrate creating ldap-realm and filesystem-realm . You can reference these security realms in an aggregate-realm . 1.1.2.1. Creating an ldap-realm in Elytron example Create an Elytron security realm backed by a Lightweight Directory Access Protocol (LDAP) identity store to secure the JBoss EAP server interfaces or the applications deployed on the server. For the examples in this procedure, the following LDAP Data Interchange Format (LDIF) is used: The LDAP connection parameters used for the example are as follows: LDAP URL: ldap://10.88.0.2 LDAP admin password: secret You need this for Elytron to connect with the LDAP server. LDAP admin Distinguished Name (DN): (cn=admin,dc=wildfly,dc=org) LDAP organization: wildfly If no organization name is specified, it defaults to Example Inc . LDAP domain: wildfly.org This is the name that is matched when the platform receives an LDAP search reference. Prerequisites You have configured an LDAP identity store. JBoss EAP is running. Procedure Configure a directory context that provides the URL and the principal used to connect to the LDAP server. Example Create an LDAP realm that references the directory context. Specify the Search Base DN and how users are mapped. Syntax Example You can now use this realm to create a security domain or to combine with another realm in failover-realm , distributed-realm , or aggregate-realm . 1.1.2.2. Creating a filesystem-realm in Elytron example Create an Elytron security realm backed by a file system-based identity store to secure the JBoss EAP server interfaces or the applications deployed on the server. Prerequisites JBoss EAP is running. Procedure Create a filesystem-realm in Elytron. Syntax Example Add a user to the realm and configure the user's role. Add a user. Syntax Example Set roles for the user. Syntax Example Set attributes for the user. 
Syntax Example You can now use this realm to create a security domain or to combine with another realm in failover-realm , distributed-realm , or aggregate-realm . 1.1.3. Creating an aggregate-realm in Elytron Create an aggregate-realm in Elytron that uses one security realm for authentication and aggregation of multiple security realms for authorization. Use the aggregate-realm to create a security domain to add authentication and authorization to management interfaces and deployed applications. Prerequisites JBoss EAP is running. You have created the realms to reference from the aggregate realm. Procedure Create an aggregate-realm from existing security realms. Syntax Example Create a role decoder to map attributes to roles. Syntax Example Create a security domain that references the aggregate-realm and the role decoder. Syntax Example You now can use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . Additional resources aggregate-realm attributes security-domain attributes simple-role-decoder attributes 1.2. Creating a caching realm 1.2.1. Caching realm in Elytron Elytron provides caching-realm to cache the results of a credential lookup from a security realm. The caching-realm caches the PasswordCredential credential using a LRU or Least Recently Used caching strategy, in which the least accessed entries are discarded when maximum number of entries is reached. You can use a caching-realm with the following security realms: filesystem-realm jdbc-realm ldap-realm a custom security realm If you make changes to your credential source outside of JBoss EAP, those changes are only propagated to a JBoss EAP caching realm if the underlying security realm supports listening. Only ldap-realm supports listening. However, filtered attributes, such as roles , inside the ldap-realm do not support listening. To ensure that your caching realm has a correct cache of user data, ensure the following: Clear the caching-realm cache after you modify the user attributes at your credential source. Modify your user attributes through the caching realm rather than at your credential source. Important Making user changes through a caching realm is provided as Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Additional resources Creating a caching-realm in Elytron caching-realm attributes Clearing the caching-realm cache 1.2.2. Creating a caching-realm in Elytron Create a caching-realm and a security domain that references the realm to secure the JBoss EAP server interfaces or the applications deployed on the server. Note An ldap-realm configured as caching realm does not support Active Directory. For more information, see Changing LDAP/AD User Password via JBossEAP CLI for Elytron . Prerequisites You have configured the security realm to cache. Procedure Create a caching-realm that references the security realm to cache. Syntax Example Create a security domain that references the caching-realm . 
Syntax Example Verification To verify that Elytron can load data from the security realm referenced in the caching-realm into the caching-realm , use the following command: Syntax Example You now can use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . Additional resources caching-realm attributes Clearing the caching-realm cache security-domain attributes 1.2.3. Clearing the caching-realm cache Clearing a caching-realm cache forces Elytron to re-populate the cache by using the latest data from the security realm, which Elytron is configured to cache. Prerequisites A caching-realm is configured. Procedure Clear the caching-realm cache. Syntax Example Additional resources caching-realm attributes 1.3. Creating a distributed realm 1.3.1. Distributed realm in Elytron With a distributed realm, you can search across different identity stores by referencing existing security realms. The identity obtained is used for both authentication and authorization. Elytron invokes the security realms in a distributed realm in the order that you define them in the distributed-realm resource. Example distributed-realm configuration In the example, the configured distributed-realm references two existing security realms: "exampleLDAPRealm", which is an LDAP realm, and "exampleFilesystemRealm", which is a filesystem realm. Elytron searches the referenced security realms sequentially as follows: Elytron first searches the LDAP realm for a matching identity. If Elytron finds a match, the authentication succeeds. If Elytron does not find a match, it searches the filesystem realm. By default, in case the connection to any identity store fails before an identity is matched, the authentication fails with an exception RealmUnavailableException and no more realms are searched. You can change this behavior by setting the attribute ignore-unavailable-realms to true . If the connection to an identity store fails when ignore-unavailable-realms is set to true , Elytron continues to search the remaining realms. When ignore-unavailable-realms is set to true , emit-events is by default set to true , so a SecurityEvent is emitted in case any of the queried realms is unavailable. You can turn this off by setting emit-events to false . Additional resources Creating a distributed-realm in Elytron 1.3.2. Examples of creating security realms required for a distributed realm The following examples illustrate creating ldap-realm and filesystem-realm . You can reference these security realms in a distributed-realm . 1.3.2.1. Creating an ldap-realm in Elytron example Create an Elytron security realm backed by a Lightweight Directory Access Protocol (LDAP) identity store to secure the JBoss EAP server interfaces or the applications deployed on the server. For the examples in this procedure, the following LDAP Data Interchange Format (LDIF) is used: The LDAP connection parameters used for the example are as follows: LDAP URL: ldap://10.88.0.2 LDAP admin password: secret You need this for Elytron to connect with the LDAP server. LDAP admin Distinguished Name (DN): (cn=admin,dc=wildfly,dc=org) LDAP organization: wildfly If no organization name is specified, it defaults to Example Inc . LDAP domain: wildfly.org This is the name that is matched when the platform receives an LDAP search reference. Prerequisites You have configured an LDAP identity store. JBoss EAP is running. 
Procedure Configure a directory context that provides the URL and the principal used to connect to the LDAP server. Example Create an LDAP realm that references the directory context. Specify the Search Base DN and how users are mapped. Syntax Example You can now use this realm to create a security domain or to combine with another realm in failover-realm , distributed-realm or aggregate-realm . You can also configure a caching-realm for the ldap-realm to cache the result of lookup and improve performance. 1.3.2.2. Creating a filesystem-realm in Elytron example Create an Elytron security realm backed by a file system-based identity store to secure the JBoss EAP server interfaces or the applications deployed on the server. Prerequisites JBoss EAP is running. Procedure Create a filesystem-realm in Elytron. Syntax Example Add a user to the realm and configure the user's role. Add a user. Syntax Example Set a password for the user. Syntax Example Set roles for the user. Syntax Example You can now use this realm to create a security domain or to combine with another realm in failover-realm , distributed-realm , or aggregate-realm . 1.3.3. Creating a distributed-realm in Elytron Create a distributed-realm in Elytron that references existing security realms to search for an identity. Use the distributed-realm to create a security domain to add authentication and authorization to management interfaces or the applications deployed on the server. Prerequisites JBoss EAP is running. You have created the realms to reference in the distributed-realm . Procedure Create a distributed-realm referencing existing security realms. Syntax Example Create a role decoder to map attributes to roles. Syntax Example Create a security domain that references the distributed-realm and the role decoder. Syntax Example You now can use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . Additional resources distributed-realm attributes security-domain attributes simple-role-decoder attributes 1.4. Creating a failover realm 1.4.1. Failover realm in Elytron You can configure a failover security realm, failover-realm , in Elytron that references two existing security realms so that in case one security realm fails, Elytron uses the other as a backup. A failover-realm in Elytron references two security realms: delegate-realm : The primary security realm to use. failover-realm : The security realm to use as the backup. Example failover-realm configuration In the example, exampleLDAPRealm , which is an ldap-realm , is used as the delegate realm and exampleFileSystemRealm , which is a filesystem-realm is used as the failover-realm . In the case that the ldap-realm fails, Elytron will use the filesystem-realm for authentication and authorization. Note In a failover-realm , the failover-realm is invoked only when the delegate-realm fails. The fail-over realm is not invoked if the connection to the delegate-realm succeeds but the required identity is not found. To search for identity across multiple security realms, use the distributed-realm . 1.4.2. Examples of creating security realms required for a failover realm The following examples illustrate creating ldap-realm and filesystem-realm . You can reference these security realms in a failover-realm . 1.4.2.1. 
Creating an ldap-realm in Elytron example Create an Elytron security realm backed by a Lightweight Directory Access Protocol (LDAP) identity store to secure the JBoss EAP server interfaces or the applications deployed on the server. For the examples in this procedure, the following LDAP Data Interchange Format (LDIF) is used: The LDAP connection parameters used for the example are as follows: LDAP URL: ldap://10.88.0.2 LDAP admin password: secret You need this for Elytron to connect with the LDAP server. LDAP admin Distinguished Name (DN): (cn=admin,dc=wildfly,dc=org) LDAP organization: wildfly If no organization name is specified, it defaults to Example Inc . LDAP domain: wildfly.org This is the name that is matched when the platform receives an LDAP search reference. Prerequisites You have configured an LDAP identity store. JBoss EAP is running. Procedure Configure a directory context that provides the URL and the principal used to connect to the LDAP server. Example Create an LDAP realm that references the directory context. Specify the Search Base DN and how users are mapped. Syntax Example You can now use this realm to create a security domain or to combine with another realm in failover-realm , distributed-realm or aggregate-realm . You can also configure a caching-realm for the ldap-realm to cache the result of lookup and improve performance. 1.4.2.2. Creating a filesystem-realm in Elytron example Create an Elytron security realm backed by a file system-based identity store to secure the JBoss EAP server interfaces or the applications deployed on the server. Prerequisites JBoss EAP is running. Procedure Create a filesystem-realm in Elytron. Syntax Example Add a user to the realm and configure the user's role. Add a user. Syntax Example Set a password for the user. Syntax Example Set roles for the user. Syntax Example You can now use this realm to create a security domain or to combine with another realm in failover-realm , distributed-realm , or aggregate-realm . 1.4.3. Creating a failover-realm in Elytron Create a failover security realm in Elytron that references existing security realms as a delegate realm, the default realm to use, and a failover realm. Elytron uses the configured failover realm in case the delegate realm fails. Use the security realm to create a security domain to add authentication and authorization to management interfaces or the applications deployed on the server. Prerequisites JBoss EAP is running. You have created the realms to use as the delegate and failover realm. Procedure Create a failover-realm from existing security realms. Syntax Example Create a role decoder to map attributes to roles. Syntax Example Create a security domain that references the failover-realm and the role decoder. Syntax Example You now can use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . Additional resources failover-realm attributes security-domain attributes simple-role-decoder attributes 1.5. Creating a JAAS realm 1.5.1. JAAS realm in Elytron The Java Authentication and Authorization Service (JAAS) realm, jaas-realm , is a security realm that you can use to configure custom login modules in the elytron subsystem for credential verification of users and assigning users roles. You can use jaas-realm for securing both JBoss EAP management interfaces and the deployed applications. 
The JAAS realm verifies user credentials by initializing a javax.security.auth.login.LoginContext , which uses login modules specified in the JAAS configuration file. A login module is an implementation of javax.security.auth.login.LoginContext.LoginModule interface. Add these implementations as a JBoss EAP module to your server and specify them in the JAAS configuration file. Example of JAAS configuration file 1 Name of the entry that you use when configuring the jaas-realm . 2 Login module with its optional flags. You can use all the flags defined by JAAS. For more information, see JAAS Login Configuration File in the Oracle Java SE documentation. 3 Login module with its optional flags and options. Subject's principals to attributes mapping and roles association in login modules You can add attributes to identities obtained from login modules by utilizing a subject 's principals. A subject is the user being authenticated and principals are identifiers, such as the user name, contained within a subject. Elytron obtains and maps identities as follows: Login modules use javax.security.auth.Subject to represent the user, subject , being authenticated. A subject can have multiple instances of java.security.Principal , principal , associated with it. Elytron uses org.wildfly.security.auth.server.SecurityIdentity to represent authenticated users. Elytron maps subject to SecurityIdentity . A subject's principal s are mapped to security identity's attributes with the following rule: The key of the attribute is principal 's simple class name, obtained by principal.getClass().getSimpleName() call. The value is the principal 's name, obtained by principal.getName() call. For principal s of the same type, the values are appended to the collection under the attribute key. Additional resources Developing custom JAAS login modules Creating a jaas-realm in Elytron Subject class Javadocs Principal class Javadocs 1.5.2. Developing custom JAAS login modules You can create custom Java Authentication and Authorization Service (JAAS) login modules to implement custom authentication and authorization functionality. You can use the custom JAAS login modules through the jaas-realm in the Elytron subsystem to secure JBoss EAP management interfaces and deployed applications. The login modules are not part of a deployment, you include them as JBoss EAP modules. Note The following procedures are provided as an example only. If you already have an application that you want to secure, you can skip these and go directly to Adding authentication and authorization to applications . 1.5.2.1. Creating a Maven project for JAAS login module development For creating custom Java Authentication and Authorization Service (JAAS) login modules, create a Maven project with the required dependencies and directory structure. Prerequisites You have installed Maven. For more information, see Downloading Apache Maven . Procedure Use the mvn command in the CLI to set up a Maven project. This command creates the directory structure for the project and the pom.xml configuration file. Syntax Example Navigate to the application root directory. 
Syntax Example Replace the content of the generated pom.xml file with the following text: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>custom.loginmodules</groupId> <artifactId>custom-login-modules</artifactId> <version>1.0</version> <dependencies> <dependency> <groupId>org.wildfly.security</groupId> <artifactId>wildfly-elytron</artifactId> <version>1.17.2.Final</version> </dependency> <dependency> <groupId>jakarta.security.enterprise</groupId> <artifactId>jakarta.security.enterprise-api</artifactId> <version>3.0.0</version> </dependency> </dependencies> <properties> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> </properties> </project> Remove the directories site and test because they are not required for this example. Verification In the application root directory, enter the following command: You get an output similar to the following: You can now create custom JAAS login modules. 1.5.2.2. Creating custom JAAS login modules Create a custom Java Authentication and Authorization Service (JAAS) login module by creating a class that implements the javax.security.auth.spi.LoginModule interface. Additionally, create JAAS configuration file with flags and options for the custom login module. In this procedure, <application_home> refers to the directory that contains the pom.xml configuration file for the application. Prerequisites You have created a Maven project. For more information, see Creating a Maven project for JAAS login module development . Procedure Create a directory to store the Java files. Syntax Example Navigate to the directory containing the source files. Syntax Example Delete the generated file App.java . Create a file ExampleCustomLoginModule.java for custom login module source. package com.example.loginmodule; import org.wildfly.security.auth.principal.NamePrincipal; import javax.security.auth.Subject; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; import javax.security.auth.callback.NameCallback; import javax.security.auth.callback.PasswordCallback; import javax.security.auth.callback.UnsupportedCallbackException; import javax.security.auth.login.LoginException; import javax.security.auth.spi.LoginModule; import java.io.IOException; import java.security.Principal; import java.util.Arrays; import java.util.HashMap; import java.util.Map; public class ExampleCustomLoginModule implements LoginModule { private final Map<String, char[]> usersMap = new HashMap<String, char[]>(); private Principal principal; private Subject subject; private CallbackHandler handler; /** * In this example, identities are created as fixed Strings. * * The identities are: * user1 has the password passwordUser1 * user2 has the password passwordUser2 * * Use these credentials when you secure management interfaces * or applications with this login module. * * In a production login module, you would get the identities * from a data source. 
* */ @Override public void initialize(Subject subject, CallbackHandler callbackHandler, Map<String, ?> sharedState, Map<String, ?> options) { this.subject = subject; this.handler = callbackHandler; this.usersMap.put("user1", "passwordUser1".toCharArray()); this.usersMap.put("user2", "passwordUser2".toCharArray()); } @Override public boolean login() throws LoginException { // obtain the incoming username and password from the callback handler NameCallback nameCallback = new NameCallback("Username"); PasswordCallback passwordCallback = new PasswordCallback("Password", false); Callback[] callbacks = new Callback[]{nameCallback, passwordCallback}; try { this.handler.handle(callbacks); } catch (UnsupportedCallbackException | IOException e) { throw new LoginException("Error handling callback: " + e.getMessage()); } final String username = nameCallback.getName(); this.principal = new NamePrincipal(username); final char[] password = passwordCallback.getPassword(); char[] storedPassword = this.usersMap.get(username); if (!Arrays.equals(storedPassword, password)) { throw new LoginException("Invalid password"); } else { return true; } } /** * user1 is assigned the roles Admin, User and Guest. * In a production login module, you would get the identities * from a data source. * */ @Override public boolean commit() throws LoginException { if (this.principal.getName().equals("user1")) { this.subject.getPrincipals().add(new Roles("Admin")); this.subject.getPrincipals().add(new Roles("User")); this.subject.getPrincipals().add(new Roles("Guest")); } return true; } @Override public boolean abort() throws LoginException { return true; } @Override public boolean logout() throws LoginException { this.subject.getPrincipals().clear(); return true; } /** * Principal with simple classname 'Roles' will be mapped to the identity's attribute with name 'Roles'. */ private static class Roles implements Principal { private final String name; Roles(final String name) { this.name = name; } /** * @return name of the principal. This will be added as a value to the identity's attribute which has a name equal to the simple name of this class. In this example, this value will be added to the attribute with a name 'Roles'. */ public String getName() { return this.name; } } } In the <application_home> directory, create JAAS configuration file JAAS-login-modules.conf . exampleConfiguration is the Entry name. com.example.loginmodule.ExampleCustomLoginModule is the login module. optional is the flag. Compile the login module. You can now use the login module to secure JBoss EAP management interfaces and deployed applications. 1.5.3. Creating a jaas-realm in Elytron Create an Elytron security realm backed by Java Authentication and Authorization Service (JAAS)-compatible custom login module to secure JBoss EAP server interfaces or deployed applications. Use the security realm to create a security domain. Prerequisites You have packaged custom login modules as JAR. For an example login module, see Developing custom JAAS login modules . JBoss EAP is running. Procedure Add the login module JAR to JBoss EAP as a module using the management CLI. Syntax Example Create jaas-realm from the login module and the JAAS login configuration file. Syntax Example Create a security domain that references the jaas-realm . Syntax Example You now can use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . 
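Verification (an optional sanity check, not part of the procedure above): because the realm is an ordinary management resource, you can read its definition back to confirm that the entry, path, and module attributes were stored as intended: /subsystem=elytron/jaas-realm=exampleSecurityRealm:read-resource A successful response reports "outcome" => "success" and echoes the configured attributes.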
Additional resources module command arguments jaas-realm attributes security-domain attributes | [
"/subsystem=elytron/aggregate-realm=exampleSecurityRealm:add(authentication-realm=exampleLDAPRealm,authorization-realms=[exampleLDAPRealm,exampleFileSystemRealm])",
"mail: [email protected] telephoneNumber: 0000 0000",
"mail: [email protected] website: http://www.example.com/",
"mail: [email protected] telephoneNumber: 0000 0000 website: http://www.example.com/",
"dn: ou=Users,dc=wildfly,dc=org objectClass: organizationalUnit objectClass: top ou: Users dn: uid=user1,ou=Users,dc=wildfly,dc=org objectClass: top objectClass: person objectClass: inetOrgPerson cn: user1 sn: user1 uid: user1 userPassword: passwordUser1 mail: [email protected] telephoneNumber: 0000 0000 dn: ou=Roles,dc=wildfly,dc=org objectclass: top objectclass: organizationalUnit ou: Roles dn: cn=Admin,ou=Roles,dc=wildfly,dc=org objectClass: top objectClass: groupOfNames cn: Admin member: uid=user1,ou=Users,dc=wildfly,dc=org",
"/subsystem=elytron/dir-context= <dir_context_name> :add(url=\" <LDAP_URL> \",principal=\" <principal_distinguished_name> \",credential-reference= <credential_reference> )",
"/subsystem=elytron/dir-context=exampleDirContext:add(url=\"ldap://10.88.0.2\",principal=\"cn=admin,dc=wildfly,dc=org\",credential-reference={clear-text=\"secret\"}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/ldap-realm= <ldap_realm_name> add:(dir-context= <dir_context_name> ,identity-mapping=search-base-dn=\"ou= <organization_unit> ,dc= <domain_component> \",rdn-identifier=\" <relative_distinguished_name_identifier> \",user-password-mapper={from= <password_attribute_name> },attribute-mapping=[{filter-base-dn=\"ou= <organization_unit> ,dc= <domain_component> \",filter=\" <ldap_filter> \",from=\" <ldap_attribute_name> \",to=\" <identity_attribute_name> \"}]})",
"/subsystem=elytron/ldap-realm=exampleLDAPRealm:add(dir-context=exampleDirContext,identity-mapping={search-base-dn=\"ou=Users,dc=wildfly,dc=org\",rdn-identifier=\"uid\",user-password-mapper={from=\"userPassword\"},attribute-mapping=[{filter-base-dn=\"ou=Roles,dc=wildfly,dc=org\",filter=\"(&(objectClass=groupOfNames)(member={1}))\",from=\"cn\",to=\"Roles\"},{from=\"mail\",to=\"mail\"},{from=\"telephoneNumber\",to=\"telephoneNumber\"}]}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add(path= <file_path> )",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add(path=fs-realm-users,relative-to=jboss.server.config.dir) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity(identity= <user_name> )",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add-identity(identity=user1) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity-attribute(identity= <user_name> ,name= <roles_attribute_name> , value=[ <role_1> , <role_N> ])",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add-identity-attribute(identity=user1, name=Roles, value=[\"Admin\",\"Guest\"]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity-attribute(identity= <user_name> ,name= <attribute_name> , value=[ <attribute_value> ])",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add-identity-attribute(identity=user1, name=mail, value=[\"[email protected]\"]) /subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add-identity-attribute(identity=user1, name=website, value=[\"http://www.example.com/\"])",
"/subsystem=elytron/aggregate-realm= <aggregate_realm_name> :add(authentication-realm= <security_realm_for_authentication> , authorization-realms=[ <security_realm_for_authorization_1> , <security_realm_for_authorization_2> ,..., <security_realm_for_authorization_N> ])",
"/subsystem=elytron/aggregate-realm=exampleSecurityRealm:add(authentication-realm=exampleLDAPRealm,authorization-realms=[exampleLDAPRealm,exampleFileSystemRealm]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/simple-role-decoder= <role_decoder_name> :add(attribute= <attribute> )",
"/subsystem=elytron/simple-role-decoder=from-roles-attribute:add(attribute=Roles) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :add(default-realm= <aggregate_realm_name> ,permission-mapper=default-permission-mapper,realms=[{realm= <aggregate_realm_name> ,role-decoder=\" <role_decoder_name> \"}])",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(default-realm=exampleSecurityRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleSecurityRealm,role-decoder=\"from-roles-attribute\"}]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/caching-realm= <caching_realm_name> :add(realm= <realm_to_cache> )",
"/subsystem=elytron/caching-realm=exampleSecurityRealm:add(realm=exampleLDAPRealm)",
"/subsystem=elytron/security-domain= <security_domain_name> :add(default-realm= <caching_realm_name> ,permission-mapper=default-permission-mapper,realms=[{realm= <caching_realm_name> ,role-decoder=\" <role_decoder_name> \"}])",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(default-realm=exampleSecurityRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleSecurityRealm}]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :read-identity(name= <username> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:read-identity(name=user1) { \"outcome\" => \"success\", \"result\" => { \"name\" => \"user1\", \"attributes\" => {\"Roles\" => [\"Admin\"]}, \"roles\" => [\"Admin\"] } }",
"/subsystem=elytron/caching-realm= <caching_realm_name> :clear-cache",
"/subsystem=elytron/caching-realm=exampleSecurityRealm:clear-cache",
"/subsystem=elytron/distributed-realm=exampleSecurityRealm:add(realms=[exampleLDAPRealm,exampleFilesystemRealm])",
"dn: ou=Users,dc=wildfly,dc=org objectClass: organizationalUnit objectClass: top ou: Users dn: uid=user1,ou=Users,dc=wildfly,dc=org objectClass: top objectClass: person objectClass: inetOrgPerson cn: user1 sn: user1 uid: user1 userPassword: userPassword1 dn: ou=Roles,dc=wildfly,dc=org objectclass: top objectclass: organizationalUnit ou: Roles dn: cn=Admin,ou=Roles,dc=wildfly,dc=org objectClass: top objectClass: groupOfNames cn: Admin member: uid=user1,ou=Users,dc=wildfly,dc=org",
"/subsystem=elytron/dir-context= <dir_context_name> :add(url=\" <LDAP_URL> \",principal=\" <principal_distinguished_name> \",credential-reference= <credential_reference> )",
"/subsystem=elytron/dir-context=exampleDirContext:add(url=\"ldap://10.88.0.2\",principal=\"cn=admin,dc=wildfly,dc=org\",credential-reference={clear-text=\"secret\"}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/ldap-realm= <ldap_realm_name> add:(dir-context= <dir_context_name> ,identity-mapping=search-base-dn=\"ou= <organization_unit> ,dc= <domain_component> \",rdn-identifier=\" <relative_distinguished_name_identifier> \",user-password-mapper={from= <password_attribute_name> },attribute-mapping=[{filter-base-dn=\"ou= <organization_unit> ,dc= <domain_component> \",filter=\" <ldap_filter> \",from=\" <ldap_attribute_name> \",to=\" <identity_attribute_name> \"}]})",
"/subsystem=elytron/ldap-realm=exampleLDAPRealm:add(dir-context=exampleDirContext,identity-mapping={search-base-dn=\"ou=Users,dc=wildfly,dc=org\",rdn-identifier=\"uid\",user-password-mapper={from=\"userPassword\"},attribute-mapping=[{filter-base-dn=\"ou=Roles,dc=wildfly,dc=org\",filter=\"(&(objectClass=groupOfNames)(member={1}))\",from=\"cn\",to=\"Roles\"}]}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add(path= <file_path> )",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add(path=fs-realm-users,relative-to=jboss.server.config.dir) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity(identity= <user_name> )",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add-identity(identity=user2) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :set-password(identity= <user_name> , clear={password= <password> })",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:set-password(identity=user2, clear={password=\"passwordUser2\"}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity-attribute(identity= <user_name> , name= <roles_attribute_name> , value=[ <role_1> , <role_N> ])",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add-identity-attribute(identity=user2, name=Roles, value=[\"Admin\",\"Guest\"]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/distributed-realm= <distributed_realm_name> :add(realms=[ <security_realm_1> , <security_realm_2> , ..., <security_realm_N> ])",
"/subsystem=elytron/distributed-realm=exampleSecurityRealm:add(realms=[exampleLDAPRealm, exampleFileSystemRealm]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/simple-role-decoder= <role_decoder_name> :add(attribute= <attribute> )",
"/subsystem=elytron/simple-role-decoder=from-roles-attribute:add(attribute=Roles) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :add(realms=[{realm= <distributed_realm_name> ,role-decoder= <role_decoder_name> }],default-realm= <ldap_realm_name> ,permission-mapper= <permission_mapper> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(default-realm=exampleSecurityRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleSecurityRealm,role-decoder=\"from-roles-attribute\"}]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/failover-realm=exampleSecurityRealm:add(delegate-realm=exampleLDAPRealm,failover-realm=exampleFileSystemRealm)",
"dn: ou=Users,dc=wildfly,dc=org objectClass: organizationalUnit objectClass: top ou: Users dn: uid=user1,ou=Users,dc=wildfly,dc=org objectClass: top objectClass: person objectClass: inetOrgPerson cn: user1 sn: user1 uid: user1 userPassword: userPassword1 dn: ou=Roles,dc=wildfly,dc=org objectclass: top objectclass: organizationalUnit ou: Roles dn: cn=Admin,ou=Roles,dc=wildfly,dc=org objectClass: top objectClass: groupOfNames cn: Admin member: uid=user1,ou=Users,dc=wildfly,dc=org",
"/subsystem=elytron/dir-context= <dir_context_name> :add(url=\" <LDAP_URL> \",principal=\" <principal_distinguished_name> \",credential-reference= <credential_reference> )",
"/subsystem=elytron/dir-context=exampleDirContext:add(url=\"ldap://10.88.0.2\",principal=\"cn=admin,dc=wildfly,dc=org\",credential-reference={clear-text=\"secret\"}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/ldap-realm= <ldap_realm_name> add:(dir-context= <dir_context_name> ,identity-mapping=search-base-dn=\"ou= <organization_unit> ,dc= <domain_component> \",rdn-identifier=\" <relative_distinguished_name_identifier> \",user-password-mapper={from= <password_attribute_name> },attribute-mapping=[{filter-base-dn=\"ou= <organization_unit> ,dc= <domain_component> \",filter=\" <ldap_filter> \",from=\" <ldap_attribute_name> \",to=\" <identity_attribute_name> \"}]})",
"/subsystem=elytron/ldap-realm=exampleLDAPRealm:add(dir-context=exampleDirContext,identity-mapping={search-base-dn=\"ou=Users,dc=wildfly,dc=org\",rdn-identifier=\"uid\",user-password-mapper={from=\"userPassword\"},attribute-mapping=[{filter-base-dn=\"ou=Roles,dc=wildfly,dc=org\",filter=\"(&(objectClass=groupOfNames)(member={1}))\",from=\"cn\",to=\"Roles\"}]}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add(path= <file_path> )",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add(path=fs-realm-users,relative-to=jboss.server.config.dir) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity(identity= <user_name> )",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add-identity(identity=user1) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :set-password(identity= <user_name> , clear={password= <password> })",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:set-password(identity=user1, clear={password=\"passwordUser1\"}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity-attribute(identity= <user_name> ,name= <roles_attribute_name> , value=[ <role_1> , <role_N> ])",
"/subsystem=elytron/filesystem-realm=exampleFileSystemRealm:add-identity-attribute(identity=user1, name=Roles, value=[\"Admin\",\"Guest\"]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/failover-realm= <failover_realm_name> :add(delegate-realm= <realm_to_use_by_default> ,failover-realm= <realm_to_use_as_backup> )",
"/subsystem=elytron/failover-realm=exampleSecurityRealm:add(delegate-realm=exampleLDAPRealm,failover-realm=exampleFileSystemRealm) {\"outcome\" => \"success\"}",
"/subsystem=elytron/simple-role-decoder= <role_decoder_name> :add(attribute= <attribute> )",
"/subsystem=elytron/simple-role-decoder=from-roles-attribute:add(attribute=Roles) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :add(default-realm= <failover_realm_name> ,permission-mapper=default-permission-mapper,realms=[{realm= <failover_realm_name> ,role-decoder=\" <role_decoder_name> \"}])",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(default-realm=exampleSecurityRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleSecurityRealm,role-decoder=\"from-roles-attribute\"}]) {\"outcome\" => \"success\"}",
"test { 1 loginmodules.CustomLoginModule1 optional; 2 loginmodules.CustomLoginModule2 optional myOption1=true myOption2=exampleOption; 3 };",
"mvn archetype:generate -DgroupId= <group-to-which-your-application-belongs> -DartifactId= <name-of-your-application> -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-simple -DinteractiveMode=false",
"mvn archetype:generate -DgroupId=com.example.loginmodule -DartifactId=example-custom-login-module -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-simple -DinteractiveMode=false",
"cd <name-of-your-application>",
"cd example-custom-login-module",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>custom.loginmodules</groupId> <artifactId>custom-login-modules</artifactId> <version>1.0</version> <dependencies> <dependency> <groupId>org.wildfly.security</groupId> <artifactId>wildfly-elytron</artifactId> <version>1.17.2.Final</version> </dependency> <dependency> <groupId>jakarta.security.enterprise</groupId> <artifactId>jakarta.security.enterprise-api</artifactId> <version>3.0.0</version> </dependency> </dependencies> <properties> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> </properties> </project>",
"rm -rf src/site/ rm -rf src/test/",
"mvn install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.404 s [INFO] Finished at: 2022-04-28T13:55:18+05:30 [INFO] ------------------------------------------------------------------------",
"mkdir -p src/main/java/<path_based_on_artifactID>",
"mkdir -p src/main/java/com/example/loginmodule",
"cd src/main/java/<path_based_on_groupID>",
"cd src/main/java/com/example/loginmodule",
"rm App.java",
"package com.example.loginmodule; import org.wildfly.security.auth.principal.NamePrincipal; import javax.security.auth.Subject; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; import javax.security.auth.callback.NameCallback; import javax.security.auth.callback.PasswordCallback; import javax.security.auth.callback.UnsupportedCallbackException; import javax.security.auth.login.LoginException; import javax.security.auth.spi.LoginModule; import java.io.IOException; import java.security.Principal; import java.util.Arrays; import java.util.HashMap; import java.util.Map; public class ExampleCustomLoginModule implements LoginModule { private final Map<String, char[]> usersMap = new HashMap<String, char[]>(); private Principal principal; private Subject subject; private CallbackHandler handler; /** * In this example, identities are created as fixed Strings. * * The identities are: * user1 has the password passwordUser1 * user2 has the password passwordUser2 * * Use these credentials when you secure management interfaces * or applications with this login module. * * In a production login module, you would get the identities * from a data source. * */ @Override public void initialize(Subject subject, CallbackHandler callbackHandler, Map<String, ?> sharedState, Map<String, ?> options) { this.subject = subject; this.handler = callbackHandler; this.usersMap.put(\"user1\", \"passwordUser1\".toCharArray()); this.usersMap.put(\"user2\", \"passwordUser2\".toCharArray()); } @Override public boolean login() throws LoginException { // obtain the incoming username and password from the callback handler NameCallback nameCallback = new NameCallback(\"Username\"); PasswordCallback passwordCallback = new PasswordCallback(\"Password\", false); Callback[] callbacks = new Callback[]{nameCallback, passwordCallback}; try { this.handler.handle(callbacks); } catch (UnsupportedCallbackException | IOException e) { throw new LoginException(\"Error handling callback: \" + e.getMessage()); } final String username = nameCallback.getName(); this.principal = new NamePrincipal(username); final char[] password = passwordCallback.getPassword(); char[] storedPassword = this.usersMap.get(username); if (!Arrays.equals(storedPassword, password)) { throw new LoginException(\"Invalid password\"); } else { return true; } } /** * user1 is assigned the roles Admin, User and Guest. * In a production login module, you would get the identities * from a data source. * */ @Override public boolean commit() throws LoginException { if (this.principal.getName().equals(\"user1\")) { this.subject.getPrincipals().add(new Roles(\"Admin\")); this.subject.getPrincipals().add(new Roles(\"User\")); this.subject.getPrincipals().add(new Roles(\"Guest\")); } return true; } @Override public boolean abort() throws LoginException { return true; } @Override public boolean logout() throws LoginException { this.subject.getPrincipals().clear(); return true; } /** * Principal with simple classname 'Roles' will be mapped to the identity's attribute with name 'Roles'. */ private static class Roles implements Principal { private final String name; Roles(final String name) { this.name = name; } /** * @return name of the principal. This will be added as a value to the identity's attribute which has a name equal to the simple name of this class. In this example, this value will be added to the attribute with a name 'Roles'. */ public String getName() { return this.name; } } }",
"exampleConfiguration { com.example.loginmodule.ExampleCustomLoginModule optional; };",
"mvn package [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.321 s [INFO] Finished at: 2022-04-28T14:16:03+05:30 [INFO] ------------------------------------------------------------------------",
"module add --name= <name_of_the_login_moudle> --resources= <path_to_the_login_module_jar> --dependencies=org.wildfly.security.elytron",
"module add --name=exampleLoginModule --resources= <path_to_login_module> /custom-login-modules-1.0.jar --dependencies=org.wildfly.security.elytron",
"/subsystem=elytron/jaas-realm= <jaas_realm_name> :add(entry= <entry-name> ,path= <path_to_module_config_file> ,module= <name_of_the_login_module> ,callback-handler= <name_of_the_optional_callback_handler> )",
"/subsystem=elytron/jaas-realm=exampleSecurityRealm:add(entry=exampleConfiguration,path= <path_to_login_module> /JAAS-login-modules.conf,module=exampleLoginModule)",
"/subsystem=elytron/security-domain= <security_domain_name> :add(default-realm= <jaas_realm_name> ,realms=[{realm= <jaas_realm_name> }],permission-mapper=default-permission-mapper)",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(default-realm=exampleSecurityRealm,realms=[{realm=exampleSecurityRealm}],permission-mapper=default-permission-mapper) {\"outcome\" => \"success\"}"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/securing_applications_and_management_interfaces_using_multiple_identity_stores/configuring_identity_stores |
Installing on Azure Stack Hub | Installing on Azure Stack Hub OpenShift Container Platform 4.16 Installing OpenShift Container Platform on Azure Stack Hub Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure_stack_hub/index |
3.7. Configuring IP Networking from the Kernel Command line | 3.7. Configuring IP Networking from the Kernel Command line When connecting to the root file system on an iSCSI target from an interface, the network settings are not configured on the installed system. To resolve this problem: Install the dracut utility. For information on using dracut , see the Red Hat Enterprise Linux System Administrator's Guide . Set the configuration using the ip option on the kernel command line: dhcp - DHCP configuration dhcp6 - DHCP IPv6 configuration auto6 - automatic IPv6 configuration on , any - any protocol available in the kernel (default) none , off - no autoconfiguration, static network configuration For example: Set the name server configuration: The dracut utility sets up a network connection and generates new ifcfg files that can be copied to the /etc/sysconfig/network-scripts/ directory. | [
"ip<client-IP-number>:[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:<interface>:{dhcp|dhcp6|auto6|on|any|none|off}",
"ip=192.168.180.120:192.168.180.100:192.168.180.1:255.255.255.0::enp1s0:off",
"nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_from_the_kernel_command_line |
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 3.5-0 Wed Oct 30 2019 Red Hat Gluster Storage Documentation Team Updated documentation for Red Hat Gluster Storage 3.5 | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/appe-documentation-deployment_guide_for_public_cloud-revision_history |
Managing hybrid and multicloud resources | Managing hybrid and multicloud resources Red Hat OpenShift Data Foundation 4.13 Instructions for how to manage storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa). Red Hat Storage Documentation Team Abstract This document explains how to manage storage resources across a hybrid cloud or multicloud environment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. Chapter 2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. For information on accessing the RADOS Object Gateway (RGW) S3 endpoint, see Accessing the RADOS Object Gateway S3 endpoint . Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at Download RedHat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. You can access the relevant endpoint, access key, and secret access key in two ways: Section 2.1, "Accessing the Multicloud Object Gateway from the terminal" Section 2.2, "Accessing the Multicloud Object Gateway from the MCG command-line interface" For example: Accessing the MCG bucket(s) using the virtual-hosted style If the client application tries to access https:// <bucket-name> .s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com <bucket-name> is the name of the MCG bucket For example, https://mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com A DNS entry is needed for mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com to point to the S3 Service. Important Ensure that you have a DNS entry in order to point the client application to the MCG bucket(s) using the virtual-hosted style. 2.1. 
Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key ( AWS_ACCESS_KEY_ID value) and secret access key ( AWS_SECRET_ACCESS_KEY value). The output will look similar to the following: 1 access key ( AWS_ACCESS_KEY_ID value) 2 secret access key ( AWS_SECRET_ACCESS_KEY value) 3 MCG endpoint Note The output from the oc describe noobaa command lists the internal and external DNS names that are available. When using the internal DNS, the traffic is free. The external DNS uses Load Balancing to process the traffic, and therefore has a cost per hour. 2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the MCG command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Procedure Run the status command to access the endpoint, access key, and secret access key: The output will look similar to the following: 1 endpoint 2 access key 3 secret access key You now have the relevant endpoint, access key, and secret access key in order to connect to your applications. For example: If AWS S3 CLI is the application, the following command will list the buckets in OpenShift Data Foundation: Chapter 3. Adding storage resources for hybrid or Multicloud 3.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Enter an Endpoint . This is optional. Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 3.2, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage -> Data Foundation . Click the Backing Store tab to view all the backing stores. 3.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across the cloud provider and clusters. Add a backing storage that can be used by the MCG. 
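Before adding a new resource, it can be useful to see what the deployment already provides. One way to list the existing backing stores and their phase from the command line (a sketch, assuming the default openshift-storage namespace): oc get backingstore -n openshift-storage The backing store created at installation time (typically noobaa-default-backing-store) is listed here alongside any stores you add with the procedures below.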
Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 3.2.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 3.2.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 3.2.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 3.2.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 3.2.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 3.3, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 3.2.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the step. Apply the following YAML for a specific backing store: <bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the step. 3.2.2. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For example, For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> An IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. 
This argument indicates MCG about the bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using an YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. Apply the following YAML for a specific backing store: <bucket-name> an existing IBM COS bucket name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to MCG about the endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 3.2.3. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An AZURE account key and account name you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name of backingstore secret. Apply the following YAML for a specific backing store: <blob-container-name> An existing Azure blob container name. This argument indicates to the MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> with the name of the secret created in the step. 3.2.4. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. 
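The YAML variant of the following procedure expects the service account key encoded in Base64. One way to produce that value (a sketch; service-account-key.json is a placeholder for the key file downloaded from GCP): base64 -w 0 service-account-key.json Paste the single-line output in place of <GCP PRIVATE KEY ENCODED IN BASE64> in the secret.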
Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG about the bucket to use as a target bucket for its backing store, and subsequently, data storage dfdand administration. <backingstore-secret-name> The name of the secret created in the step. 3.2.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name > The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size in GB of each volume. <CPU REQUEST> Guaranteed amount of CPU requested in CPU unit m . <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed in CPU unit m . <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The local storage class name, recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: 3.3. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. 
Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 3.4. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class (OBC). Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage -> Data Foundation . Click the Bucket Class tab and search the new Bucket Class. 3.5. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file, make the required changes in this file and click Save . 3.6. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . 
Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, clear the name of the backing store. Click Save . Chapter 4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Red Hat OpenShift Data Foundation 4.6 onwards supports the following namespace bucket operations: ListObjectVersions ListObjects PutObject CopyObject ListParts CreateMultipartUpload CompleteMultipartUpload UploadPart UploadPartCopy AbortMultipartUpload GetObjectAcl GetObject HeadObject DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. 
<namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. 
After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. 
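Before continuing with the IBM COS procedure, the CLI steps used in sections 4.2.3 and 4.2.4 can be sketched as follows. This is a hedged outline only: the flag spellings mirror the backingstore create commands quoted later in this document and should be confirmed with noobaa namespacestore create --help for your CLI version.
# AWS S3 namespace store
noobaa namespacestore create aws-s3 <namespacestore> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage
# IBM COS namespace store
noobaa namespacestore create ibm-cos <namespacestore> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage
# Namespace bucket class (single or multi policy), then the bucket itself
noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage
noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource=<write-resource> --read-resources=<read-resources> -n openshift-storage
noobaa obc create <bucket-name> --bucketclass <custom-bucket-class> -n openshift-storage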
Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage -> Data Foundation . Click the Namespace Store tab to create the namespacestore resources to be used in the namespace bucket. Click Create namespace store . Enter a namespacestore name. Choose a provider. Choose a region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Choose a target bucket. Click Create . Verify that the namespacestore is in the Ready state. Repeat these steps until you have the desired amount of resources. Click the Bucket Class tab -> Create a new Bucket Class . Select the Namespace radio button. Enter a Bucket Class name. (Optional) Add description. Click Next . Choose a namespace policy type for your namespace bucket, and then click Next . Select the target resources. If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Click Next . Review your new bucket class, and then click Create Bucketclass .
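If you also want to confirm the result from the command line, a quick check might look like the following. The object names are placeholders, and the noobaa status subcommand syntax is an assumption to verify for your CLI version:
# List bucket classes and check their phase
oc -n openshift-storage get bucketclasses.noobaa.io
# Inspect a specific bucket class
noobaa bucketclass status <my-bucket-class> -n openshift-storage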
On the BucketClass page, verify that your newly created resource is in the Created phase. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click Multicloud Object Gateway -> Buckets -> Namespace Buckets tab . Click Create Namespace Bucket . On the Choose Name tab, specify a name for the namespace bucket and click . On the Set Placement tab: Under Read Policy , select the checkbox for each namespace resource created in the earlier step that the namespace bucket should read data from. If the namespace policy type you are using is Multi , then Under Write Policy , specify which namespace resource the namespace bucket should write data to. Click . Click Create . Verification steps Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name. 4.4. Sharing legacy application data with cloud native application using S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using the S3 operations. To share data you need to do the following: Export the pre-existing file system datasets, that is, RWX volume such as Ceph FileSystem (CephFS) or create a new file system datasets using the S3 protocol. Access file system datasets from both file system and S3 protocol. Configure S3 accounts and map them to the existing or a new file system unique identifiers (UIDs) and group identifiers (GIDs). 4.4.1. Creating a NamespaceStore to use a file system Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage -> Data Foundation . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If the folder name exists, then that folder is used to create the NamespaceStore or else a folder with that name is created. Click Create . Verify the NamespaceStore is in the Ready state. 4.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes. 
nsfs_account_config A mandatory field that indicates if the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false . Default value is false . If it is set to 'true', it limits you from accessing other types of buckets. uid The user ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem gid The group ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 4.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. Find the values of the subvolumePath and volumeHandle from the volumeAttributes . You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 
2 The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC. Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the step: <YAML_file> Specify the name of the YAML file. For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, /nsfs/legacy-namespace mountpoint: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create a MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as that of the legacy application. You can find it from the output. For example: Create a MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same which results in permission denied or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 4.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 4.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 4.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. A relabel of all the files take place and now the SELinux labels match with the openshift-storage deployment. 4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . 
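The substeps that follow can be condensed into a short command sketch; the service account, scc, and deployment names used here are placeholders rather than values from the original procedure:
# Create the service account and allow it to use the newly created scc
oc create serviceaccount <service_account_name>
oc adm policy add-scc-to-user <new_scc_name> -z <service_account_name>
# Point the legacy application deployment at the new service account
oc patch dc/<legacy_deployment_name> -p '{"spec":{"template":{"spec":{"serviceAccountName":"<service_account_name>"}}}}'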
Create a service account: <service_account_name> Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account. This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the security context to use at the SELinux label in the deployment configuration: Add the following lines: <security_context_value> You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod. For example: Ensure that the security context to be used at the SELinux label in the deployment configuration is specified correctly: For example: The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace. Chapter 5. Securing Multicloud Object Gateway 5.1. Changing the default account credentials to ensure better security in the Multicloud Object Gateway Change and rotate your Multicloud Object Gateway (MCG) account credentials using the command-line interface to prevent issues with applications, and to ensure better account security. 5.1.1. Resetting the noobaa account password Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface for easier management. For instructions, see Accessing the Multicloud Object Gateway with your applications . Procedure To reset the noobaa account password, run the following command: Example: Example output: Important To access the admin account credentials, run the noobaa status command from the terminal: 5.1.2. Regenerating the S3 credentials for the accounts Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface for easier management. For instructions, see Accessing the Multicloud Object Gateway with your applications . Procedure Get the account name. For listing the accounts, run the following command: Example output: Alternatively, run the oc get noobaaaccount command from the terminal: Example output: To regenerate the noobaa account S3 credentials, run the following command: Once you run the noobaa account regenerate command, it prompts a warning that says "This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials." , and asks for confirmation: Example: Example output: On approving, it regenerates the credentials and eventually prints them: 5.1.3. Regenerating the S3 credentials for the OBC Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface for easier management. For instructions, see Accessing the Multicloud Object Gateway with your applications . Procedure To get the OBC name, run the following command: Example output: Alternatively, run the oc get obc command from the terminal: Example output: To regenerate the noobaa OBC S3 credentials, run the following command: Once you run the noobaa obc regenerate command, it prompts a warning that says "This will invalidate all connections between the S3 clients and noobaa which are connected using the current credentials." , and asks for confirmation: Example: Example output: On approving, it regenerates the credentials and eventually prints them: 5.2.
Enabling secured mode deployment for Multicloud Object Gateway You can specify a range of IP addresses that should be allowed to reach the Multicloud Object Gateway (MCG) load balancer services to enable secured mode deployment. This helps to control the IP addresses that can access the MCG services. Note You can disable the MCG load balancer usage by setting the disableLoadBalancerService variable in the storagecluster CRD while deploying OpenShift Data Foundation using the command line interface. This helps to restrict MCG from creating any public resources for private clusters and to disable the MCG service EXTERNAL-IP . For more information, see the Red Hat Knowledgebase article Install Red Hat OpenShift Data Foundation 4.X in internal mode using command line interface . For information about disabling the MCG load balancer service after deploying OpenShift Data Foundation, see Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation . Prerequisites A running OpenShift Data Foundation cluster. In case of a bare metal deployment, ensure that the load balancer controller supports setting the loadBalancerSourceRanges attribute in the Kubernetes services. Procedure Edit the NooBaa custom resource (CR) to specify the range of IP addresses that can access the MCG services after deploying OpenShift Data Foundation. noobaa The NooBaa CR type that controls the NooBaa system deployment. noobaa The name of the NooBaa CR. For example: loadBalancerSourceSubnets A new field that can be added under spec in the NooBaa CR to specify the IP addresses that should have access to the NooBaa services. In this example, all the IP addresses that are in the subnet 10.0.0.0/16 or 192.168.10.0/32 will be able to access MCG S3 and security token service (STS) while the other IP addresses are not allowed to access. Verification steps To verify if the specified IP addresses are set, in the OpenShift Web Console, run the following command and check if the output matches with the IP addresses provided to MCG: Chapter 6. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add backing storage that can be used by the MCG. For information, see Chapter 3, Adding storage resources for hybrid or Multicloud . You can set up data mirroring by using the OpenShift UI, YAML or MCG command-line interface. See the following sections: Section 6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 6.2, "Creating bucket classes to mirror data using a YAML" 6.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure that you download the Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Chapter 9, Object Bucket Claim . Chapter 7.
Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Using an AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . 7.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found on the Download Red Hat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). Note To give access to certain buckets of MCG accounts, use AWS S3 bucket policies. For more information, see Using bucket policies in AWS documentation. Chapter 8. Multicloud Object Gateway bucket replication Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solution (S3, Azure, etc.). A replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix.
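For instance, a single rule in the replication policy JSON might look like the following sketch. The exact schema, in particular whether the prefix sits under a filter key, is an assumption to check against your MCG version:
[
  {
    "rule_id": "rule-1",
    "destination_bucket": "first.bucket",
    "filter": { "prefix": "" }
  }
]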
Configuring a complementing replication policy on the second bucket results in bidirectional replication. Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway, see Accessing the Multicloud Object Gateway with your applications . Download the Multicloud Object Gateway (MCG) command-line interface: Important Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in the case of IBM Power, use the following command: Alternatively, you can install the mcg package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Important Choose the correct Product Variant according to your architecture. Note Certain MCG features are only available in certain MCG versions, and the appropriate MCG CLI tool version must be used to fully utilize MCG's features. To replicate a bucket, see Replicating a bucket to another bucket . To set a bucket class replication policy, see Setting a bucket class replication policy . 8.1. Replicating a bucket to another bucket You can set the bucket replication policy in two ways: Replicating a bucket to another bucket using the MCG command-line interface . Replicating a bucket to another bucket using a YAML . 8.1.1. Replicating a bucket to another bucket using the MCG command-line interface Applications that require a Multicloud Object Gateway (MCG) bucket to have a specific replication policy can create an Object Bucket Claim (OBC) and define the replication policy parameter in a JSON file. Procedure From the MCG command-line interface, run the following command to create an OBC with a specific replication policy: <bucket-claim-name> Specify the name of the bucket claim. /path/to/json-file.json Is the path to a JSON file that defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . For example: 8.1.2. Replicating a bucket to another bucket using a YAML Applications that require a Multicloud Object Gateway (MCG) data bucket to have a specific replication policy can create an Object Bucket Claim (OBC) and add the spec.additionalConfig.replicationPolicy parameter to the OBC. Procedure Apply the following YAML: <desired-bucket-claim> Specify the name of the bucket claim. <desired-namespace> Specify the namespace. <desired-bucket-name> Specify the prefix of the bucket name. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Additional information For more information about OBCs, see Object Bucket Claim . 8.2. Setting a bucket class replication policy It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways: Setting a bucket class replication policy using the MCG command-line interface . Setting a bucket class replication policy using a YAML . 8.2.1.
Setting a bucket class replication policy using the MCG command-line interface Applications that require a Multicloud Object Gateway (MCG) bucket class to have a specific replication policy can create a bucketclass and define the replication-policy parameter in a JSON file. It is possible to set a bucket class replication policy for two types of bucket classes: Placement Namespace Procedure From the MCG command-line interface, run the following command: <bucketclass-name> Specify the name of the bucket class. <backingstores> Specify the name of a backingstore. It is possible to pass several backingstores separated by commas. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . For example: This example creates a placement bucket class with a specific replication policy defined in the JSON file. 8.2.2. Setting a bucket class replication policy using a YAML Applications that require a Multicloud Object Gateway (MCG) bucket class to have a specific replication policy can create a bucket class using the spec.replicationPolicy field. Procedure Apply the following YAML: This YAML is an example that creates a placement bucket class. Each Object bucket claim (OBC) object that is uploaded to the bucket is filtered based on the prefix and is replicated to first.bucket . <desired-app-label> Specify a label for the app. <desired-bucketclass-name> Specify the bucket class name. <desired-namespace> Specify the namespace in which the bucket class gets created. <backingstore> Specify the name of a backingstore. It is possible to pass several backingstores. "rule_id" Specify the ID number of the rule, for example, `{"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . 8.2.3. Enabling bucket replication deletion When creating a bucket replication policy, you may want to enable deletion so that when data is deleted from one bucket, the data is deleted from the destination bucket as well. This ensures that when data is deleted in one location, the other location has the same dataset. Important This feature requires logs-based replication, which is currently only supported using AWS. For more information about setting up AWS logs, see Enabling Amazon S3 server access logging . The AWS logs bucket needs to be created in the same region as the source NamespaceStore AWS bucket. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, navigate to Storage -> Object Bucket Claims . Click Create new Object bucket claim . In the Replication policy section, select the checkbox Sync deletion . Enter the name of the bucket that will contain the logs under Event log Bucket . Enter the prefix for the location of the logs in the logs bucket under Prefix . If the logs are stored in the root of the bucket, you can leave Prefix empty. Chapter 9. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. 
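A minimal claim, sketched here for illustration only (the storage class name is the MCG class referenced later in this chapter, and the field names should be verified against your cluster):
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <obc-name>
  namespace: <app-namespace>
spec:
  generateBucketName: <obc-bucket-name>
  storageClassName: openshift-storage.noobaa.io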
You can create an Object Bucket Claim in three ways: Section 9.1, "Dynamic Object Bucket Claim" Section 9.2, "Creating an Object Bucket Claim using the command line interface" Section 9.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 9.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket Claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Note The Multicloud Object Gateway endpoints use self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoints certificates with signed certificates. Get the certificate currently used by Multicloud Object Gateway by accessing the endpoint via the browser. See Accessing the Multicloud Object Gateway with your applications for more information. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. To automate the use of the OBC, add more lines to the YAML file. For example: The example is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that they are compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write, or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 9.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface.
Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . For example: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: For example: Run the following command to view the YAML file for the new OBC: For example: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: For example: The secret gives you the S3 access credentials. Run the following command to view the configuration map: For example: The configuration map contains the S3 endpoint information for your application. 9.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 9.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Bucket Claims -> Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page. 9.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Bucket Claims . Click the Action menu (...) to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . 9.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Buckets . 
Optional: You can also navigate to the details page of a specific OBC, and click the Resource link to view the object buckets for that OBC. Select the object bucket of which you want to see the details. Once selected, you are navigated to the Object Bucket Details page. 9.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Bucket Claims . Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . Chapter 10. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway (MCG) bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. AWS S3 IBM COS 10.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. In the case of IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface.
Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Chapter 11. Scaling Multicloud Object Gateway performance The Multicloud Object Gateway (MCG) performance may vary from one environment to another. In some cases, specific applications require faster performance which can be easily addressed by scaling S3 endpoints. The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service S3 endpoint service The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default that handles the heavy lifting data digestion in the MCG. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the MCG. 11.1. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scale automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. 
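You can observe this scaling with ordinary commands such as the following; the pod naming pattern and the presence of a horizontal pod autoscaler object are assumptions that may vary by version:
# See how many endpoint pods are currently running
oc -n openshift-storage get pods | grep noobaa-endpoint
# Inspect any autoscaler objects that drive the endpoint scaling
oc -n openshift-storage get hpa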
Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. 11.2. Scaling the Multicloud Object Gateway with storage nodes Prerequisites A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG). A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods. Procedure Log in to OpenShift Web Console . From the MCG user interface, click Overview -> Add Storage Resources . In the window, click Deploy Kubernetes Pool . In the Create Pool step, create the target pool for the future installed nodes. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is to be created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources -> Storage resources -> Resource name . 11.3. Increasing CPU and memory for PV pool resources MCG default configuration supports low resource consumption. However, when you need to increase CPU and memory to accommodate specific workloads and to increase MCG performance for the workloads, it is possible to configure the required values for CPU and memory in the OpenShift Web Console. Procedure In the OpenShift Web Console, click Installed operators -> ODF Operator . Click on the Backingstore tab. Select the new backingstore . Scroll down and click Edit PV pool resources . In the edit window that appears, edit the value of Mem , CPU , and Vol size based on the requirement. Click Save . Verification steps To verify, you can check the resource values of the PV pool pods. Chapter 12. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. In earlier versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create an RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore . Chapter 13. Using TLS certificates for applications accessing RGW Most S3 applications require a TLS certificate in forms such as an option included in the Deployment configuration file, passed as a file in the request, or stored in /etc/pki paths. TLS certificates for RADOS Object Gateway (RGW) are stored as a Kubernetes secret, and you need to fetch the details from the secret. Prerequisites A running OpenShift Data Foundation cluster.
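The procedure below reads these certificates out of a Kubernetes secret; a hedged sketch of the typical oc invocations (the secret name is the default mentioned in the procedure, and the data key name should be checked for your object store):
# View the secret and its data keys
oc -n openshift-storage get secret <secret_name> -o yaml
# Decode one data key (the external RGW secret described below uses the key "cert")
oc -n openshift-storage get secret <secret_name> -o jsonpath='{.data.cert}' | base64 -d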
Procedure For internal RGW server Get the TLS certificate and key from the Kubernetes secret: <secret_name> The default Kubernetes secret name is <objectstore_name>-cos-ceph-rgw-tls-cert . Specify the name of the object store. For external RGW server Get the TLS certificate from the Kubernetes secret: <secret_name> The default Kubernetes secret name is ceph-rgw-tls-cert and it is an opaque type of secret. The key value for storing the TLS certificates is cert . | [
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"oc describe noobaa -n openshift-storage",
"Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa status -n openshift-storage",
"INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] ✅ Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] ✅ Exists: Namespace \"openshift-storage\" INFO[0004] ✅ Exists: ServiceAccount \"noobaa\" INFO[0005] ✅ Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] ✅ Exists: RoleBinding \"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" INFO[0006] ✅ Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] ✅ Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] ✅ Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] ✅ Exists: NooBaa \"noobaa\" INFO[0007] ✅ Exists: StatefulSet \"noobaa-core\" INFO[0007] ✅ Exists: Service \"noobaa-mgmt\" INFO[0008] ✅ Exists: Service \"s3\" INFO[0008] ✅ Exists: Secret \"noobaa-server\" INFO[0008] ✅ Exists: Secret \"noobaa-operator\" INFO[0008] ✅ Exists: Secret \"noobaa-admin\" INFO[0009] ✅ Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] ✅ Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] ✅ (Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] ✅ (Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] ✅ (Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] ✅ (Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] ✅ (Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] ✅ (Optional) Exists: Route \"s3\" INFO[0011] ✅ Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] ✅ System Phase is \"Ready\" INFO[0011] ✅ Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# 
#-----------------# No OBC's found.",
"AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"",
"noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage",
"get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"",
"apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"noobaa account create <noobaa-account-name> [flags]",
"noobaa account create testaccount --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore",
"NooBaaAccount spec: allow_bucket_creation: true default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>",
"noobaa account list NAME DEFAULT_RESOURCE PHASE AGE testaccount noobaa-default-backing-store Ready 1m17s",
"oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001",
"oc get ns <application_namespace> -o yaml | grep scc",
"oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000",
"oc project <application_namespace>",
"oc project testnamespace",
"oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s",
"oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s",
"oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'",
"oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}",
"oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'",
"oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]",
"oc exec -it <pod_name> -- df <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv",
"oc exec -it <pod_name> -- ls -latrZ <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..",
"oc get pv | grep <pv_name>",
"oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s",
"oc get pv <pv_name> -o yaml",
"oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound",
"cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF",
"oc create -f <YAML_file>",
"oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created",
"oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s",
"oc project openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".",
"noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'",
"noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'",
"oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>",
"oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace",
"noobaa account create <user_account> --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'",
"noobaa account create leguser --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'",
"oc exec -it <pod_name> -- mkdir <mount_path> /nsfs",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs",
"noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'",
"noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'",
"oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>",
"oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..",
"oc exec -it <pod_name> -- ls -latrZ <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..",
"noobaa bucket delete <bucket_name>",
"noobaa bucket delete legacy-bucket",
"noobaa account delete <user_account>",
"noobaa account delete leguser",
"noobaa namespacestore delete <nsfs_namespacestore>",
"noobaa namespacestore delete legacy-namespace",
"oc delete pv <cephfs_pv_name>",
"oc delete pvc <cephfs_pvc_name>",
"oc delete pv cephfs-pv-legacy-openshift-storage",
"oc delete pvc cephfs-pvc-legacy",
"oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0",
"oc edit ns <appplication_namespace>",
"oc edit ns testnamespace",
"oc get ns <application_namespace> -o yaml | grep sa.scc.mcs",
"oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0",
"cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF",
"oc create -f scc.yaml",
"oc create serviceaccount <service_account_name>",
"oc create serviceaccount testnamespacesa",
"oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>",
"oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa",
"oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'",
"oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'",
"oc edit dc <pod_name> -n <application_namespace>",
"spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>",
"oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace",
"spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0",
"oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext",
"oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0",
"noobaa account passwd <noobaa_account_name> [options]",
"noobaa account passwd FATA[0000] ❌ Missing expected arguments: <noobaa_account_name> Options: --new-password='': New Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in t he shell history --old-password='': Old Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history --retype-new-password='': Retype new Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history Usage: noobaa account passwd <noobaa-account-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).",
"noobaa account passwd [email protected]",
"Enter old-password: [got 24 characters] Enter new-password: [got 7 characters] Enter retype-new-password: [got 7 characters] INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✅ Exists: NooBaa \"noobaa\" INFO[0017] ✅ Exists: Service \"noobaa-mgmt\" INFO[0017] ✅ Exists: Secret \"noobaa-operator\" INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✈\\ufe0f RPC: account.reset_password() Request: {Email:[email protected] VerificationPassword: * Password: *} WARN[0017] RPC: GetConnection creating connection to wss://localhost:58460/rpc/ 0xc000402ae0 INFO[0017] RPC: Connecting websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0017] RPC: Connected websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0020] ✅ RPC: account.reset_password() Response OK: took 2907.1ms INFO[0020] ✅ Updated: \"noobaa-admin\" INFO[0020] ✅ Successfully reset the password for the account \"[email protected]\"",
"-------------------- - Mgmt Credentials - -------------------- email : [email protected] password : ***",
"noobaa account list",
"NAME DEFAULT_RESOURCE PHASE AGE account-test noobaa-default-backing-store Ready 14m17s test2 noobaa-default-backing-store Ready 3m12s",
"oc get noobaaaccount",
"NAME PHASE AGE account-test Ready 15m test2 Ready 3m59s",
"noobaa account regenerate <noobaa_account_name> [options]",
"noobaa account regenerate FATA[0000] ❌ Missing expected arguments: <noobaa-account-name> Usage: noobaa account regenerate <noobaa-account-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).",
"noobaa account regenerate account-test",
"INFO[0000] You are about to regenerate an account's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? y/n",
"INFO[0015] ✅ Exists: Secret \"noobaa-account-account-test\" Connection info: AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : ***",
"noobaa obc list",
"NAMESPACE NAME BUCKET-NAME STORAGE-CLASS BUCKET-CLASS PHASE default obc-test obc-test-35800e50-8978-461f-b7e0-7793080e26ba default.noobaa.io noobaa-default-bucket-class Bound",
"oc get obc",
"NAME STORAGE-CLASS PHASE AGE obc-test default.noobaa.io Bound 38s",
"noobaa obc regenerate <bucket_claim_name> [options]",
"noobaa obc regenerate FATA[0000] ❌ Missing expected arguments: <bucket-claim-name> Usage: noobaa obc regenerate <bucket-claim-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).",
"noobaa obc regenerate obc-test",
"INFO[0000] You are about to regenerate an OBC's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? y/n",
"INFO[0022] ✅ RPC: bucket.read_bucket() Response OK: took 95.4ms ObjectBucketClaim info: Phase : Bound ObjectBucketClaim : kubectl get -n default objectbucketclaim obc-test ConfigMap : kubectl get -n default configmap obc-test Secret : kubectl get -n default secret obc-test ObjectBucket : kubectl get objectbucket obc-default-obc-test StorageClass : kubectl get storageclass default.noobaa.io BucketClass : kubectl get -n default bucketclass noobaa-default-bucket-class Connection info: BUCKET_HOST : s3.default.svc BUCKET_NAME : obc-test-35800e50-8978-461f-b7e0-7793080e26ba BUCKET_PORT : 443 AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : *** Shell commands: AWS S3 Alias : alias s3='AWS_ACCESS_KEY_ID=*** AWS_SECRET_ACCESS_KEY =*** aws s3 --no-verify-ssl --endpoint-url ***' Bucket status: Name : obc-test-35800e50-8978-461f-b7e0-7793080e26ba Type : REGULAR Mode : OPTIMAL ResiliencyStatus : OPTIMAL QuotaStatus : QUOTA_NOT_SET Num Objects : 0 Data Size : 0.000 B Data Size Reduced : 0.000 B Data Space Avail : 13.261 GB Num Objects Avail : 9007199254740991",
"oc edit noobaa -n openshift-storage noobaa",
"spec: loadBalancerSourceSubnets: s3: [\"10.0.0.0/16\", \"192.168.10.0/32\"] sts: - \"10.0.0.0/16\" - \"192.168.10.0/32\"",
"oc get svc -n openshift-storage <s3 | sts> -o=go-template='{{ .spec.loadBalancerSourceRanges }}'",
"noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror",
"noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror",
"additionalConfig: bucketclass: mirror-to-aws",
"{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }",
"aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy BucketPolicy",
"aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--default_resource='']",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json",
"[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]",
"noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <desired-bucket-claim> namespace: <desired-namespace> spec: generateBucketName: <desired-bucket-name> storageClassName: openshift-storage.noobaa.io additionalConfig: replicationPolicy: |+ { \"rules\": [ {\"rule_id\":\"rule-1\", \"destination_bucket\":\"first.bucket\" } ] }",
"noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json",
"[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]",
"noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: <desired-app-label> name: <desired-bucketclass-name> namespace: <desired-namespace> spec: placementPolicy: tiers: - backingstores: - <backingstore> placement: Spread replicationPolicy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io",
"apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY",
"oc apply -f <yaml.file>",
"oc get cm <obc-name> -o yaml",
"oc get secret <obc_name> -o yaml",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa obc create <obc-name> -n openshift-storage",
"INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"",
"oc get obc -n openshift-storage",
"NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s",
"oc get obc test21obc -o yaml -n openshift-storage",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound",
"oc get -n openshift-storage secret test21obc -o yaml",
"apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque",
"oc get -n openshift-storage cm test21obc -o yaml",
"apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.crt}' | base64 -d oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.key}' | base64 -d",
"oc get secrets/<secret_name> -o jsonpath='{.data.cert}' | base64 -d"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html-single/managing_hybrid_and_multicloud_resources/index |
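The object bucket claim shown above exposes its S3 coordinates through a Secret (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) and a ConfigMap (BUCKET_HOST, BUCKET_NAME, BUCKET_PORT). The following is a minimal sketch of wiring those values into the AWS CLI for a quick smoke test; the claim name test21obc and the openshift-storage namespace are taken from the example output above, and the flow assumes the aws CLI is installed on the workstation.

# Sketch: read the generated S3 credentials and endpoint of an ObjectBucketClaim
# and list the bucket contents.
OBC=test21obc
NS=openshift-storage
export AWS_ACCESS_KEY_ID=$(oc get secret "$OBC" -n "$NS" -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
export AWS_SECRET_ACCESS_KEY=$(oc get secret "$OBC" -n "$NS" -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
BUCKET_HOST=$(oc get cm "$OBC" -n "$NS" -o jsonpath='{.data.BUCKET_HOST}')
BUCKET_NAME=$(oc get cm "$OBC" -n "$NS" -o jsonpath='{.data.BUCKET_NAME}')
BUCKET_PORT=$(oc get cm "$OBC" -n "$NS" -o jsonpath='{.data.BUCKET_PORT}')
# --no-verify-ssl mirrors the examples above; use proper CA trust in production.
aws --endpoint "https://$BUCKET_HOST:$BUCKET_PORT" --no-verify-ssl s3 ls "s3://$BUCKET_NAME"

An empty listing is expected for a freshly created claim; the point of the check is that authentication and the endpoint resolve correctly.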
Power Monitoring | Power Monitoring OpenShift Container Platform 4.15 Configuring and using power monitoring for OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler spec: exporter: deployment: port: 9103 1 nodeSelector: kubernetes.io/os: linux 2 Tolerations: 3 - key: \"\" operator: \"Exists\" value: \"\" effect: \"\"",
"apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler status: exporter: conditions: 1 - lastTransitionTime: '2024-01-11T11:07:39Z' message: Reconcile succeeded observedGeneration: 1 reason: ReconcileSuccess status: 'True' type: Reconciled - lastTransitionTime: '2024-01-11T11:07:39Z' message: >- Kepler daemonset \"kepler-operator/kepler\" is deployed to all nodes and available; ready 2/2 observedGeneration: 1 reason: DaemonSetReady status: 'True' type: Available currentNumberScheduled: 2 2 desiredNumberScheduled: 2 3",
"apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler spec: exporter: deployment: redfish: secretRef: <secret_name> required 1 probeInterval: 60s 2 skipSSLVerify: false 3",
"<your_kubelet_node_name>,<redfish_username>,<redfish_password>,https://<redfish_ip_or_hostname>/",
"control-plane,exampleuser,examplepass,https://redfish.nodes.example.com worker-1,exampleuser,examplepass,https://redfish.nodes.example.com worker-2,exampleuser,examplepass,https://another.redfish.nodes.example.com",
"oc -n openshift-power-monitoring create secret generic redfish-secret --from-file=redfish.csv",
"apiVersion: v1 kind: Secret metadata: name: redfish-secret data: redfish.csv: YmFyCg== #"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/power_monitoring/index |
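Because Kepler reads the Redfish credentials from the redfish-secret object, a quick way to confirm that the CSV landed in the secret intact is to decode it back out. A minimal sketch, assuming the secret name, key, and namespace from the example above:

# Decode the stored CSV to confirm node names, BMC usernames, and Redfish endpoints.
oc -n openshift-power-monitoring get secret redfish-secret \
  -o jsonpath='{.data.redfish\.csv}' | base64 -d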
7.51. evolution-data-server | 7.51. evolution-data-server 7.51.1. RHBA-2013:0410 - evolution-data-server bug fix update Updated evolution-data-server packages that fix one bug are now available for Red Hat Enterprise Linux 6. The evolution-data-server packages provide a unified back end for applications which interact with contacts, task and calendar information. Evolution Data Server was originally developed as a back end for Evolution, but is now used by various other applications. Bug Fix BZ# 734048 The CalDav calendar back end was converting Uniform Resource Identifiers (URIs) with unescaped space characters or the "%20" string to "%2520". As a consequence, rendering the back end did not allow to contact the remote CalDav service that caused CalDav calendars to be inaccessible. This bug has been fixed and evolution-data-server works correctly in the described scenario. All users of evolution-data-server are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/evolution-data-server |
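The double-escaping described in BZ# 734048 is easy to reproduce outside Evolution: percent-encoding a URI path that already contains %20 turns the percent sign itself into %25. A small illustrative check, not part of the original erratum, using Python's urllib from the shell:

python3 -c 'from urllib.parse import quote; print(quote("/caldav/My%20Calendar/"))'
# prints /caldav/My%2520Calendar/ - the already-escaped space is escaped a second time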
1.2. Red Hat Virtualization Host | 1.2. Red Hat Virtualization Host A Red Hat Virtualization environment has one or more hosts attached to it. A host is a server that provides the physical hardware that virtual machines make use of. Red Hat Virtualization Host (RHVH) runs an optimized operating system installed using special, customized installation media specifically for creating virtualization hosts. Red Hat Enterprise Linux hosts are servers running a standard Red Hat Enterprise Linux operating system that has been configured after installation to permit use as a host. Both methods of host installation result in hosts that interact with the rest of the virtualized environment in the same way, and so will both be referred to as hosts. Figure 1.2. Host Architecture Kernel-based Virtual Machine (KVM) The Kernel-based Virtual Machine (KVM) is a loadable kernel module that provides full virtualization through the use of the Intel VT or AMD-V hardware extensions. Though KVM itself runs in kernel space, the guests running upon it run as individual QEMU processes in user space. KVM allows a host to make its physical hardware available to virtual machines. QEMU QEMU is a multi-platform emulator used to provide full system emulation. QEMU emulates a full system, for example a PC, including one or more processors, and peripherals. QEMU can be used to launch different operating systems or to debug system code. QEMU, working in conjunction with KVM and a processor with appropriate virtualization extensions, provides full hardware-assisted virtualization. Red Hat Virtualization Manager Host Agent, VDSM In Red Hat Virtualization, VDSM initiates actions on virtual machines and storage. It also facilitates inter-host communication. VDSM monitors host resources such as memory, storage, and networking. Additionally, VDSM manages tasks such as virtual machine creation, statistics accumulation, and log collection. A VDSM instance runs on each host and receives management commands from the Red Hat Virtualization Manager using the re-configurable port 54321. VDSM-REG VDSM uses VDSM-REG to register each host with the Red Hat Virtualization Manager. VDSM-REG supplies information about itself and its host using port 80 or port 443. libvirt Libvirt facilitates the management of virtual machines and their associated virtual devices. When Red Hat Virtualization Manager initiates virtual machine life-cycle commands (start, stop, reboot), VDSM invokes libvirt on the relevant host machines to execute them. Storage Pool Manager, SPM The Storage Pool Manager (SPM) is a role assigned to one host in a data center. The SPM host has sole authority to make all storage domain structure metadata changes for the data center. This includes creation, deletion, and manipulation of virtual disks, snapshots, and templates. It also includes allocation of storage for sparse block devices on a Storage Area Network (SAN). The role of SPM can be migrated to any host in a data center. As a result, all hosts in a data center must have access to all the storage domains defined in the data center. Red Hat Virtualization Manager ensures that the SPM is always available. In case of storage connectivity errors, the Manager re-assigns the SPM role to another host. Guest Operating System Guest operating systems do not need to be modified to be installed on virtual machines in a Red Hat Virtualization environment. The guest operating system, and any applications on the guest, are unaware of the virtualized environment and run normally.
Red Hat provides enhanced device drivers that allow faster and more efficient access to virtualized devices. You can also install the Red Hat Virtualization Guest Agent on guests, which provides enhanced guest information to the management console. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/red_hat_virtualization_host |
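Because the Manager reaches VDSM over port 54321 and VDSM-REG over port 80 or 443, a quick sanity check on a host is to confirm that the VDSM daemon is running and listening. A minimal sketch using standard RHEL tooling, not taken from the reference above:

# Confirm the VDSM daemon is running and listening for Manager connections
systemctl status vdsmd
ss -tlnp | grep 54321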
Console APIs | Console APIs OpenShift Container Platform 4.18 Reference guide for console APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/console_apis/index |
Chapter 6. Starting to use Red Hat Quay | Chapter 6. Starting to use Red Hat Quay With Red Hat Quay now running, you can: Select Tutorial from the Quay home page to try the 15-minute tutorial. In the tutorial, you learn to log into Quay, start a container, create images, push repositories, view repositories, and change repository permissions with Quay. Refer to Use Red Hat Quay for information on working with Red Hat Quay repositories. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploy_red_hat_quay_-_high_availability/starting_to_use_red_hat_quay
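Beyond the built-in tutorial, the usual first workflow is authenticating with the registry, tagging an image, and pushing it. A minimal sketch with podman; the registry hostname and the <user> namespace below are placeholders for your deployment, not values from this guide:

podman login quay-server.example.com
podman pull registry.access.redhat.com/ubi9/ubi-minimal
podman tag registry.access.redhat.com/ubi9/ubi-minimal quay-server.example.com/<user>/ubi-minimal:test
podman push quay-server.example.com/<user>/ubi-minimal:test

After the push completes, the repository appears under your account in the Quay web console, where its permissions can be adjusted as described in the tutorial.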
Chapter 67. Microsoft SQL Server Sink | Chapter 67. Microsoft SQL Server Sink Send data to a Microsoft SQL Server Database. This Kamelet expects a JSON as body. The mapping between the JSON fields and parameters is done by key, so if you have the following query: 'INSERT INTO accounts (username,city) VALUES (:#username,:#city)' The Kamelet needs to receive as input something like: '{ "username":"oscerd", "city":"Rome"}' 67.1. Configuration Options The following table summarizes the configuration options available for the sqlserver-sink Kamelet: Property Name Description Type Default Example databaseName * Database Name The Database Name we are pointing string password * Password The password to use for accessing a secured SQL Server Database string query * Query The Query to execute against the SQL Server Database string "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName * Server Name Server Name for the data source string "localhost" username * Username The username to use for accessing a secured SQL Server Database string serverPort Server Port Server Port for the data source string 1433 Note Fields marked with an asterisk (*) are mandatory. 67.2. Dependencies At runtime, the sqlserver-sink Kamelet relies upon the presence of the following dependencies: camel:jackson camel:kamelet camel:sql mvn:org.apache.commons:commons-dbcp2:2.7.0.redhat-00001 mvn:com.microsoft.sqlserver:mssql-jdbc:9.2.1.jre11 67.3. Usage This section describes how you can use the sqlserver-sink . 67.3.1. Knative Sink You can use the sqlserver-sink Kamelet as a Knative sink by binding it to a Knative object. sqlserver-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sqlserver-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sqlserver-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 67.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 67.3.1.2. Procedure for using the cluster CLI Save the sqlserver-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f sqlserver-sink-binding.yaml 67.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel sqlserver-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 67.3.2. Kafka Sink You can use the sqlserver-sink Kamelet as a Kafka sink by binding it to a Kafka topic. sqlserver-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sqlserver-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sqlserver-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 67.3.2.1. 
Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 67.3.2.2. Procedure for using the cluster CLI Save the sqlserver-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f sqlserver-sink-binding.yaml 67.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sqlserver-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 67.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/sqlserver-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sqlserver-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sqlserver-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"",
"apply -f sqlserver-sink-binding.yaml",
"kamel bind channel:mychannel sqlserver-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sqlserver-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sqlserver-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"",
"apply -f sqlserver-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sqlserver-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/microsoft-sql-server-sink |
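The INSERT query used throughout this Kamelet example assumes that an accounts table with username and city columns already exists in the target database. A minimal sketch of creating it ahead of time with the sqlcmd client; the column types and sizes are illustrative assumptions, only the column names are dictated by the JSON-to-parameter mapping described above:

sqlcmd -S localhost -U <username> -P <password> -d <databaseName> \
  -Q "CREATE TABLE accounts (username NVARCHAR(128), city NVARCHAR(128));"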
Chapter 4. Using the Repository custom resource | Chapter 4. Using the Repository custom resource The Repository custom resource (CR) has the following primary functions: Inform Pipelines as Code about processing an event from a URL. Inform Pipelines as Code about the namespace for the pipeline runs. Reference an API secret, username, or an API URL necessary for Git provider platforms when using webhook methods. Provide the last pipeline run status for a repository. 4.1. Creating the Repository custom resource You can use the tkn pac CLI or other alternative methods to create a Repository custom resource (CR) inside the target namespace. For example: cat <<EOF|kubectl create -n my-pipeline-ci -f- 1 apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: project-repository spec: url: "https://github.com/<repository>/<project>" EOF 1 my-pipeline-ci is the target namespace. Whenever there is an event coming from the URL such as https://github.com/<repository>/<project> , Pipelines as Code matches it and then starts checking out the content of the <repository>/<project> repository for the pipeline run to match the content in the .tekton/ directory. Note You must create the Repository CR in the same namespace where pipelines associated with the source code repository will be executed; it cannot target a different namespace. If multiple Repository CRs match the same event, Pipelines as Code processes only the oldest one. If you need to match a specific namespace, add the pipelinesascode.tekton.dev/target-namespace: "<mynamespace>" annotation. Such explicit targeting prevents a malicious actor from executing a pipeline run in a namespace to which they do not have access. 4.2. Creating the global Repository custom resource Optionally, you can create a global Repository custom resource (CR) in the namespace where OpenShift Pipelines is installed, normally openshift-pipelines . If you create this CR, the settings that you specify in it apply by default to all Repository CRs that you create. Important The global Repository CR is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have administrator access to the openshift-pipelines namespace. You logged on to the OpenShift cluster using the oc command line utility. Procedure Create a Repository CR named pipeline-as-code in the openshift-pipelines namespace. Specify all the required default settings in this CR. Example command to create the CR USD cat <<EOF|oc create -n openshift-pipelines -f - apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: pipelines-as-code spec: git_provider: secret: name: "gitlab-webhook-config" key: "provider.token" webhook_secret: name: "gitlab-webhook-config" key: "webhook.secret" EOF In this example, all Repository CRs that you create include the common secrets for accessing your GitLab repositories. You can set different repository URLs and other settings in the CRs. 4.3. 
Setting concurrency limits You can use the concurrency_limit spec in the Repository custom resource definition (CRD) to define the maximum number of pipeline runs running simultaneously for a repository. apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: # ... concurrency_limit: <number> # ... If there are multiple pipeline runs matching an event, the pipeline runs that match the event start in an alphabetical order. For example, if you have three pipeline runs in the .tekton directory and you create a pull request with a concurrency_limit of 1 in the repository configuration, then all the pipeline runs are executed in an alphabetical order. At any given time, only one pipeline run is in the running state while the rest are queued. 4.4. Changing the source branch for the pipeline definition By default, when processing a push event or a pull request event, Pipelines as Code fetches the pipeline definition from the branch that triggered the event. You can use the pipelinerun_provenance setting in the Repository custom resource definition (CRD) to fetch the definition from the default branch configured on the Git repository provider, such as main , master , or trunk . apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: # ... settings: pipelinerun_provenance: "default_branch" # ... Note You can use this setting as a security precaution. With the default behaviour, Pipelines as Code uses the pipeline definition in the submitted pull request. With the default-branch setting, the pipeline definition must be merged into the default branch before it is run. This requirement ensures maximum possible verification of any changes during merge review. 4.5. Custom parameter expansion You can use Pipelines as Code to expand a custom parameter within your PipelineRun resource by using the params field. You can specify a value for the custom parameter inside the template of the Repository custom resource (CR). The specified value replaces the custom parameter in your pipeline run. You can use custom parameters in the following scenarios: To define a URL parameter, such as a registry URL that varies based on a push or a pull request. To define a parameter, such as an account UUID that an administrator can manage without necessitating changes to the PipelineRun execution in the Git repository. Note Use the custom parameter expansion feature only when you cannot use the Tekton PipelineRun parameters because Tekton parameters are defined in a Pipeline resource and customized alongside it inside a Git repository. However, custom parameters are defined and customized where the Repository CR is located. So, you cannot manage your CI/CD pipeline from a single point. The following example shows a custom parameter named company in the Repository CR: ... spec: params: - name: company value: "ABC Company" ... The value ABC Company replaces the parameter name company in your pipeline run and in the remotely fetched tasks. You can also retrieve the value for a custom parameter from a Kubernetes secret, as shown in the following example: ... spec: params: - name: company secretRef: name: my-secret key: companyname ... Pipelines as Code parses and uses custom parameters in the following manner: If you have a value and a secretRef defined, Pipelines as Code uses the value . If you do not have a name in the params section, Pipelines as Code does not parse the parameter. 
If you have multiple params with the same name, Pipelines as Code uses the last parameter. You can also define a custom parameter and expand it only when the conditions specified in a CEL filter are matched. The following example shows a CEL filter applied to a custom parameter named company when a pull request event is triggered: ... spec: params: - name: company value: "ABC Company" filter: - name: event value: | pac.event_type == "pull_request" ... Note When you have multiple parameters with the same name and different filters, Pipelines as Code uses the first parameter that matches the filter. As a result, Pipelines as Code can expand parameters differently for different event types. For example, you can combine a push and a pull request event. | [
"cat <<EOF|kubectl create -n my-pipeline-ci -f- 1 apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: project-repository spec: url: \"https://github.com/<repository>/<project>\" EOF",
"cat <<EOF|oc create -n openshift-pipelines -f - apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: pipelines-as-code spec: git_provider: secret: name: \"gitlab-webhook-config\" key: \"provider.token\" webhook_secret: name: \"gitlab-webhook-config\" key: \"webhook.secret\" EOF",
"apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: my-repo namespace: target-namespace spec: concurrency_limit: <number>",
"apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: my-repo namespace: target-namespace spec: settings: pipelinerun_provenance: \"default_branch\"",
"spec: params: - name: company value: \"ABC Company\"",
"spec: params: - name: company secretRef: name: my-secret key: companyname",
"spec: params: - name: company value: \"ABC Company\" filter: - name: event value: | pac.event_type == \"pull_request\""
] | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/pipelines_as_code/using-repository-crd |
Chapter 7. Installing a cluster on AWS in a restricted network | Chapter 7. Installing a cluster on AWS in a restricted network In OpenShift Container Platform version 4.13, you can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing Amazon Virtual Private Cloud (VPC). 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 7.2. About installations in restricted networks In OpenShift Container Platform 4.13, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 7.2.1. 
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. About using a custom VPC In OpenShift Container Platform 4.13, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 7.3.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. 
Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 7.3.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. 
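If you want to check these subnet properties yourself before you run the installation program, you can query them with the AWS CLI. The following commands are a minimal sketch rather than part of the documented procedure, and the subnet IDs are placeholders that you must replace with your own values:
# Show the availability zone and CIDR block for each subnet you plan to supply
$ aws ec2 describe-subnets \
    --subnet-ids subnet-0example1 subnet-0example2 subnet-0example3 \
    --query 'Subnets[].[SubnetId,AvailabilityZone,CidrBlock]' \
    --output table
# List the tags on a subnet to confirm that it has a free tag slot and no conflicting Name tag
$ aws ec2 describe-subnets \
    --subnet-ids subnet-0example1 \
    --query 'Subnets[].Tags' \
    --output json
Comparing this output with the machine network CIDR and availability zones that you plan to set in the install-config.yaml file can surface mismatches before the installation program reports them.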
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 7.3.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 7.3.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 7.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 
Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the subnets for the VPC to install the cluster in: subnets: - subnet-1 - subnet-2 - subnet-3 Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 7.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. 
The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . 
Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings platform.aws.lbType Required to set the NLB load balancer type in AWS. 
Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough , or Manual . Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 7.6.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 7.4. Optional AWS parameters Parameter Description Values compute.platform.aws.amiID The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. compute.platform.aws.iamRole A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. compute.platform.aws.rootVolume.iops The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . compute.platform.aws.rootVolume.size The size in GiB of the root volume. Integer, for example 500 . compute.platform.aws.rootVolume.type The type of the root volume. Valid AWS EBS volume type , such as io1 . compute.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . compute.platform.aws.type The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. compute.platform.aws.zones The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . compute.aws.region The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . 
You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. controlPlane.platform.aws.amiID The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. controlPlane.platform.aws.iamRole A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. controlPlane.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . controlPlane.platform.aws.type The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. controlPlane.platform.aws.zones The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . controlPlane.aws.region The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . platform.aws.amiID The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. platform.aws.hostedZone An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . platform.aws.serviceEndpoints.name The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name. platform.aws.serviceEndpoints.url The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. Valid AWS service endpoint URL. platform.aws.userTags A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. 
platform.aws.propagateUserTags A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . platform.aws.subnets If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. For clusters that use AWS Local Zones, you must add AWS Local Zone subnets to this list to ensure edge machine pool creation. Valid subnet IDs. 7.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.6.3. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 12 14 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. 
The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 23 Provide the contents of the certificate file that you used for your mirror registry. 24 Provide the imageContentSources section from the output of the command to mirror the repository. 7.6.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
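Before you initialize the deployment, you can optionally confirm which AWS identity your current credentials or profile resolve to, which makes a missing-permissions failure easier to diagnose. This check is a sketch and is not part of the documented procedure; the profile name is a placeholder:
# Print the account ID, user ID, and ARN behind the active AWS credentials
$ aws sts get-caller-identity
# Check a specific named profile instead of the default credentials
$ aws sts get-caller-identity --profile <aws_profile_name>
The ARN in the output should belong to the IAM user or role that you granted the installer permissions to, and the credentials behind it should be long-lived, key-based credentials rather than a temporary session token.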
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.10. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 7.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. 
If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 7.12. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"subnets: - subnet-1 - subnet-2 - subnet-3",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_aws/installing-restricted-networks-aws-installer-provisioned |
Chapter 6. LVM Configuration Examples | Chapter 6. LVM Configuration Examples This chapter provides some basic LVM configuration examples. 6.1. Creating an LVM Logical Volume on Three Disks This example creates an LVM logical volume called new_logical_volume that consists of the disks at /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . 6.1.1. Creating the Physical Volumes To use disks in a volume group, you label them as LVM physical volumes. Warning This command destroys any data on /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . 6.1.2. Creating the Volume Group The following command creates the volume group new_vol_group . You can use the vgs command to display the attributes of the new volume group. 6.1.3. Creating the Logical Volume The following command creates the logical volume new_logical_volume from the volume group new_vol_group . This example creates a logical volume that uses 2GB of the volume group. 6.1.4. Creating the File System The following command creates a GFS2 file system on the logical volume. The following commands mount the logical volume and report the file system disk space usage. | [
"pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 Physical volume \"/dev/sda1\" successfully created Physical volume \"/dev/sdb1\" successfully created Physical volume \"/dev/sdc1\" successfully created",
"vgcreate new_vol_group /dev/sda1 /dev/sdb1 /dev/sdc1 Volume group \"new_vol_group\" successfully created",
"vgs VG #PV #LV #SN Attr VSize VFree new_vol_group 3 0 0 wz--n- 51.45G 51.45G",
"lvcreate -L2G -n new_logical_volume new_vol_group Logical volume \"new_logical_volume\" created",
"mkfs.gfs2 -plock_nolock -j 1 /dev/new_vol_group/new_logical_volume This will destroy any data on /dev/new_vol_group/new_logical_volume. Are you sure you want to proceed? [y/n] y Device: /dev/new_vol_group/new_logical_volume Blocksize: 4096 Filesystem Size: 491460 Journals: 1 Resource Groups: 8 Locking Protocol: lock_nolock Lock Table: Syncing All Done",
"mount /dev/new_vol_group/new_logical_volume /mnt df Filesystem 1K-blocks Used Available Use% Mounted on /dev/new_vol_group/new_logical_volume 1965840 20 1965820 1% /mnt"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_examples |
Chapter 4. Distributed tracing platform (Jaeger) | Chapter 4. Distributed tracing platform (Jaeger) 4.1. Installing Warning The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see "Migrating" in the Red Hat build of OpenTelemetry documentation, "Installing" in the Red Hat build of OpenTelemetry documentation, and "Installing" in the distributed tracing platform (Tempo) documentation. You can install Red Hat OpenShift distributed tracing platform on OpenShift Container Platform in either of two ways: You can install Red Hat OpenShift distributed tracing platform as part of Red Hat OpenShift Service Mesh. Distributed tracing is included by default in the Service Mesh installation. To install Red Hat OpenShift distributed tracing platform as part of a service mesh, follow the Red Hat Service Mesh Installation instructions. You must install Red Hat OpenShift distributed tracing platform in the same namespace as your service mesh, that is, the ServiceMeshControlPlane and the Red Hat OpenShift distributed tracing platform resources must be in the same namespace. If you do not want to install a service mesh, you can use the Red Hat OpenShift distributed tracing platform Operators to install distributed tracing platform by itself. To install Red Hat OpenShift distributed tracing platform without a service mesh, use the following instructions. 4.1.1. Prerequisites Before you can install Red Hat OpenShift distributed tracing platform, review the installation activities, and ensure that you meet the prerequisites: Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.16 overview . Install OpenShift Container Platform 4.16. Install OpenShift Container Platform 4.16 on AWS Install OpenShift Container Platform 4.16 on user-provisioned AWS Install OpenShift Container Platform 4.16 on bare metal Install OpenShift Container Platform 4.16 on vSphere Install the version of the oc CLI tool that matches your OpenShift Container Platform version and add it to your path. An account with the cluster-admin role. 4.1.2. Red Hat OpenShift distributed tracing platform installation overview The steps for installing Red Hat OpenShift distributed tracing platform are as follows: Review the documentation and determine your deployment strategy. If your deployment strategy requires persistent storage, install the OpenShift Elasticsearch Operator via the OperatorHub. Install the Red Hat OpenShift distributed tracing platform (Jaeger) Operator via the OperatorHub. Modify the custom resource YAML file to support your deployment strategy. 
Deploy one or more instances of Red Hat OpenShift distributed tracing platform (Jaeger) to your OpenShift Container Platform environment. 4.1.3. Installing the OpenShift Elasticsearch Operator The default Red Hat OpenShift distributed tracing platform (Jaeger) deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing platform, giving demonstrations, or using Red Hat OpenShift distributed tracing platform (Jaeger) in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform (Jaeger) in production, you must install and configure a persistent storage option, in this case, Elasticsearch. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster. Note The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing platform Operators are installed in the openshift-operators namespace. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . On the Installed Operators page, select the openshift-operators-redhat project. Wait for the InstallSucceeded status of the OpenShift Elasticsearch Operator before continuing. 4.1.4. Installing the Red Hat OpenShift distributed tracing platform Operator You can install the Red Hat OpenShift distributed tracing platform Operator through the OperatorHub . By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. 
You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. If you require persistent storage, you must install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Search for the Red Hat OpenShift distributed tracing platform Operator by entering distributed tracing platform in the search field. Select the Red Hat OpenShift distributed tracing platform Operator, which is provided by Red Hat , to display information about the Operator. Click Install . For the Update channel on the Install Operator page, select stable to automatically update the Operator when new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. Note If you accept this default, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of this Operator when a new version of the Operator becomes available. If you select Manual updates, the OLM creates an update request when a new version of the Operator becomes available. To update the Operator to the new version, you must then manually approve the update request as a cluster administrator. The Manual approval strategy requires a cluster administrator to manually approve Operator installation and subscription. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait for the Succeeded status of the Red Hat OpenShift distributed tracing platform Operator before continuing. 4.2. Configuring Warning The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see "Migrating" in the Red Hat build of OpenTelemetry documentation, "Installing" in the Red Hat build of OpenTelemetry documentation, and "Installing" in the distributed tracing platform (Tempo) documentation. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the distributed tracing platform (Jaeger) resources. You can install the default configuration or modify the file. 
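If you want to browse the available configuration fields before writing your own custom resource, you can query the CRD that the Operator installs. This is only a sketch: it assumes the Operator is already installed, and the amount of schema detail that oc explain returns depends on the Operator version:
$ oc get crd jaegers.jaegertracing.io
$ oc explain jaeger.spec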
If you have installed distributed tracing platform as part of Red Hat OpenShift Service Mesh, you can perform basic configuration as part of the ServiceMeshControlPlane , but for complete control, you must configure a Jaeger CR and then reference your distributed tracing configuration file in the ServiceMeshControlPlane . The Red Hat OpenShift distributed tracing platform (Jaeger) has predefined deployment strategies. You specify a deployment strategy in the custom resource file. When you create a distributed tracing platform (Jaeger) instance, the Operator uses this configuration file to create the objects necessary for the deployment. Jaeger custom resource file showing deployment strategy apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1 1 Deployment strategy. 4.2.1. Supported deployment strategies The Red Hat OpenShift distributed tracing platform (Jaeger) Operator currently supports the following deployment strategies: allInOne - This strategy is intended for development, testing, and demo purposes; it is not intended for production use. The main backend components, Agent, Collector, and Query service, are all packaged into a single executable which is configured, by default. to use in-memory storage. Note In-memory storage is not persistent, which means that if the distributed tracing platform (Jaeger) instance shuts down, restarts, or is replaced, that your trace data will be lost. And in-memory storage cannot be scaled, since each pod has its own memory. For persistent storage, you must use the production or streaming strategies, which use Elasticsearch as the default storage. production The production strategy is intended for production environments, where long term storage of trace data is important, as well as a more scalable and highly available architecture is required. Each of the backend components is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type - currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. streaming The streaming strategy is designed to augment the production strategy by providing a streaming capability that effectively sits between the Collector and the Elasticsearch backend storage. This provides the benefit of reducing the pressure on the backend storage, under high load situations, and enables other trace post-processing capabilities to tap into the real time span data directly from the streaming platform ( AMQ Streams / Kafka ). Note The streaming strategy requires an additional Red Hat subscription for AMQ Streams. The streaming deployment strategy is currently unsupported on IBM Z(R). 4.2.2. Deploying the distributed tracing platform default strategy from the web console The custom resource definition (CRD) defines the configuration used when you deploy an instance of Red Hat OpenShift distributed tracing platform. The default CR is named jaeger-all-in-one-inmemory and it is configured with minimal resources to ensure that you can successfully install it on a default OpenShift Container Platform installation. You can use this default configuration to create a Red Hat OpenShift distributed tracing platform (Jaeger) instance that uses the AllInOne deployment strategy, or you can define your own custom resource file. Note In-memory storage is not persistent. 
If the Jaeger pod shuts down, restarts, or is replaced, your trace data will be lost. For persistent storage, you must use the production or streaming strategies, which use Elasticsearch as the default storage. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing platform resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Go to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. On the Details tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . On the Create Jaeger page, to install using the defaults, click Create to create the distributed tracing platform (Jaeger) instance. On the Jaegers page, click the name of the distributed tracing platform (Jaeger) instance, for example, jaeger-all-in-one-inmemory . On the Jaeger Details page, click the Resources tab. Wait until the pod has a status of "Running" before continuing. 4.2.2.1. Deploying the distributed tracing platform default strategy from the CLI Follow this procedure to create an instance of distributed tracing platform (Jaeger) from the command line. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed and verified. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role by running the following command: USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system by running the following command: USD oc new-project tracing-system Create a custom resource file named jaeger.yaml that contains the following text: Example jaeger-all-in-one.yaml apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory Run the following command to deploy distributed tracing platform (Jaeger): USD oc create -n tracing-system -f jaeger.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, the output is similar to the following example: NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s 4.2.3. Deploying the distributed tracing platform production strategy from the web console The production deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important. Prerequisites The OpenShift Elasticsearch Operator has been installed. 
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing platform resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. On the Overview tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . On the Create Jaeger page, replace the default all-in-one YAML text with your production YAML configuration, for example: Example jaeger-production.yaml file with Elasticsearch apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *' Click Create to create the distributed tracing platform (Jaeger) instance. On the Jaegers page, click the name of the distributed tracing platform (Jaeger) instance, for example, jaeger-prod-elasticsearch . On the Jaeger Details page, click the Resources tab. Wait until all the pods have a status of "Running" before continuing. 4.2.3.1. Deploying the distributed tracing platform production strategy from the CLI Follow this procedure to create an instance of distributed tracing platform (Jaeger) from the command line. Prerequisites The OpenShift Elasticsearch Operator has been installed. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI ( oc ) as a user with the cluster-admin role by running the following command: USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system by running the following command: USD oc new-project tracing-system Create a custom resource file named jaeger-production.yaml that contains the text of the example file in the procedure. 
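If you prefer to stay in the terminal, you can write a trimmed-down version of that example with a heredoc. This sketch keeps only the Elasticsearch storage settings shown earlier and omits the optional ingress, index cleaner, and rollover options:
$ cat > jaeger-production.yaml <<'EOF'
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
EOF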
Run the following command to deploy distributed tracing platform (Jaeger): USD oc create -n tracing-system -f jaeger-production.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you will see output similar to the following example: NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s 4.2.4. Deploying the distributed tracing platform streaming strategy from the web console The streaming deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important. The streaming strategy provides a streaming capability that sits between the Collector and the Elasticsearch storage. This reduces the pressure on the storage under high load situations, and enables other trace post-processing capabilities to tap into the real-time span data directly from the Kafka streaming platform. Note The streaming strategy requires an additional Red Hat subscription for AMQ Streams. If you do not have an AMQ Streams subscription, contact your sales representative for more information. Note The streaming deployment strategy is currently unsupported on IBM Z(R). Prerequisites The AMQ Streams Operator has been installed. If using version 1.4.0 or higher you can use self-provisioning. Otherwise you must create the Kafka instance. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing platform resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. On the Overview tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . On the Create Jaeger page, replace the default all-in-one YAML text with your streaming YAML configuration, for example: Example jaeger-streaming.yaml file apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 1 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 1 If the brokers are not defined, AMQStreams 1.4.0+ self-provisions Kafka. Click Create to create the distributed tracing platform (Jaeger) instance. 
On the Jaegers page, click the name of the distributed tracing platform (Jaeger) instance, for example, jaeger-streaming . On the Jaeger Details page, click the Resources tab. Wait until all the pods have a status of "Running" before continuing. 4.2.4.1. Deploying the distributed tracing platform streaming strategy from the CLI Follow this procedure to create an instance of distributed tracing platform (Jaeger) from the command line. Prerequisites The AMQ Streams Operator has been installed. If using version 1.4.0 or higher you can use self-provisioning. Otherwise you must create the Kafka instance. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI ( oc ) as a user with the cluster-admin role by running the following command: USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system by running the following command: USD oc new-project tracing-system Create a custom resource file named jaeger-streaming.yaml that contains the text of the example file in the procedure. Run the following command to deploy Jaeger: USD oc create -n tracing-system -f jaeger-streaming.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you should see output similar to the following example: NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s 4.2.5. Validating your deployment 4.2.5.1. Accessing the Jaeger console To access the Jaeger console you must have either Red Hat OpenShift Service Mesh or Red Hat OpenShift distributed tracing platform installed, and Red Hat OpenShift distributed tracing platform (Jaeger) installed, configured, and deployed. The installation process creates a route to access the Jaeger console. If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions. Procedure from the web console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the control plane project, for example tracing-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the jaeger route. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role by running the following command. 
If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 To query for details of the route using the command line, enter the following command. In this example, tracing-system is the control plane namespace. USD export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}') Launch a browser and navigate to https://<JAEGER_URL> , where <JAEGER_URL> is the route that you discovered in the step. Log in using the same user name and password that you use to access the OpenShift Container Platform console. If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data. If you are validating the console installation, there is no trace data to display. 4.2.6. Customizing your deployment 4.2.6.1. Deployment best practices Red Hat OpenShift distributed tracing platform instance names must be unique. If you want to have multiple Red Hat OpenShift distributed tracing platform (Jaeger) instances and are using sidecar injected agents, then the Red Hat OpenShift distributed tracing platform (Jaeger) instances should have unique names, and the injection annotation should explicitly specify the Red Hat OpenShift distributed tracing platform (Jaeger) instance name the tracing data should be reported to. If you have a multitenant implementation and tenants are separated by namespaces, deploy a Red Hat OpenShift distributed tracing platform (Jaeger) instance to each tenant namespace. For information about configuring persistent storage, see Understanding persistent storage and the appropriate configuration topic for your chosen storage option. 4.2.6.2. Distributed tracing default configuration options The Jaeger custom resource (CR) defines the architecture and settings to be used when creating the distributed tracing platform (Jaeger) resources. You can modify these parameters to customize your distributed tracing platform (Jaeger) implementation to your business needs. Generic YAML example of the Jaeger CR apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {} Table 4.1. Jaeger parameters Parameter Description Values Default value apiVersion: API version to use when creating the object. jaegertracing.io/v1 jaegertracing.io/v1 kind: Defines the kind of Kubernetes object to create. jaeger metadata: Data that helps uniquely identify the object, including a name string, UID , and optional namespace . OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. name: Name for the object. The name of your distributed tracing platform (Jaeger) instance. jaeger-all-in-one-inmemory spec: Specification for the object to be created. Contains all of the configuration parameters for your distributed tracing platform (Jaeger) instance. When a common definition for all Jaeger components is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec/<component> node. 
N/A strategy: Jaeger deployment strategy allInOne , production , or streaming allInOne allInOne: Because the allInOne image deploys the Agent, Collector, Query, Ingester, and Jaeger UI in a single pod, configuration for this deployment must nest component configuration under the allInOne parameter. agent: Configuration options that define the Agent. collector: Configuration options that define the Jaeger Collector. sampling: Configuration options that define the sampling strategies for tracing. storage: Configuration options that define the storage. All storage-related options must be placed under storage , rather than under the allInOne or other component options. query: Configuration options that define the Query service. ingester: Configuration options that define the Ingester service. The following example YAML is the minimum required to create a Red Hat OpenShift distributed tracing platform (Jaeger) deployment using the default settings. Example minimum required dist-tracing-all-in-one.yaml apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory 4.2.6.3. Using taints and tolerations To schedule the Jaeger and Elasticsearch pods on dedicated nodes, see How to deploy the different Jaeger components on infra nodes using nodeSelector and tolerations in OpenShift 4 . 4.2.6.4. Jaeger Collector configuration options The Jaeger Collector is the component responsible for receiving the spans that were captured by the tracer and writing them to persistent Elasticsearch storage when using the production strategy, or to AMQ Streams when using the streaming strategy. The Collectors are stateless and thus many instances of Jaeger Collector can be run in parallel. Collectors require almost no configuration, except for the location of the Elasticsearch cluster. Table 4.2. Parameters used by the Operator to define the Jaeger Collector Parameter Description Values Specifies the number of Collector replicas to create. Integer, for example, 5 Table 4.3. Configuration parameters passed to the Collector Parameter Description Values Configuration options that define the Jaeger Collector. The number of workers pulling from the queue. Integer, for example, 50 The size of the Collector queue. Integer, for example, 2000 The topic parameter identifies the Kafka configuration used by the Collector to produce the messages, and the Ingester to consume the messages. Label for the producer. Identifies the Kafka configuration used by the Collector to produce the messages. If brokers are not specified, and you have AMQ Streams 1.4.0+ installed, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator will self-provision Kafka. Logging level for the Collector. Possible values: debug , info , warn , error , fatal , panic . To accept OTLP/gRPC, explicitly enable the otlp . All the other options are optional. To accept OTLP/HTTP, explicitly enable the otlp . All the other options are optional. 4.2.6.5. Distributed tracing sampling configuration options The Red Hat OpenShift distributed tracing platform (Jaeger) Operator can be used to define sampling strategies that will be supplied to tracers that have been configured to use a remote sampler. While all traces are generated, only a few are sampled. Sampling a trace marks the trace for further processing and storage. Note This is not relevant if a trace was started by the Envoy proxy, as the sampling decision is made there. The Jaeger sampling decision is only relevant when the trace is started by an application using the client. 
When a service receives a request that contains no trace context, the client starts a new trace, assigns it a random trace ID, and makes a sampling decision based on the currently installed sampling strategy. The sampling decision propagates to all subsequent requests in the trace so that other services are not making the sampling decision again. distributed tracing platform (Jaeger) libraries support the following samplers: Probabilistic - The sampler makes a random sampling decision with the probability of sampling equal to the value of the sampling.param property. For example, using sampling.param=0.1 samples approximately 1 in 10 traces. Rate Limiting - The sampler uses a leaky bucket rate limiter to ensure that traces are sampled with a certain constant rate. For example, using sampling.param=2.0 samples requests with the rate of 2 traces per second. Table 4.4. Jaeger sampling options Parameter Description Values Default value Configuration options that define the sampling strategies for tracing. If you do not provide configuration, the Collectors will return the default probabilistic sampling policy with 0.001 (0.1%) probability for all services. Sampling strategy to use. See descriptions above. Valid values are probabilistic , and ratelimiting . probabilistic Parameters for the selected sampling strategy. Decimal and integer values (0, .1, 1, 10) 1 This example defines a default sampling strategy that is probabilistic, with a 50% chance of the trace instances being sampled. Probabilistic sampling example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5 If there are no user-supplied configurations, the distributed tracing platform (Jaeger) uses the following settings: Default sampling spec: sampling: options: default_strategy: type: probabilistic param: 1 4.2.6.6. Distributed tracing storage configuration options You configure storage for the Collector, Ingester, and Query services under spec.storage . Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. Table 4.5. General storage parameters used by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator to define distributed tracing storage Parameter Description Values Default value Type of storage to use for the deployment. memory or elasticsearch . Memory storage is only appropriate for development, testing, demonstrations, and proof of concept environments as the data does not persist if the pod is shut down. For production environments distributed tracing platform (Jaeger) supports Elasticsearch for persistent storage. memory Name of the secret, for example tracing-secret . N/A Configuration options that define the storage. Table 4.6. Elasticsearch index cleaner parameters Parameter Description Values Default value When using Elasticsearch storage, by default a job is created to clean old traces from the index. This parameter enables or disables the index cleaner job. true / false true Number of days to wait before deleting an index. Integer value 7 Defines the schedule for how often to clean the Elasticsearch index. Cron expression "55 23 * * *" 4.2.6.6.1. 
Auto-provisioning an Elasticsearch instance When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses the OpenShift Elasticsearch Operator to create an Elasticsearch cluster based on the configuration provided in the storage section of the custom resource file. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator will provision Elasticsearch if the following configurations are set: spec.storage:type is set to elasticsearch spec.storage.elasticsearch.doNotProvision set to false spec.storage.options.es.server-urls is not defined, that is, there is no connection to an Elasticsearch instance that was not provisioned by the OpenShift Elasticsearch Operator. When provisioning Elasticsearch, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource. If you do not specify a value for spec.storage.elasticsearch.name , the Operator uses elasticsearch . Restrictions You can have only one distributed tracing platform (Jaeger) with self-provisioned Elasticsearch instance per namespace. The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform (Jaeger) instance. There can be only one Elasticsearch per namespace. Note If you already have installed Elasticsearch as part of OpenShift Logging, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator can use the installed OpenShift Elasticsearch Operator to provision storage. The following configuration parameters are for a self-provisioned Elasticsearch instance, that is an instance created by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator using the OpenShift Elasticsearch Operator. You specify configuration options for self-provisioned Elasticsearch under spec:storage:elasticsearch in your configuration file. Table 4.7. Elasticsearch resource configuration parameters Parameter Description Values Default value Use to specify whether or not an Elasticsearch instance should be provisioned by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. true / false true Name of the Elasticsearch instance. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses the Elasticsearch instance specified in this parameter to connect to Elasticsearch. string elasticsearch Number of Elasticsearch nodes. For high availability use at least 3 nodes. Do not use 2 nodes as "split brain" problem can happen. Integer value. For example, Proof of concept = 1, Minimum deployment =3 3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 1 Available memory for requests, based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* 16Gi Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* Data replication policy defines how Elasticsearch shards are replicated across data nodes in the cluster. 
If not specified, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator automatically determines the most appropriate replication based on number of nodes. ZeroRedundancy (no replica shards), SingleRedundancy (one replica shard), MultipleRedundancy (each index is spread over half of the Data nodes), FullRedundancy (each index is fully replicated on every Data node in the cluster). Use to specify whether or not distributed tracing platform (Jaeger) should use the certificate management feature of the OpenShift Elasticsearch Operator. This feature was added to {logging-title} 5.2 in OpenShift Container Platform 4.7 and is the preferred setting for new Jaeger deployments. true / false true Each Elasticsearch node can operate with a lower memory setting though this is NOT recommended for production deployments. For production use, you must have no less than 16 Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64 Gi per pod. Production storage example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi Storage example with persistent storage apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy 1 Persistent storage configuration. In this case AWS gp2 with 5Gi size. When no value is specified, distributed tracing platform (Jaeger) uses emptyDir . The OpenShift Elasticsearch Operator provisions PersistentVolumeClaim and PersistentVolume which are not removed with distributed tracing platform (Jaeger) instance. You can mount the same volumes if you create a distributed tracing platform (Jaeger) instance with the same name and namespace. 4.2.6.6.2. Connecting to an existing Elasticsearch instance You can use an existing Elasticsearch cluster for storage with distributed tracing platform. An existing Elasticsearch cluster, also known as an external Elasticsearch instance, is an instance that was not installed by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator or by the OpenShift Elasticsearch Operator. When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator will not provision Elasticsearch if the following configurations are set: spec.storage.elasticsearch.doNotProvision set to true spec.storage.options.es.server-urls has a value spec.storage.elasticsearch.name has a value, or if the Elasticsearch instance name is elasticsearch . The Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses the Elasticsearch instance specified in spec.storage.elasticsearch.name to connect to Elasticsearch. Restrictions You cannot share or reuse a OpenShift Container Platform logging Elasticsearch instance with distributed tracing platform (Jaeger). The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform (Jaeger) instance. The following configuration parameters are for an already existing Elasticsearch instance, also known as an external Elasticsearch instance. In this case, you specify configuration options for Elasticsearch under spec:storage:options:es in your custom resource file. Table 4.8. 
General ES configuration parameters Parameter Description Values Default value URL of the Elasticsearch instance. The fully-qualified domain name of the Elasticsearch server. http://elasticsearch.<namespace>.svc:9200 The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. If you set both es.max-doc-count and es.max-num-spans , Elasticsearch will use the smaller value of the two. 10000 [ Deprecated - Will be removed in a future release, use es.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. If you set both es.max-num-spans and es.max-doc-count , Elasticsearch will use the smaller value of the two. 10000 The maximum lookback for spans in Elasticsearch. 72h0m0s The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default true / false false Timeout used for queries. When set to zero there is no timeout. 0s The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es.password . The password required by Elasticsearch. See also, es.username . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Table 4.9. ES data replication parameters Parameter Description Values Default value The number of replicas per index in Elasticsearch. 1 The number of shards per index in Elasticsearch. 5 Table 4.10. ES index configuration parameters Parameter Description Values Default value Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false true Optional prefix for distributed tracing platform (Jaeger) indices. For example, setting this to "production" creates indices named "production-tracing-*". Table 4.11. ES bulk processor configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 1000 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 200ms The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 5000000 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 1 Table 4.12. ES TLS configuration parameters Parameter Description Values Default value Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified. Table 4.13. ES archive configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 
Table 4.13. ES archive configuration parameters

es-archive.bulk.actions (default: 0): The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk.
es-archive.bulk.flush-interval (default: 0s): A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero.
es-archive.bulk.size (default: 0): The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk.
es-archive.bulk.workers (default: 0): The number of workers that are able to receive and commit bulk requests to Elasticsearch.
es-archive.create-index-templates (values: true / false; default: false): Automatically create index templates at application startup when set to true. When templates are installed manually, set to false.
es-archive.enabled (values: true / false; default: false): Enable extra storage.
es-archive.index-prefix: Optional prefix for distributed tracing platform (Jaeger) indices. For example, setting this to "production" creates indices named "production-tracing-*".
es-archive.max-doc-count (default: 0): The maximum document count to return from an Elasticsearch query. This will also apply to aggregations.
es-archive.max-num-spans (default: 0): [Deprecated - Will be removed in a future release, use es-archive.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch.
es-archive.max-span-age (default: 0s): The maximum lookback for spans in Elasticsearch.
es-archive.num-replicas (default: 0): The number of replicas per index in Elasticsearch.
es-archive.num-shards (default: 0): The number of shards per index in Elasticsearch.
es-archive.password: The password required by Elasticsearch. See also es.username.
es-archive.server-urls: The comma-separated list of Elasticsearch servers. Must be specified as fully qualified URLs, for example, http://localhost:9200.
es-archive.sniffer (values: true / false; default: false): The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default.
es-archive.sniffer-tls-enabled (values: true / false; default: false): Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default.
es-archive.timeout (default: 0s): Timeout used for queries. When set to zero there is no timeout.
es-archive.tls.ca: Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default.
es-archive.tls.cert: Path to a TLS Certificate file, used to identify this process to the remote servers.
es-archive.tls.enabled (values: true / false; default: false): Enable transport layer security (TLS) when talking to the remote servers. Disabled by default.
es-archive.tls.key: Path to a TLS Private Key file, used to identify this process to the remote servers.
es-archive.tls.server-name: Override the expected TLS server name in the certificate of the remote servers.
es-archive.token-file: Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified.
es-archive.username: The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es-archive.password.
es-archive.version (default: 0): The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch.
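As a sketch only, archive storage can be enabled alongside the primary storage by adding an es-archive block under spec.storage.options. The instance name, URLs, and prefix below are placeholders, not values from the product documentation:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-es-archive               # hypothetical instance name
spec:
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch.tracing-system.svc:9200   # placeholder URL
      es-archive:
        enabled: true                                                # turn on the extra archive storage
        server-urls: http://elasticsearch.tracing-system.svc:9200   # placeholder URL
        index-prefix: archive                                        # optional prefix for the archive indices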
Storage example with volume mounts

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: https://quickstart-es-http.default.svc:9200
        index-prefix: my-prefix
        tls:
          ca: /es/certificates/ca.crt
    secretName: tracing-secret
  volumeMounts:
    - name: certificates
      mountPath: /es/certificates/
      readOnly: true
  volumes:
    - name: certificates
      secret:
        secretName: quickstart-es-http-certs-public

The following example shows a Jaeger CR using an external Elasticsearch cluster with a TLS CA certificate mounted from a volume and the user and password stored in a secret.

External Elasticsearch example

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: https://quickstart-es-http.default.svc:9200 1
        index-prefix: my-prefix
        tls: 2
          ca: /es/certificates/ca.crt
    secretName: tracing-secret 3
  volumeMounts: 4
    - name: certificates
      mountPath: /es/certificates/
      readOnly: true
  volumes:
    - name: certificates
      secret:
        secretName: quickstart-es-http-certs-public

1 URL to the Elasticsearch service running in the default namespace.
2 TLS configuration. In this case only the CA certificate, but it can also contain es.tls.key and es.tls.cert when using mutual TLS.
3 Secret which defines the environment variables ES_PASSWORD and ES_USERNAME. Created by kubectl create secret generic tracing-secret --from-literal=ES_PASSWORD=changeme --from-literal=ES_USERNAME=elastic
4 Volume mounts and volumes which are mounted into all storage components.

4.2.6.7. Managing certificates with Elasticsearch

You can create and manage certificates using the OpenShift Elasticsearch Operator. Managing certificates using the OpenShift Elasticsearch Operator also lets you use a single Elasticsearch cluster with multiple Jaeger Collectors.

Important

Managing certificates with Elasticsearch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Starting with version 2.4, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator delegates certificate creation to the OpenShift Elasticsearch Operator by using the following annotations in the Elasticsearch custom resource:

logging.openshift.io/elasticsearch-cert-management: "true"
logging.openshift.io/elasticsearch-cert.jaeger-<shared-es-node-name>: "user.jaeger"
logging.openshift.io/elasticsearch-cert.curator-<shared-es-node-name>: "system.logging.curator"

Where <shared-es-node-name> is the name of the Elasticsearch node. For example, if you create an Elasticsearch node named custom-es, your custom resource might look like the following example.

Example Elasticsearch CR showing annotations

apiVersion: logging.openshift.io/v1
kind: Elasticsearch
metadata:
  annotations:
    logging.openshift.io/elasticsearch-cert-management: "true"
    logging.openshift.io/elasticsearch-cert.jaeger-custom-es: "user.jaeger"
    logging.openshift.io/elasticsearch-cert.curator-custom-es: "system.logging.curator"
  name: custom-es
spec:
  managementState: Managed
  nodeSpec:
    resources:
      limits:
        memory: 16Gi
      requests:
        cpu: 1
        memory: 16Gi
  nodes:
    - nodeCount: 3
      proxyResources: {}
      resources: {}
      roles:
        - master
        - client
        - data
      storage: {}
  redundancyPolicy: ZeroRedundancy

Prerequisites

The Red Hat OpenShift Service Mesh Operator is installed.
The {logging-title} is installed with default configuration in your cluster.
The Elasticsearch node and the Jaeger instances must be deployed in the same namespace. For example, tracing-system.
You enable certificate management by setting spec.storage.elasticsearch.useCertManagement to true in the Jaeger custom resource.
Example showing useCertManagement

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      name: custom-es
      doNotProvision: true
      useCertManagement: true

The Red Hat OpenShift distributed tracing platform (Jaeger) Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource when provisioning Elasticsearch. The certificates are provisioned by the OpenShift Elasticsearch Operator, and the Red Hat OpenShift distributed tracing platform (Jaeger) Operator injects the certificates.

4.2.6.8. Query configuration options

Query is a service that retrieves traces from storage and hosts the user interface to display them.

Table 4.14. Parameters used by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator to define Query

spec.query.replicas (values: integer, for example, 2): Specifies the number of Query replicas to create.

Table 4.15. Configuration parameters passed to Query

spec.query.options: Configuration options that define the Query service.
options.log-level (possible values: debug, info, warn, error, fatal, panic): Logging level for Query.
options.query.base-path (values: /<path>): The base path for all jaeger-query HTTP routes can be set to a non-root value, for example, /jaeger would cause all UI URLs to start with /jaeger. This can be useful when running jaeger-query behind a reverse proxy.

Sample Query configuration

apiVersion: jaegertracing.io/v1
kind: "Jaeger"
metadata:
  name: "my-jaeger"
spec:
  strategy: allInOne
  allInOne:
    options:
      log-level: debug
      query:
        base-path: /jaeger

4.2.6.9. Ingester configuration options

Ingester is a service that reads from a Kafka topic and writes to the Elasticsearch storage backend. If you are using the allInOne or production deployment strategies, you do not need to configure the Ingester service.

Table 4.16. Jaeger parameters passed to the Ingester

spec.ingester.options: Configuration options that define the Ingester service.
options.deadlockInterval (values: minutes and seconds, for example, 1m0s; default: 0): Specifies the interval, in seconds or minutes, that the Ingester must wait for a message before terminating. The deadlock interval is disabled by default (set to 0), to avoid terminating the Ingester when no messages arrive during system initialization.
options.kafka.consumer.topic (values: label for the consumer, for example, jaeger-spans): The topic parameter identifies the Kafka configuration used by the collector to produce the messages, and the Ingester to consume the messages.
options.kafka.consumer.brokers (values: label for the broker, for example, my-cluster-kafka-brokers.kafka:9092): Identifies the Kafka configuration used by the Ingester to consume the messages.
options.log-level (possible values: debug, info, warn, error, fatal, dpanic, panic): Logging level for the Ingester.

Streaming Collector and Ingester example

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-streaming
spec:
  strategy: streaming
  collector:
    options:
      kafka:
        producer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092
  ingester:
    options:
      kafka:
        consumer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092
      ingester:
        deadlockInterval: 5
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200

4.2.7. Injecting sidecars

The Red Hat OpenShift distributed tracing platform (Jaeger) relies on a proxy sidecar within the application's pod to provide the Agent.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator can inject Agent sidecars into deployment workloads. You can enable automatic sidecar injection or manage it manually.

4.2.7.1. Automatically injecting sidecars

The Red Hat OpenShift distributed tracing platform (Jaeger) Operator can inject Jaeger Agent sidecars into deployment workloads. To enable automatic injection of sidecars, add the sidecar.jaegertracing.io/inject annotation set to either the string true or to the distributed tracing platform (Jaeger) instance name that is returned by running oc get jaegers. When you specify true, there must be only a single distributed tracing platform (Jaeger) instance in the same namespace as the deployment. Otherwise, the Operator is unable to determine which distributed tracing platform (Jaeger) instance to use. A specific distributed tracing platform (Jaeger) instance name on a deployment has a higher precedence than true applied on its namespace.

The following snippet shows a simple application that will inject a sidecar, with the agent pointing to the single distributed tracing platform (Jaeger) instance available in the same namespace:

Automatic sidecar injection example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    "sidecar.jaegertracing.io/inject": "true" 1
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: acme/myapp:myversion

1 Set to either the string true or to the Jaeger instance name.

When the sidecar is injected, the agent can then be accessed at its default location on localhost.

4.2.7.2. Manually injecting sidecars

The Red Hat OpenShift distributed tracing platform (Jaeger) Operator can only automatically inject Jaeger Agent sidecars into Deployment workloads. For controller types other than Deployments, such as StatefulSets and DaemonSets, you can manually define the Jaeger Agent sidecar in your specification.

The following snippet shows the manual definition you can include in your containers section for a Jaeger Agent sidecar:

Sidecar definition example for a StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-statefulset
  namespace: example-ns
  labels:
    app: example-app
spec:
  spec:
    containers:
      - name: example-app
        image: acme/myapp:myversion
        ports:
          - containerPort: 8080
            protocol: TCP
      - name: jaeger-agent
        image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version>
        # The agent version must match the Operator version
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 5775
            name: zk-compact-trft
            protocol: UDP
          - containerPort: 5778
            name: config-rest
            protocol: TCP
          - containerPort: 6831
            name: jg-compact-trft
            protocol: UDP
          - containerPort: 6832
            name: jg-binary-trft
            protocol: UDP
          - containerPort: 14271
            name: admin-http
            protocol: TCP
        args:
          - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250
          - --reporter.type=grpc

The agent can then be accessed at its default location on localhost.
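As a quick optional check, not part of the documented procedure, you can confirm that the sidecar was added by listing the container names in the deployment's pod template. myapp is the example deployment used above:

$ oc get deployment myapp -o jsonpath='{.spec.template.spec.containers[*].name}'

After injection, the output should include a jaeger-agent container in addition to the application container.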
4.3. Upgrading

Warning

The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift.

You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see "Migrating" in the Red Hat build of OpenTelemetry documentation, "Installing" in the Red Hat build of OpenTelemetry documentation, and "Installing" in the distributed tracing platform (Tempo) documentation.

Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs by default in OpenShift Container Platform. OLM queries for available Operators as well as upgrades for installed Operators.

During an update, the Red Hat OpenShift distributed tracing platform Operators upgrade the managed distributed tracing platform instances to the version associated with the Operator. Whenever a new version of the Red Hat OpenShift distributed tracing platform (Jaeger) Operator is installed, all the distributed tracing platform (Jaeger) application instances managed by the Operator are upgraded to the Operator's version. For example, after upgrading the Operator from 1.10 to 1.11, the Operator scans for running distributed tracing platform (Jaeger) instances and upgrades them to 1.11 as well.

Important

If you have not already updated your OpenShift Elasticsearch Operator as described in Updating OpenShift Logging, complete that update before updating your Red Hat OpenShift distributed tracing platform (Jaeger) Operator.

4.3.1. Additional resources

Operator Lifecycle Manager concepts and resources
Updating installed Operators
Updating OpenShift Logging
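To see which Operator version OLM currently has installed before or after an update, you can list the ClusterServiceVersions in the Operator's namespace. This is a general OLM check rather than a documented step, and openshift-operators is assumed to be the installation namespace:

$ oc get csv -n openshift-operators

The entry for the Red Hat OpenShift distributed tracing platform (Jaeger) Operator shows the installed version.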
4.4. Removing

Warning

The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift.

You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see "Migrating" in the Red Hat build of OpenTelemetry documentation, "Installing" in the Red Hat build of OpenTelemetry documentation, and "Installing" in the distributed tracing platform (Tempo) documentation.

The steps for removing Red Hat OpenShift distributed tracing platform from an OpenShift Container Platform cluster are as follows:

Shut down any Red Hat OpenShift distributed tracing platform pods.
Remove any Red Hat OpenShift distributed tracing platform instances.
Remove the Red Hat OpenShift distributed tracing platform (Jaeger) Operator.
Remove the Red Hat build of OpenTelemetry Operator.

4.4.1. Removing a distributed tracing platform (Jaeger) instance by using the web console

You can remove a distributed tracing platform (Jaeger) instance in the Administrator view of the web console.

Warning

When deleting an instance that uses in-memory storage, all data is irretrievably lost. Data stored in persistent storage such as Elasticsearch is not deleted when a Red Hat OpenShift distributed tracing platform (Jaeger) instance is removed.

Prerequisites

You are logged in to the web console as a cluster administrator with the cluster-admin role.

Procedure

Log in to the OpenShift Container Platform web console.
Navigate to Operators -> Installed Operators.
Select the name of the project where the Operators are installed from the Project menu, for example, openshift-operators.
Click the Red Hat OpenShift distributed tracing platform (Jaeger) Operator.
Click the Jaeger tab.
Click the Options menu next to the instance you want to delete and select Delete Jaeger.
In the confirmation message, click Delete.

4.4.2. Removing a distributed tracing platform (Jaeger) instance by using the CLI

You can remove a distributed tracing platform (Jaeger) instance on the command line.

Prerequisites

An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

Tip

Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version. Run oc login:

$ oc login --username=<your_username>

Procedure

Log in with the OpenShift CLI (oc) by running the following command:

$ oc login --username=<NAMEOFUSER>

To display the distributed tracing platform (Jaeger) instances, run the following command:

$ oc get deployments -n <jaeger-project>

For example:

$ oc get deployments -n openshift-operators

The names of Operators have the suffix -operator. The following example shows two Red Hat OpenShift distributed tracing platform (Jaeger) Operators and four distributed tracing platform (Jaeger) instances:

$ oc get deployments -n openshift-operators

You will see output similar to the following:

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
elasticsearch-operator   1/1     1            1           93m
jaeger-operator          1/1     1            1           49m
jaeger-test              1/1     1            1           7m23s
jaeger-test2             1/1     1            1           6m48s
tracing1                 1/1     1            1           7m8s
tracing2                 1/1     1            1           35m

To remove an instance of distributed tracing platform (Jaeger), run the following command:

$ oc delete jaeger <deployment-name> -n <jaeger-project>

For example:

$ oc delete jaeger tracing2 -n openshift-operators

To verify the deletion, run the oc get deployments command again:

$ oc get deployments -n <jaeger-project>

For example:

$ oc get deployments -n openshift-operators

You will see generated output that is similar to the following example:

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
elasticsearch-operator   1/1     1            1           94m
jaeger-operator          1/1     1            1           50m
jaeger-test              1/1     1            1           8m14s
jaeger-test2             1/1     1            1           7m39s
tracing1                 1/1     1            1           7m59s

4.4.3. Removing the Red Hat OpenShift distributed tracing platform Operators

Procedure

Follow the instructions in Deleting Operators from a cluster to remove the Red Hat OpenShift distributed tracing platform (Jaeger) Operator.
Optional: After the Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been removed, remove the OpenShift Elasticsearch Operator.
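Optionally, to confirm that no distributed tracing platform (Jaeger) resources remain anywhere in the cluster, you can run the following check. This is a suggested verification, not part of the documented steps, and it assumes the Jaeger custom resource definition is still present:

$ oc get jaegers --all-namespaces

If all instances were removed, the command reports that no resources were found.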
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"oc create -n tracing-system -f jaeger.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *'",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-production.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 1 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-streaming.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: \"sidecar.jaegertracing.io/inject\": \"true\" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion",
"apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc",
"oc login --username=<your_username>",
"oc login --username=<NAMEOFUSER>",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m",
"oc delete jaeger <deployment-name> -n <jaeger-project>",
"oc delete jaeger tracing2 -n openshift-operators",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/distributed_tracing/distributed-tracing-platform-jaeger |