7.3. X Server Configuration Files
7.3. X Server Configuration Files The X server is a single binary executable ( /usr/X11R6/bin/Xorg ) that dynamically loads any necessary X server modules at runtime from the /usr/X11R6/lib/modules/ directory. Some of these modules are automatically loaded by the server, while others are optional and must be specified in the X server configuration file. The X server and associated configuration files are stored in the /etc/X11/ directory. The configuration file for the X server is /etc/X11/xorg.conf . When Red Hat Enterprise Linux is installed, the configuration files for X are created using information gathered about the system hardware during the installation process. 7.3.1. xorg.conf While there is rarely a need to manually edit the /etc/X11/xorg.conf file, it is useful to understand the various sections and optional parameters available, especially when troubleshooting. 7.3.1.1. The Structure The /etc/X11/xorg.conf file is comprised of many different sections which address specific aspects of the system hardware. Each section begins with a Section " <section-name> " line (where <section-name> is the title for the section) and ends with an EndSection line. Within each of the sections are lines containing option names and at least one option value, sometimes surrounded with double quotes ( " ). Lines beginning with a hash mark ( # ) are not read by the X server and are used for human-readable comments. Some options within the /etc/X11/xorg.conf file accept a boolean switch which turns the feature on or off. Acceptable boolean values are: 1 , on , true , or yes - Turns the option on. 0 , off , false , or no - Turns the option off. The following are some of the more important sections in the order in which they appear in a typical /etc/X11/xorg.conf file. More detailed information about the X server configuration file can be found in the xorg.conf man page. 7.3.1.2. ServerFlags The optional ServerFlags section contains miscellaneous global X server settings. Any settings in this section may be overridden by options placed in the ServerLayout section (refer to Section 7.3.1.3, " ServerLayout " for details). Each entry within the ServerFlags section is on its own line and begins with the term Option followed by an option enclosed in double quotation marks ( " ). The following is a sample ServerFlags section: The following lists some of the most useful options: "DontZap" " <boolean> " - When the value of <boolean> is set to true, this setting prevents the use of the Ctrl + Alt + Backspace key combination to immediately terminate the X server. "DontZoom" " <boolean> " - When the value of <boolean> is set to true, this setting prevents cycling through configured video resolutions using the Ctrl + Alt + Keypad-Plus and Ctrl + Alt + Keypad-Minus key combinations. 7.3.1.3. ServerLayout The ServerLayout section binds together the input and output devices controlled by the X server. At a minimum, this section must specify one output device and at least two input devices (a keyboard and a mouse). The following example illustrates a typical ServerLayout section: The following entries are commonly used in the ServerLayout section: Identifier - Specifies a unique name for this ServerLayout section. Screen - Specifies the name of a Screen section to be used with the X server. More than one Screen option may be present. 
The following is an example of a typical Screen entry: The first number in this example Screen entry ( 0 ) indicates that the first monitor connector or head on the video card uses the configuration specified in the Screen section with the identifier "Screen0" . If the video card has more than one head, another Screen entry would be necessary with a different number and a different Screen section identifier. The numbers to the right of "Screen0" give the X and Y absolute coordinates for the upper-left corner of the screen ( 0 0 by default). InputDevice - Specifies the name of an InputDevice section to be used with the X server. There must be at least two InputDevice entries: one for the default mouse and one for the default keyboard. The options CorePointer and CoreKeyboard indicate that these are the primary mouse and keyboard. Option " <option-name> " - An optional entry which specifies extra parameters for the section. Any options listed here override those listed in the ServerFlags section. Replace <option-name> with a valid option listed for this section in the xorg.conf man page. It is possible to create more than one ServerLayout section. However, the server only reads the first one to appear unless an alternate ServerLayout section is specified as a command line argument. 7.3.1.4. Files The Files section sets paths for services vital to the X server, such as the font path. The following example illustrates a typical Files section: The following entries are commonly used in the Files section: RgbPath - Specifies the location of the RGB color database. This database defines all valid color names in X and ties them to specific RGB values. FontPath - Specifies where the X server must connect to obtain fonts from the xfs font server. By default, the FontPath is unix/:7100 . This tells the X server to obtain font information using UNIX-domain sockets for inter-process communication (IPC) on port 7100. Refer to Section 7.4, "Fonts" for more information concerning X and fonts. ModulePath - An optional parameter which specifies alternate directories which store X server modules. 7.3.1.5. Module The Module section specifies which modules from the /usr/X11R6/lib/modules/ directory the X server is to load. Modules add additional functionality to the X server. The following example illustrates a typical Module section: 7.3.1.6. InputDevice Each InputDevice section configures one input device for the X server. Systems typically have at least two InputDevice sections, keyboard and mouse. The following example illustrates a typical InputDevice section for a mouse: The following entries are commonly used in the InputDevice section: Identifier - Specifies a unique name for this InputDevice section. This is a required entry. Driver - Specifies the name of the device driver X must load for the device. Option - Specifies necessary options pertaining to the device. For a mouse, these options typically include: Protocol - Specifies the protocol used by the mouse, such as IMPS/2 . Device - Specifies the location of the physical device. Emulate3Buttons - Specifies whether to allow a two button mouse to act like a three button mouse when both mouse buttons are pressed simultaneously. Consult the xorg.conf man page for a list of valid options for this section. By default, the InputDevice section has comments to allow users to configure additional options. 7.3.1.7. Monitor Each Monitor section configures one type of monitor used by the system. 
While one Monitor section is the minimum, additional instances may occur for each monitor type in use with the machine. The best way to configure a monitor is to configure X during the installation process or by using the X Configuration Tool . For more about using the X Configuration Tool , refer to the chapter titled X Window System Configuration in the System Administrators Guide . The following example illustrates a typical Monitor section: Warning Be careful when manually editing values in the Monitor section of /etc/X11/xorg.conf . Inappropriate values can damage or destroy a monitor. Consult the monitor's documentation for a listing of safe operating parameters. The following entries are commonly used in the Monitor section: Identifier - Specifies a unique name for this Monitor section. This is a required entry. VendorName - An optional parameter which specifies the vendor of the monitor. ModelName - An optional parameter which specifies the monitor's model name. DisplaySize - An optional parameter which specifies, in millimeters, the physical size of the monitor's picture area. HorizSync - Specifies the range of horizontal sync frequencies compatible with the monitor, in kHz. These values help the X server determine the validity of built-in or specified Modeline entries for the monitor. VertRefresh - Specifies the range of vertical refresh frequencies supported by the monitor, in Hz. These values help the X server determine the validity of built-in or specified Modeline entries for the monitor. Modeline - An optional parameter which specifies additional video modes for the monitor at particular resolutions, with certain horizontal sync and vertical refresh frequencies. Refer to the xorg.conf man page for a more detailed explanation of Modeline entries. Option " <option-name> " - An optional entry which specifies extra parameters for the section. Replace <option-name> with a valid option listed for this section in the xorg.conf man page. 7.3.1.8. Device Each Device section configures one video card on the system. While one Device section is the minimum, additional instances may occur for each video card installed on the machine. The best way to configure a video card is to configure X during the installation process or by using the X Configuration Tool . For more about using the X Configuration Tool , refer to the chapter titled X Window System Configuration in the System Administrators Guide . The following example illustrates a typical Device section for a video card: The following entries are commonly used in the Device section: Identifier - Specifies a unique name for this Device section. This is a required entry. Driver - Specifies which driver the X server must load to utilize the video card. A list of drivers can be found in /usr/X11R6/lib/X11/Cards , which is installed with the hwdata package. VendorName - An optional parameter which specifies the vendor of the video card. BoardName - An optional parameter which specifies the name of the video card. VideoRam - An optional parameter which specifies the amount of RAM available on the video card, in kilobytes. This setting is only necessary for video cards whose video RAM the X server cannot detect by probing. BusID - An optional entry which specifies the bus location of the video card. This option is only mandatory for systems with multiple video cards. Screen - An optional entry which specifies which monitor connector or head on the video card the Device section configures. 
This option is only useful for video cards with multiple heads. If multiple monitors are connected to different heads on the same video card, separate Device sections must exist and each of these sections must have a different Screen value. The value for the Screen entry must be an integer. The first head on the video card has a value of 0 , and each additional head increments this value by one. Option " <option-name> " - An optional entry which specifies extra parameters for the section. Replace <option-name> with a valid option listed for this section in the xorg.conf man page. One of the more common options is "dpms" , which activates the Energy Star compliance setting for the monitor. 7.3.1.9. Screen Each Screen section binds one video card (or video card head) to one monitor by referencing the Device section and the Monitor section for each. While one Screen section is the minimum, additional instances may occur for each video card and monitor combination present on the machine. The following example illustrates a typical Screen section: The following entries are commonly used in the Screen section: Identifier - Specifies a unique name for this Screen section. This is a required entry. Device - Specifies the unique name of a Device section. This is a required entry. Monitor - Specifies the unique name of a Monitor section. This is a required entry. DefaultDepth - Specifies the default color depth in bits. In the example, 16 , which provides thousands of colors, is the default. Multiple DefaultDepth entries are permitted, but at least one is required. SubSection "Display" - Specifies the screen modes available at a particular color depth. A Screen section may have multiple Display subsections, but at least one is required for the color depth specified in the DefaultDepth entry. Option " <option-name> " - An optional entry which specifies extra parameters for the section. Replace <option-name> with a valid option listed for this section in the xorg.conf man page. 7.3.1.10. DRI The optional DRI section specifies parameters for the Direct Rendering Infrastructure ( DRI ). DRI is an interface which allows 3D software applications to take advantage of the 3D hardware acceleration capabilities built into most modern video hardware. In addition, DRI can improve 2D performance via hardware acceleration, if supported by the video card driver. This section is ignored unless DRI is enabled in the Module section. The following example illustrates a typical DRI section: Since different video cards use DRI in different ways, do not alter the values for this section without first referring to http://dri.sourceforge.net/ .
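As an illustration of the Screen entry and multi-head numbering described above, the following is a hypothetical dual-head sketch for a single video card with two heads. The driver name, BusID, and identifiers are placeholder values rather than settings probed from real hardware, and the matching Monitor and Screen sections are assumed to exist as in the earlier examples.

# Two Device sections describe the same physical card (same BusID),
# distinguished by the head they drive via the Screen entry.
Section "Device"
    Identifier "Videocard0"
    Driver     "mga"
    BusID      "PCI:1:0:0"
    Screen     0
EndSection

Section "Device"
    Identifier "Videocard1"
    Driver     "mga"
    BusID      "PCI:1:0:0"
    Screen     1
EndSection

# The layout references one Screen section per head; the second screen is
# placed at absolute X coordinate 1280, immediately to the right of the first.
Section "ServerLayout"
    Identifier  "Dual Head Layout"
    Screen      0 "Screen0" 0 0
    Screen      1 "Screen1" 1280 0
    InputDevice "Mouse0"    "CorePointer"
    InputDevice "Keyboard0" "CoreKeyboard"
EndSection

If more than one ServerLayout section is present, a specific layout can be selected when the server is started, for example with Xorg -layout "Dual Head Layout" ; refer to the Xorg man page for details.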
[ "Section \"ServerFlags\" Option \"DontZap\" \"true\" EndSection", "Section \"ServerLayout\" Identifier \"Default Layout\" Screen 0 \"Screen0\" 0 0 InputDevice \"Mouse0\" \"CorePointer\" InputDevice \"Keyboard0\" \"CoreKeyboard\" EndSection", "Screen 0 \"Screen0\" 0 0", "Section \"Files\" RgbPath \"/usr/X11R6/lib/X11/rgb\" FontPath \"unix/:7100\" EndSection", "Section \"Module\" Load \"dbe\" Load \"extmod\" Load \"fbdevhw\" Load \"glx\" Load \"record\" Load \"freetype\" Load \"type1\" Load \"dri\" EndSection", "Section \"InputDevice\" Identifier \"Mouse0\" Driver \"mouse\" Option \"Protocol\" \"IMPS/2\" Option \"Device\" \"/dev/input/mice\" Option \"Emulate3Buttons\" \"no\" EndSection", "Section \"Monitor\" Identifier \"Monitor0\" VendorName \"Monitor Vendor\" ModelName \"DDC Probed Monitor - ViewSonic G773-2\" DisplaySize 320 240 HorizSync 30.0 - 70.0 VertRefresh 50.0 - 180.0 EndSection", "Section \"Device\" Identifier \"Videocard0\" Driver \"mga\" VendorName \"Videocard vendor\" BoardName \"Matrox Millennium G200\" VideoRam 8192 Option \"dpms\" EndSection", "Section \"Screen\" Identifier \"Screen0\" Device \"Videocard0\" Monitor \"Monitor0\" DefaultDepth 16 SubSection \"Display\" Depth 24 Modes \"1280x1024\" \"1280x960\" \"1152x864\" \"1024x768\" \"800x600\" \"640x480\" EndSubSection SubSection \"Display\" Depth 16 Modes \"1152x864\" \"1024x768\" \"800x600\" \"640x480\" EndSubSection EndSection", "Section \"DRI\" Group 0 Mode 0666 EndSection" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-x-server-configuration
Chapter 16. General Updates
Chapter 16. General Updates The default value of first_valid_uid in Dovecot has changed in Red Hat Enterprise Linux 7 Since Red Hat Enterprise Linux 7.3, the default value of the first_valid_uid configuration option of Dovecot has changed from 500 in Red Hat Enterprise Linux 6 to 1000 in Red Hat Enterprise Linux 7. Consequently, if a Red Hat Enterprise Linux 6 installation does not have first_valid_uid explicitly defined, the Dovecot configuration will not allow users with a UID lower than 1000 to log in after the update to Red Hat Enterprise Linux 7. To avoid breaking the configuration, redefine first_valid_uid to 500 in the /etc/dovecot/conf.d/10-mail.conf file after the upgrade. Note that only installations where first_valid_uid is not explicitly defined are affected by this problem. (BZ# 1388967 ) Incorrect information about the expected default settings of services in Red Hat Enterprise Linux 7 The Preupgrade Assistant module that handles initscripts provides incorrect information about the expected default settings of the services in Red Hat Enterprise Linux 7, which it derives from the /usr/lib/systemd/system-preset/90-default.preset file in Red Hat Enterprise Linux 7 and from the current settings of the Red Hat Enterprise Linux 6 system. In addition, the module does not check the default settings of the system but only the settings for the runlevel used during the processing of the check script, which might not be the default runlevel of the system. As a consequence, initscripts are not handled in the anticipated way and the new system needs more manual action than expected. However, the user is informed about the settings that will be chosen for the relevant services, regardless of the presumed default settings. (BZ#1366671) Manually created configuration might not work correctly with the named-chroot service after upgrading When you use the named-chroot service and have your own manually created configuration files in the /var/named/chroot/ directory, the service might not work properly on the target system after the upgrade to Red Hat Enterprise Linux 7. The options section in the configuration files that are used must contain the session-keyfile and pid-file directives, such as in the following example: The Preupgrade Assistant modules do not check or fix the manually created files in the /var/named/chroot/ directory. To work around this problem, manually insert these directives into the options section. If you do not have your own manually created configuration files in /var/named/chroot/ , the configuration files of bind , including the /etc/named.conf file, are used. These configuration files are checked and fixed by the Preupgrade Assistant modules. (BZ# 1473233 )
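The Dovecot workaround described above amounts to a single line in the mail configuration; a minimal sketch, assuming the stock /etc/dovecot/conf.d/10-mail.conf layout:

# /etc/dovecot/conf.d/10-mail.conf
# Restore the Red Hat Enterprise Linux 6 default so that existing users
# with UIDs of 500 and above can still log in after the upgrade.
first_valid_uid = 500

For the named-chroot case, the two directives shown at the end of this chapter belong inside the options block of the manually created configuration file, for example:

options {
        // other existing options remain unchanged
        session-keyfile "/run/named/session.key";
        pid-file "/run/named/named.pid";
};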
[ "session-keyfile \"/run/named/session.key\"; pid-file \"/run/named/named.pid\";" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/known_issues_general_updates
Chapter 1. Overview
Chapter 1. Overview 1.1. Major changes in RHEL 8.5 Installer and image creation In RHEL 8.5, Image Builder supports the following features: Ability to customize the filesystem configuration. Ability to override the official repositories. Ability to create bootable installer images and install them to a bare metal system. For more information, see Section 4.1, "Installer and image creation" . RHEL for Edge RHEL 8.5 introduces the RHEL for Edge Simplified Installer image, which is optimized for unattended installation to a device and provisions the device with a RHEL for Edge image. For more information, see Section 4.2, "RHEL for Edge" . Security The system-wide cryptographic policies support scopes and wildcards for directives in custom policies. You can now enable different sets of algorithms for different back ends. The Rsyslog log processing application has been updated to version 8.2102.0-5. This update introduces, among other improvements, the OpenSSL network stream driver, which implements TLS-protected transport in Rsyslog using the OpenSSL library. The SCAP Security Guide project now includes several new profiles and improvements to existing profiles: A new profile aligned with the Australian Cyber Security Centre Information Security Manual (ACSC ISM). The Center for Internet Security (CIS) profile restructured into four different profiles (Workstation Level 1, Workstation Level 2, Server Level 1, Server Level 2). The Security Technical Implementation Guide (STIG) security profile updated to version V1R3. A new STIG profile compatible with Server with GUI installations. A new French National Security Agency (ANSSI) High Level profile, which completes the availability of profiles for all ANSSI-BP-028 v1.2 hardening levels in the SCAP Security Guide . With these enhancements, you can install a system that conforms to one of these security baselines and use the OpenSCAP suite for checking security compliance and remediation using the risk-based approach for security controls defined by the relevant authorities. See New features - Security for more information. The new RHEL VPN System Role makes it easier to set up secure and properly configured IPsec tunneling and virtual private networking (VPN) solutions on large numbers of hosts. For more information, see New Features - Red Hat Enterprise Linux System Roles . Networking NetworkManager now supports configuring a device to accept all traffic. You can configure this feature using, for example, the nmcli utility. The firewalld service supports forwarding traffic between different interfaces or sources within a zone. The firewalld service supports filtering traffic that is forwarded between zones. Dynamic programming languages, web and database servers Later versions of the following components are now available as new module streams: Ruby 3.0 nginx 1.20 Node.js 16 The following components have been upgraded: PHP to version 7.4.19 Squid to version 4.15 Mutt to version 2.0.7 See New features - Dynamic programming languages, web and database servers for more information. Compilers and development tools The following compiler toolsets have been updated: GCC Toolset 11 LLVM Toolset 12.0.1 Rust Toolset 1.54.0 Go Toolset 1.16.7 See New features - Compilers and development tools for more information. OpenJDK updates Open Java Development Kit 17 (OpenJDK 17) is now available. For more information about the features introduced in this release and changes in the existing functionality, see OpenJDK documentation . 
OpenJDK 11 has been updated to version 11.0.13. For more information about the features introduced in this release and changes in the existing functionality, see OpenJDK documentation . OpenJDK 8 has been updated to version 8.0.312. For more information about the features introduced in this release and changes in the existing functionality, see OpenJDK documentation . Red Hat Enterprise Linux System Roles The Postfix RHEL System Role is fully supported. The Network Time Security (NTS) option is now added to the Timesync RHEL System Role . The Storage RHEL System Role now supports LVM VDO volumes and expresses volume sizes as a percentage. The new RHEL VPN System Role makes it easier to set up secure and properly configured IPsec tunneling and virtual private networking (VPN) solutions on large numbers of hosts. High Availability Cluster RHEL System Role is available as a Technology Preview for the 8.5 GA Release. See New features - Red Hat Enterprise Linux System Roles and Technology Previews - Red Hat Enterprise Linux System Roles for more information. 1.2. In-place upgrade and OS conversion In-place upgrade from RHEL 7 to RHEL 8 The supported in-place upgrade paths currently are: From RHEL 7.9 to RHEL 8.4 on the 64-bit Intel, IBM POWER 8 (little endian), and IBM Z architectures From RHEL 7.6 to RHEL 8.4 on architectures that require kernel version 4.14: IBM POWER 9 (little endian) and IBM Z (Structure A). This is the final in-place upgrade path for these architectures. From RHEL 7.7 to RHEL 8.2 on systems with SAP HANA. To ensure your system with SAP HANA remains supported after upgrading to RHEL 8.2, enable the RHEL 8.2 Update Services for SAP Solutions (E4S) repositories. To ensure your system remains supported after upgrading to RHEL 8.4, either update to the latest RHEL 8.5 version or ensure that the RHEL 8.4 Extended Update Support (EUS) repositories have been enabled. On systems with SAP HANA, enable the RHEL 8.2 Update Services for SAP Solutions (E4S) repositories. For more information, see Supported in-place upgrade paths for Red Hat Enterprise Linux . For instructions on performing an in-place upgrade, see Upgrading from RHEL 7 to RHEL 8 . For instructions on performing an in-place upgrade on systems with SAP environments, see How to in-place upgrade SAP environments from RHEL 7 to RHEL 8 . Notable enhancements include: It is now possible to perform an in-place upgrade with SAP HANA on Pay-As-You-Go instances on AWS with Red Hat Update Infrastructure (RHUI). It is now possible to enable EUS or E4S repositories during the in-place upgrade. The Leapp utility can now be installed using the yum install leapp-upgrade command. As part of this change, the leapp-repository and leapp-repository-deps RPM packages have been renamed leapp-upgrade-el7toel8 and leapp-upgrade-el7toel8-deps respectively. If the old packages are already installed on your system, they will be automatically replaced by the new packages when you run yum update . Leapp reports, logs, and other generated documentation are in English, regardless of the language configuration. After the upgrade, leftover Leapp packages must be manually removed from the exclude list in the /etc/dnf/dnf.conf configuration file before they can be removed from the system. The repomap.csv file, which is located in the leapp-data15.tar.gz archive, has been deprecated and has been replaced with the repomap.json file. The deprecated file will remain available until March 2022. 
The IBM POWER 9 (little endian) and IBM Z (Structure A) architectures have reached end of life. Subsequent releases to the in-place upgrade, including new upgrade paths, features, and bug fixes, will not include these architectures. In-place upgrade from RHEL 6 to RHEL 8 To upgrade from RHEL 6.10 to RHEL 8.4, follow instructions in Upgrading from RHEL 6 to RHEL 8 . Conversion from a different Linux distribution to RHEL If you are using CentOS Linux 8 or Oracle Linux 8, you can convert your operating system to RHEL 8 using the Red Hat-supported Convert2RHEL utility. For more information, see Converting from an RPM-based Linux distribution to RHEL . If you are using an earlier version of CentOS Linux or Oracle Linux, namely versions 6 or 7, you can convert your operating system to RHEL and then perform an in-place upgrade to RHEL 8. Note that CentOS Linux 6 and Oracle Linux 6 conversions use the unsupported Convert2RHEL utility. For more information on unsupported conversions, see How to perform an unsupported conversion from a RHEL-derived Linux distribution to RHEL . For information regarding how Red Hat supports conversions from other Linux distributions to RHEL, see the Convert2RHEL Support Policy document . 1.3. Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Registration Assistant Product Life Cycle Checker Kickstart Generator Kickstart Converter Red Hat Enterprise Linux Upgrade Helper Red Hat Satellite Upgrade Helper Red Hat Code Browser JVM Options Configuration Tool Red Hat CVE Checker Red Hat Product Certificates Load Balancer Configuration Tool Yum Repository Configuration Helper Red Hat Memory Analyzer Kernel Oops Analyzer Red Hat Product Errata Advisory Checker Red Hat Out of Memory Analyzer 1.4. Additional resources Capabilities and limits of Red Hat Enterprise Linux 8 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the Red Hat Enterprise Linux life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. The Package manifest document provides a package listing for RHEL 8. Major differences between RHEL 7 and RHEL 8 , including removed functionality, are documented in Considerations in adopting RHEL 8 . Instructions on how to perform an in-place upgrade from RHEL 7 to RHEL 8 are provided by the document Upgrading from RHEL 7 to RHEL 8 . The Red Hat Insights service, which enables you to proactively identify, examine, and resolve known technical issues, is now available with all RHEL subscriptions. For instructions on how to install the Red Hat Insights client and register your system to the service, see the Red Hat Insights Get Started page.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.5_release_notes/overview
Chapter 3. Configuring certificates
Chapter 3. Configuring certificates 3.1. Replacing the default ingress certificate 3.1.1. Understanding the default ingress certificate By default, OpenShift Container Platform uses the Ingress Operator to create an internal CA and issue a wildcard certificate that is valid for applications under the .apps sub-domain. Both the web console and CLI use this certificate as well. The internal infrastructure CA certificates are self-signed. While this process might be perceived as bad practice by some security or PKI teams, any risk here is minimal. The only clients that implicitly trust these certificates are other components within the cluster. Replacing the default wildcard certificate with one that is issued by a public CA already included in the CA bundle as provided by the container userspace allows external clients to connect securely to applications running under the .apps sub-domain. 3.1.2. Replacing the default ingress certificate You can replace the default ingress certificate for all applications under the .apps subdomain. After you replace the certificate, all applications, including the web console and CLI, will have encryption provided by specified certificate. Prerequisites You must have a wildcard certificate for the fully qualified .apps subdomain and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing *.apps.<clustername>.<domain> . The certificate file can contain one or more certificates in a chain. The wildcard certificate must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Copy the root CA certificate into an additional PEM format file. Verify that all certificates which include -----END CERTIFICATE----- also end with one carriage return after that line. Procedure Create a config map that includes only the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the root CA certificate file on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Create a secret that contains the wildcard certificate chain and key: USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-ingress 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the Ingress Controller configuration with the newly created secret: USD oc patch ingresscontroller.operator default \ --type=merge -p \ '{"spec":{"defaultCertificate": {"name": "<secret>"}}}' \ 1 -n openshift-ingress-operator 1 Replace <secret> with the name used for the secret in the step. Important To trigger the Ingress Operator to perform a rolling update, you must update the name of the secret. Because the kubelet automatically propagates changes to the secret in the volume mount, updating the secret contents does not trigger a rolling update. 
For more information, see this Red Hat Knowledgebase Solution . Additional resources Replacing the CA Bundle certificate Proxy certificate customization 3.2. Adding API server certificates The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. Clients outside of the cluster will not be able to verify the API server's certificate by default. This certificate can be replaced by one that is issued by a CA that clients trust. Note In hosted control plane clusters, you cannot replace self-signed certificates from the API. 3.2.1. Add an API server named certificate The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. You can add one or more alternative certificates that the API server will return based on the fully qualified domain name (FQDN) requested by the client, for example when a reverse proxy or load balancer is used. Prerequisites You must have a certificate for the FQDN and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing the FQDN. The certificate file can contain one or more certificates in a chain. The certificate for the API server FQDN must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Warning Do not provide a named certificate for the internal load balancer (host name api-int.<cluster_name>.<base_domain> ). Doing so will leave your cluster in a degraded state. Procedure Login to the new API as the kubeadmin user. USD oc login -u kubeadmin -p <password> https://FQDN:6443 Get the kubeconfig file. USD oc config view --flatten > kubeconfig-newapi Create a secret that contains the certificate chain and private key in the openshift-config namespace. USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-config 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the API server to reference the created secret. USD oc patch apiserver cluster \ --type=merge -p \ '{"spec":{"servingCerts": {"namedCertificates": [{"names": ["<FQDN>"], 1 "servingCertificate": {"name": "<secret>"}}]}}}' 2 1 Replace <FQDN> with the FQDN that the API server should provide the certificate for. Do not include the port number. 2 Replace <secret> with the name used for the secret in the step. Examine the apiserver/cluster object and confirm the secret is now referenced. USD oc get apiserver cluster -o yaml Example output ... spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret> ... Check the kube-apiserver operator, and verify that a new revision of the Kubernetes API server rolls out. It may take a minute for the operator to detect the configuration change and trigger a new deployment. While the new revision is rolling out, PROGRESSING will report True . 
USD oc get clusteroperators kube-apiserver Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.14.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Note A new revision of the Kubernetes API server only rolls out if the API server named certificate is added for the first time. When the API server named certificate is renewed, a new revision of the Kubernetes API server does not roll out because the kube-apiserver pods dynamically reload the updated certificate. 3.3. Securing service traffic using service serving certificate secrets 3.3.1. Understanding service serving certificates Service serving certificates are intended to support complex middleware applications that require encryption. These certificates are issued as TLS web server certificates. The service-ca controller uses the x509.SHA256WithRSA signature algorithm to generate service certificates. The generated certificate and key are in PEM format, stored in tls.crt and tls.key respectively, within a created secret. The certificate and key are automatically replaced when they get close to expiration. The service CA certificate, which issues the service certificates, is valid for 26 months and is automatically rotated when there is less than 13 months validity left. After rotation, the service CA configuration is still trusted until its expiration. This allows a grace period for all affected services to refresh their key material before the expiration. If you do not upgrade your cluster during this grace period, which restarts services and refreshes their key material, you might need to manually restart services to avoid failures after the service CA expires. Note You can use the following command to manually restart all pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done 3.3.2. Add a service certificate To secure communication to your service, generate a signed serving certificate and key pair into a secret in the same namespace as the service. The generated certificate is only valid for the internal service DNS name <service.name>.<service.namespace>.svc , and is only valid for internal communications. If your service is a headless service (no clusterIP value set), the generated certificate also contains a wildcard subject in the format of *.<service.name>.<service.namespace>.svc . Important Because the generated certificates contain wildcard subjects for headless services, you must not use the service CA if your client must differentiate between individual pods. In this case: Generate individual TLS certificates by using a different CA. Do not accept the service CA as a trusted CA for connections that are directed to individual pods and must not be impersonated by other pods. These connections must be configured to trust the CA that was used to generate the individual TLS certificates. Prerequisites You must have a service defined. 
Procedure Annotate the service with service.beta.openshift.io/serving-cert-secret-name : USD oc annotate service <service_name> \ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2 1 Replace <service_name> with the name of the service to secure. 2 <secret_name> will be the name of the generated secret containing the certificate and key pair. For convenience, it is recommended that this be the same as <service_name> . For example, use the following command to annotate the service test1 : USD oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1 Examine the service to confirm that the annotations are present: USD oc describe service <service_name> Example output ... Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837 ... After the cluster generates a secret for your service, your Pod spec can mount it, and the pod will run after it becomes available. Additional resources You can use a service certificate to configure a secure route using reencrypt TLS termination. For more information, see Creating a re-encrypt route with a custom certificate . 3.3.3. Add the service CA bundle to a config map A pod can access the service CA certificate by mounting a ConfigMap object that is annotated with service.beta.openshift.io/inject-cabundle=true . Once annotated, the cluster automatically injects the service CA certificate into the service-ca.crt key on the config map. Access to this CA certificate allows TLS clients to verify connections to services using service serving certificates. Important After adding this annotation to a config map all existing data in it is deleted. It is recommended to use a separate config map to contain the service-ca.crt , instead of using the same config map that stores your pod configuration. Procedure Annotate the config map with service.beta.openshift.io/inject-cabundle=true : USD oc annotate configmap <config_map_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <config_map_name> with the name of the config map to annotate. Note Explicitly referencing the service-ca.crt key in a volume mount will prevent a pod from starting until the config map has been injected with the CA bundle. This behavior can be overridden by setting the optional field to true for the volume's serving certificate configuration. For example, use the following command to annotate the config map test1 : USD oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true View the config map to ensure that the service CA bundle has been injected: USD oc get configmap <config_map_name> -o yaml The CA bundle is displayed as the value of the service-ca.crt key in the YAML output: apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE----- ... 3.3.4. Add the service CA bundle to an API service You can annotate an APIService object with service.beta.openshift.io/inject-cabundle=true to have its spec.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Procedure Annotate the API service with service.beta.openshift.io/inject-cabundle=true : USD oc annotate apiservice <api_service_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <api_service_name> with the name of the API service to annotate. 
For example, use the following command to annotate the API service test1 : USD oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true View the API service to ensure that the service CA bundle has been injected: USD oc get apiservice <api_service_name> -o yaml The CA bundle is displayed in the spec.caBundle field in the YAML output: apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: caBundle: <CA_BUNDLE> ... 3.3.5. Add the service CA bundle to a custom resource definition You can annotate a CustomResourceDefinition (CRD) object with service.beta.openshift.io/inject-cabundle=true to have its spec.conversion.webhook.clientConfig.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note The service CA bundle will only be injected into the CRD if the CRD is configured to use a webhook for conversion. It is only useful to inject the service CA bundle if a CRD's webhook is secured with a service CA certificate. Procedure Annotate the CRD with service.beta.openshift.io/inject-cabundle=true : USD oc annotate crd <crd_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <crd_name> with the name of the CRD to annotate. For example, use the following command to annotate the CRD test1 : USD oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true View the CRD to ensure that the service CA bundle has been injected: USD oc get crd <crd_name> -o yaml The CA bundle is displayed in the spec.conversion.webhook.clientConfig.caBundle field in the YAML output: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE> ... 3.3.6. Add the service CA bundle to a mutating webhook configuration You can annotate a MutatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the mutating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <mutating_webhook_name> with the name of the mutating webhook configuration to annotate. For example, use the following command to annotate the mutating webhook configuration test1 : USD oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the mutating webhook configuration to ensure that the service CA bundle has been injected: USD oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 
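After annotating a webhook configuration, you can optionally confirm that the injected bundle decodes to a valid certificate. The following pipeline is only a sketch of one way to do this; it assumes the standard oc, base64, and openssl client tools and the example configuration named test1.

# Extract the injected CA bundle, decode it, and print the certificate subject and expiry
oc get mutatingwebhookconfigurations test1 \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}' \
  | base64 --decode \
  | openssl x509 -noout -subject -enddate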
3.3.7. Add the service CA bundle to a validating webhook configuration You can annotate a ValidatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the validating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate validatingwebhookconfigurations <validating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <validating_webhook_name> with the name of the validating webhook configuration to annotate. For example, use the following command to annotate the validating webhook configuration test1 : USD oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the validating webhook configuration to ensure that the service CA bundle has been injected: USD oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 3.3.8. Manually rotate the generated service certificate You can rotate the service certificate by deleting the associated secret. Deleting the secret results in a new one being automatically created, resulting in a new certificate. Prerequisites A secret containing the certificate and key pair must have been generated for the service. Procedure Examine the service to determine the secret containing the certificate. This is found in the serving-cert-secret-name annotation, as seen below. USD oc describe service <service_name> Example output ... service.beta.openshift.io/serving-cert-secret-name: <secret> ... Delete the generated secret for the service. This process will automatically recreate the secret. USD oc delete secret <secret> 1 1 Replace <secret> with the name of the secret from the step. Confirm that the certificate has been recreated by obtaining the new secret and examining the AGE . USD oc get secret <service_name> Example output NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s 3.3.9. Manually rotate the service CA certificate The service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA by using the following procedure. Warning A manually-rotated service CA does not maintain trust with the service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. Prerequisites You must be logged in as a cluster admin. Procedure View the expiration date of the current service CA certificate by using the following command. USD oc get secrets/signing-key -n openshift-service-ca \ -o template='{{index .data "tls.crt"}}' \ | base64 --decode \ | openssl x509 -noout -enddate Manually rotate the service CA. 
This process generates a new service CA which will be used to sign the new service certificates. USD oc delete secret/signing-key -n openshift-service-ca To apply the new certificates to all services, restart all the pods in your cluster. This command ensures that all services use the updated certificates. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Warning This command will cause a service interruption, as it goes through and deletes every running pod in every namespace. These pods will automatically restart after they are deleted. 3.4. Updating the CA bundle 3.4.1. Understanding the CA Bundle certificate Proxy certificates allow users to specify one or more custom certificate authority (CA) used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 3.4.2. Replacing the CA Bundle certificate Procedure Create a config map that includes the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the CA certificate bundle on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Additional resources Replacing the default ingress certificate Enabling the cluster-wide proxy Proxy certificate customization
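To confirm that the proxy validator has picked up a custom bundle, you can inspect the objects named above. The following commands are a sketch, assuming the custom-ca config map created in the procedure:

# Verify that the cluster-wide proxy references the custom CA config map
oc get proxy/cluster -o jsonpath='{.spec.trustedCA.name}'

# Inspect the merged bundle that the validator writes to openshift-config-managed
oc get configmap trusted-ca-bundle -n openshift-config-managed -o yaml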
[ "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress", "oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator", "oc login -u kubeadmin -p <password> https://FQDN:6443", "oc config view --flatten > kubeconfig-newapi", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config", "oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2", "oc get apiserver cluster -o yaml", "spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.14.0 True False False 145m", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2", "oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1", "oc describe service <service_name>", "Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837", "oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true", "oc get configmap <config_map_name> -o yaml", "apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----", "oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true", "oc get apiservice <api_service_name> -o yaml", "apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>", "oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true", "oc get crd <crd_name> -o yaml", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>", "oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true", "oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate validatingwebhookconfigurations test1 
service.beta.openshift.io/inject-cabundle=true", "oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc describe service <service_name>", "service.beta.openshift.io/serving-cert-secret-name: <secret>", "oc delete secret <secret> 1", "oc get secret <service_name>", "NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s", "oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate", "oc delete secret/signing-key -n openshift-service-ca", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----", "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_and_compliance/configuring-certificates
2.5. Red Hat Enterprise Linux-Specific Information
2.5. Red Hat Enterprise Linux-Specific Information Red Hat Enterprise Linux comes with a variety of resource monitoring tools. While there are more than those listed here, these tools are representative in terms of functionality. The tools are: free top (and GNOME System Monitor , a more graphically oriented version of top ) vmstat The Sysstat suite of resource monitoring tools The OProfile system-wide profiler Let us examine each one in more detail. 2.5.1. free The free command displays system memory utilization. Here is an example of its output: The Mem: row displays physical memory utilization, the Swap: row displays the utilization of the system swap space, and the -/+ buffers/cache: row displays physical memory usage adjusted to exclude the memory devoted to system buffers and cache, which gives a clearer indication of how much memory applications are actually using. Since free by default only displays memory utilization information once, it is only useful for very short-term monitoring, or for quickly determining whether a memory-related problem is currently in progress. Although free has the ability to repetitively display memory utilization figures via its -s option, the output scrolls, making it difficult to easily detect changes in memory utilization. Note A better solution than using free -s would be to run free using the watch command. For example, to display memory utilization every two seconds (the default display interval for watch ), use this command: The watch command issues the free command every two seconds, updating by clearing the screen and writing the new output to the same screen location. This makes it much easier to determine how memory utilization changes over time, since watch creates a single updated view with no scrolling. You can control the delay between updates by using the -n option, and can cause any changes between updates to be highlighted by using the -d option, as in the following command: For more information, refer to the watch man page. The watch command runs until interrupted with Ctrl + C . The watch command is something to keep in mind; it can come in handy in many situations.
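As an alternative to the watch-based commands listed below, free can repeat its own output. A simple sketch of the -s option mentioned above, which prints a new report every two seconds until interrupted with Ctrl + C:

free -s 2

Because this output scrolls, the watch invocations described above are usually easier to read over time.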
[ "total used free shared buffers cached Mem: 255508 240268 15240 0 7592 86188 -/+ buffers/cache: 146488 109020 Swap: 530136 26268 503868", "watch free", "watch -n 1 -d free" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-resource-rhlspec
Appendix B. Contact information
Appendix B. Contact information Red Hat Process Automation Manager documentation team: [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/author-group
1.3. JBoss Data Virtualization and ODBC
1.3. JBoss Data Virtualization and ODBC To learn how to configure ODBC for Red Hat JBoss Data Virtualization for Red Hat Enterprise Linux and Microsoft Windows, please refer to the Installation Guide.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/jboss_data_virtualization_and_odbc
Chapter 83. Using IdM user vaults: storing and retrieving secrets
Chapter 83. Using IdM user vaults: storing and retrieving secrets This chapter describes how to use user vaults in Identity Management. Specifically, it describes how a user can store a secret in an IdM vault and how the user can retrieve it. The user can store and retrieve the secret from two different IdM clients. Prerequisites The Key Recovery Authority (KRA) Certificate System component has been installed on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM. 83.1. Storing a secret in a user vault Follow this procedure to create a vault container with one or more private vaults to securely store files with sensitive information. In the example used in the procedure below, the idm_user user creates a vault of the standard type. The standard vault type ensures that idm_user is not required to authenticate when accessing the file. idm_user can retrieve the file from any IdM client to which the user is logged in. In the procedure: idm_user is the user who wants to create the vault. my_vault is the vault used to store the user's certificate. The vault type is standard, so that accessing the archived certificate does not require the user to provide a vault password. secret.txt is the file containing the certificate that the user wants to store in the vault. Prerequisites You know the password of idm_user. You are logged in to a host that is an IdM client. Procedure Obtain the Kerberos ticket granting ticket (TGT) for idm_user: Use the ipa vault-add command with the --type standard option to create a standard vault: Important Make sure the first user vault for a user is created by that same user. Creating the first vault for a user also creates the user's vault container, and the user who performs the creation becomes the owner of the vault container. For example, if another user, such as admin, creates the first user vault for user1, the owner of the user's vault container will also be admin, and user1 will be unable to access the user vault or create new user vaults. Use the ipa vault-archive command with the --in option to archive the secret.txt file into the vault: 83.2. Retrieving a secret from a user vault As an Identity Management (IdM) user, you can retrieve a secret from your user private vault onto any IdM client to which you are logged in. Follow this procedure to retrieve, as an IdM user named idm_user, a secret from the user private vault named my_vault onto idm_client.idm.example.com. Prerequisites idm_user is the owner of my_vault. idm_user has archived a secret in the vault. my_vault is a standard vault, which means that idm_user does not have to enter any password to access the contents of the vault. Procedure SSH to idm_client as idm_user: Log in as idm_user: Use the ipa vault-retrieve command with the --out option to retrieve the contents of the vault and save them into the secret_exported.txt file. 83.3. Additional resources See Using Ansible to manage IdM user vaults: storing and retrieving secrets.
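Taken together, the storing and retrieving procedures in this chapter reduce to a short command sequence. The consolidated sketch below reuses the example names from the procedures (idm_user, my_vault, secret.txt, secret_exported.txt, and idm_client.idm.example.com) and assumes the KRA prerequisite is already met:

# On the first IdM client: authenticate, create the standard vault,
# and archive the secret file into it.
kinit idm_user
ipa vault-add my_vault --type standard
ipa vault-archive my_vault --in secret.txt

# On a second IdM client: log in, authenticate, and retrieve the secret.
ssh idm_user@idm_client.idm.example.com
kinit idm_user
ipa vault-retrieve my_vault --out secret_exported.txt

Because my_vault is a standard vault, no vault password is requested at any point; only the Kerberos authentication via kinit is required.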
[ "kinit idm_user", "ipa vault-add my_vault --type standard ---------------------- Added vault \"my_vault\" ---------------------- Vault name: my_vault Type: standard Owner users: idm_user Vault user: idm_user", "ipa vault-archive my_vault --in secret.txt ----------------------------------- Archived data into vault \"my_vault\" -----------------------------------", "ssh idm_user@idm_client.idm.example.com", "kinit user", "ipa vault-retrieve my_vault --out secret_exported.txt -------------------------------------- Retrieved data from vault \"my_vault\" --------------------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/using-idm-user-vaults-storing-and-retrieving-secrets_configuring-and-managing-idm
15.8. Updating a Self-Hosted Engine
15.8. Updating a Self-Hosted Engine To update a self-hosted engine from your current version of 4.3 to the latest version of 4.3, you must place the environment in global maintenance mode and then follow the standard procedure for updating between minor versions. Enabling Global Maintenance Mode You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Manager virtual machine. Procedure Log in to one of the self-hosted engine nodes and enable global maintenance mode: Confirm that the environment is in maintenance mode before proceeding: You should see a message indicating that the cluster is in maintenance mode. Updating the Red Hat Virtualization Manager Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure Log in to the Manager virtual machine. Check if updated packages are available: Update the setup packages: Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service. When the script completes successfully, the following message appears: Note The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and they might not be up to date if engine-config was used to update the configuration after installation. For example, if engine-config was used to set SANWipeAfterDelete to true after installation, engine-setup outputs "Default SAN wipe after delete: False" in the configuration preview. However, engine-setup does not overwrite the updated values. Important The update process might take some time. Do not stop the process before it completes. Update the base operating system and any optional packages installed on the Manager: Important If any kernel packages were updated, disable global maintenance mode and reboot the machine to complete the update. Disabling Global Maintenance Mode Procedure Log in to the Manager virtual machine and shut it down. Log in to one of the self-hosted engine nodes and disable global maintenance mode: When you exit global maintenance mode, ovirt-ha-agent starts the Manager virtual machine, and then the Manager automatically starts. It can take up to ten minutes for the Manager to start. Confirm that the environment is running: The listed information includes Engine status. The value for Engine status should be: Note When the virtual machine is still booting and the Manager has not started yet, the Engine status is: If this happens, wait a few minutes and try again.
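As a quick reference, the update flow described above condenses to the command sequence below. The grouping by host follows the procedures in this section; treat it as a summary sketch rather than a replacement for the full procedure, and remember that a reboot of the Manager machine is required only if kernel packages were updated:

# On a self-hosted engine node: enable global maintenance mode and verify it.
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-status

# On the Manager virtual machine: check for updates, update the setup
# packages, run the upgrade, then update the base operating system.
engine-upgrade-check
yum update ovirt\*setup\* rh\*vm-setup-plugins
engine-setup
yum update

# On a self-hosted engine node: disable global maintenance mode and
# confirm that the environment is running again (the documented procedure
# shuts down the Manager virtual machine before this step).
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status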
[ "hosted-engine --set-maintenance --mode=global", "hosted-engine --vm-status", "engine-upgrade-check", "yum update ovirt\\*setup\\* rh\\*vm-setup-plugins", "engine-setup", "Execution of setup completed successfully", "yum update", "hosted-engine --set-maintenance --mode=none", "hosted-engine --vm-status", "{\"health\": \"good\", \"vm\": \"up\", \"detail\": \"Up\"}", "{\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Powering up\"}" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/updating_a_self-hosted_engine_she_admin
Machine management
Machine management OpenShift Container Platform 4.9 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m4.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "providerSpec: value: spotMarketOptions: {}", "providerSpec: placement: tenancy: dedicated", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: 
<infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 12 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "az vm image list --all --offer rh-ocp-worker --publisher redhat 
-o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 4.8.2021122100", "providerSpec: value: spotVMOptions: {}", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o 
jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "providerSpec: value: preemptible: true", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "providerSpec: value: # disks: - type: # encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: 
<rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> configDrive: true 9", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - 
subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data configDrive: True", "networks: - subnets: - uuid: <machines_subnet_UUID> portSecurityEnabled: false portSecurityEnabled: false securityGroups: []", "openstack port set --enable-port-security --security-group <infrastructure_id>-<node_role> <main_port_ID>", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: 
<memory_size> 21 os_disk: 22 size_gb: <disk_size> 23 network_interfaces: 24 vnic_profile_id: <vnic_profile_id> 25 credentialsSecret: name: ovirt-credentials 26 kind: OvirtMachineProviderSpec type: <workload_type> 27 auto_pinning_policy: <auto_pinning_policy> 28 hugepages: <hugepages> 29 affinityGroupsNames: - compute 30 userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get infrastructure 
cluster -o jsonpath='{.status.infrastructureName}'", "oc get secret -n openshift-machine-api vsphere-cloud-credentials -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>", "oc create secret generic vsphere-cloud-credentials -n openshift-machine-api --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>", "oc get secret -n openshift-machine-api worker-user-data -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "disableTemplating: false userData: 1 { \"ignition\": { }, }", "oc create secret generic worker-user-data -n openshift-machine-api --from-file=<installation_directory>/worker.ign", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" numCPUs: 4 numCoresPerSocket: 4 snapshot: \"\" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_datacenter_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"", "oc adm cordon <node_name> oc adm drain <node_name>", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> 
-n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc edit machineset <machineset> -n openshift-machine-api", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{\"\\n\"}' machineset -A", "oc get machineset -o yaml", "oc delete machineset <machineset-name>", "oc get nodes", "oc get machine -n openshift-machine-api", "oc delete machine <machine> -n openshift-machine-api", "apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15", "oc create -f <filename>.yaml 1", "apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6", "oc create -f <filename>.yaml 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m4.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 13 region: <region> 14 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 15 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 16 tags: - name: kubernetes.io/cluster/<infrastructure_id> 17 value: owned userDataSecret: name: worker-user-data", "oc get -o 
jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-worker-<zone>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 12 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: 
<custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 6 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: 
ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 os_disk: 22 size_gb: <disk_size> 23 network_interfaces: 24 vnic_profile_id: <vnic_profile_id> 25 credentialsSecret: name: ovirt-credentials 26 kind: OvirtMachineProviderSpec type: <workload_type> 27 auto_pinning_policy: <auto_pinning_policy> 28 hugepages: <hugepages> 29 affinityGroupsNames: - compute 30 userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m 
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT 
DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4", "spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.22.1", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - 
openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 
resources: null type: kibana", "oc get pod kibana-5b8bdf44f9-ccpq9 -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.22.1", "oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml", "kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s", "oc get pod kibana-7d85dcffc8-bfpfp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s", "aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.4*\" \\ 3 --region us-east-1 \\ 4 --output table 5", "------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | 
ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-ansible-2.9-rpms\" --enable=\"rhel-7-server-ose-4.9-rpms\"", "yum install openshift-ansible openshift-clients jq", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-fast-datapath-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-optional-rpms\" --enable=\"rhel-7-server-ose-4.9-rpms\"", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.9-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "oc get nodes -o wide", "oc adm cordon <node_name> 1", "oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1", "oc delete nodes <node_name> 1", "oc get nodes -o wide", "aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.4*\" \\ 3 --region us-east-1 \\ 4 --output table 5", "------------------------------------------------------------------------------------------------------------ | DescribeImages | 
+---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-fast-datapath-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-optional-rpms\" --enable=\"rhel-7-server-ose-4.9-rpms\"", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.9-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com [new_workers] mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "aws cloudformation describe-stacks --stack-name <name>", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m 
system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 
15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc apply -f healthcheck.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 6 status: \"False\" - type: \"Ready\" timeout: \"300s\" 7 status: \"Unknown\" maxUnhealthy: \"40%\" 8 nodeStartupTimeout: \"10m\" 9" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/machine_management/index
Chapter 5. Installing a cluster on vSphere using the Agent-based Installer
Chapter 5. Installing a cluster on vSphere using the Agent-based Installer The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster with an available release image. 5.1. Additional resources Preparing to install with the Agent-based Installer
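A minimal sketch of generating that ISO with the installer subcommand described above; the assets directory is a placeholder and is assumed to already contain install-config.yaml and agent-config.yaml:

# Generate the bootable agent ISO from the configuration files in <assets_dir> (placeholder)
openshift-install agent create image --dir <assets_dir>

The resulting ISO is then booted on the on-premises servers in whatever way the environment allows.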
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_vsphere/installing-vsphere-agent-based-installer
Chapter 13. What huge pages do and how they are consumed by applications
Chapter 13. What huge pages do and how they are consumed by applications 13.1. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to use (or may recommend) pre-allocated huge pages instead of THP. In OpenShift Container Platform, applications in a pod can allocate and consume pre-allocated huge pages. 13.2. How huge pages are consumed by apps Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size. Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size> , where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi . Unlike CPU or memory, huge pages do not support over-commitment. apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: "1Gi" cpu: "1" volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly. Allocating huge pages of a specific size Some platforms support multiple huge page sizes. To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size> . The <size> value must be specified in bytes with an optional scale suffix [ kKmMgG ]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter. Huge page requirements Huge page requests must equal the limits.
This is the default if limits are specified, but requests are not. Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration. EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request. Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches proc/sys/vm/hugetlb_shm_group . 13.3. Consuming huge pages resources using the Downward API You can use the Downward API to inject information about the huge pages resources that are consumed by a container. You can inject the resource allocation as environment variables, a volume plugin, or both. Applications that you develop and run in the container can determine the resources that are available by reading the environment variables or files in the specified volumes. Procedure Create a hugepages-volume-pod.yaml file that is similar to the following example: apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- labels: app: hugepages-example spec: containers: - securityContext: capabilities: add: [ "IPC_LOCK" ] image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage - mountPath: /etc/podinfo name: podinfo resources: limits: hugepages-1Gi: 2Gi memory: "1Gi" cpu: "1" requests: hugepages-1Gi: 2Gi env: - name: REQUESTS_HUGEPAGES_1GI <.> valueFrom: resourceFieldRef: containerName: example resource: requests.hugepages-1Gi volumes: - name: hugepage emptyDir: medium: HugePages - name: podinfo downwardAPI: items: - path: "hugepages_1G_request" <.> resourceFieldRef: containerName: example resource: requests.hugepages-1Gi divisor: 1Gi <.> Specifies to read the resource use from requests.hugepages-1Gi and expose the value as the REQUESTS_HUGEPAGES_1GI environment variable. <.> Specifies to read the resource use from requests.hugepages-1Gi and expose the value as the file /etc/podinfo/hugepages_1G_request . Create the pod from the hugepages-volume-pod.yaml file: USD oc create -f hugepages-volume-pod.yaml Verification Check the value of the REQUESTS_HUGEPAGES_1GI environment variable: USD oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') \ -- env | grep REQUESTS_HUGEPAGES_1GI Example output REQUESTS_HUGEPAGES_1GI=2147483648 Check the value of the /etc/podinfo/hugepages_1G_request file: USD oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') \ -- cat /etc/podinfo/hugepages_1G_request Example output 2 Additional resources Allowing containers to consume Downward API objects 13.4. Configuring huge pages at boot time Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes. Procedure To minimize node reboots, the order of the steps below needs to be followed: Label all nodes that need the same huge pages setting by a label. 
USD oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp= Create a file with the following content and name it hugepages-tuned-boottime.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: "worker-hp" priority: 30 profile: openshift-node-hugepages 1 Set the name of the Tuned resource to hugepages . 2 Set the profile section to allocate huge pages. 3 Note the order of parameters is important as some platforms support huge pages of various sizes. 4 Enable machine config pool based matching. Create the Tuned hugepages object USD oc create -f hugepages-tuned-boottime.yaml Create a file with the following content and name it hugepages-mcp.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: "" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: "" Create the machine config pool: USD oc create -f hugepages-mcp.yaml Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated. USD oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}" 100Mi Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. 13.5. Disabling Transparent Huge Pages Transparent Huge Pages (THP) attempt to automate most aspects of creating, managing, and using huge pages. Since THP automatically manages the huge pages, this is not always handled optimally for all types of workloads. THP can lead to performance regressions, since many applications handle huge pages on their own. Therefore, consider disabling THP. The following steps describe how to disable THP using the Node Tuning Operator (NTO). Procedure Create a file with the following content and name it thp-disable-tuned.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom tuned profile for OpenShift to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 25 profile: openshift-thp-never-worker Create the Tuned object: USD oc create -f thp-disable-tuned.yaml Check the list of active profiles: USD oc get profile -n openshift-cluster-node-tuning-operator Verification Log in to one of the nodes and do a regular THP check to verify if the nodes applied the profile successfully: USD cat /sys/kernel/mm/transparent_hugepage/enabled Example output always madvise [never]
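Relating to the hugepagesz and default_hugepagesz boot parameters described in section 13.2 above, the following is a generic kernel command-line sketch with illustrative values; on OpenShift Container Platform nodes these arguments are normally delivered through the TuneD bootloader plugin shown above or through a MachineConfig rather than edited by hand:

# Reserve 16 x 1Gi huge pages and make 1Gi the default huge page size (illustrative values)
default_hugepagesz=1G hugepagesz=1G hugepages=16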
[ "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- labels: app: hugepages-example spec: containers: - securityContext: capabilities: add: [ \"IPC_LOCK\" ] image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage - mountPath: /etc/podinfo name: podinfo resources: limits: hugepages-1Gi: 2Gi memory: \"1Gi\" cpu: \"1\" requests: hugepages-1Gi: 2Gi env: - name: REQUESTS_HUGEPAGES_1GI <.> valueFrom: resourceFieldRef: containerName: example resource: requests.hugepages-1Gi volumes: - name: hugepage emptyDir: medium: HugePages - name: podinfo downwardAPI: items: - path: \"hugepages_1G_request\" <.> resourceFieldRef: containerName: example resource: requests.hugepages-1Gi divisor: 1Gi", "oc create -f hugepages-volume-pod.yaml", "oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- env | grep REQUESTS_HUGEPAGES_1GI", "REQUESTS_HUGEPAGES_1GI=2147483648", "oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- cat /etc/podinfo/hugepages_1G_request", "2", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom tuned profile for OpenShift to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 25 profile: openshift-thp-never-worker", "oc create -f thp-disable-tuned.yaml", "oc get profile -n openshift-cluster-node-tuning-operator", "cat /sys/kernel/mm/transparent_hugepage/enabled", "always madvise [never]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed
Chapter 3. GSettings and dconf
Chapter 3. GSettings and dconf One of the major changes in Red Hat Enterprise Linux 7 is the transition from GConf (for storing user preferences) to the combination of the GSettings high-level configuration system and the dconf back end. GConf As mentioned above, the GConf configuration system has been replaced by two systems: the GSettings API, and the dconf back end, which serves as a low-level configuration system that stores settings in a single compact binary database. Both the gsettings command-line tool and the dconf utility are used to view and change user settings. The gsettings utility works directly in the terminal, while the dconf-editor GUI can be used to browse and edit the configuration database. See Chapter 9, Configuring Desktop with GSettings and dconf for more information on dconf-editor and the gsettings utility. gconftool The gconftool-2 tool has been replaced by gsettings and dconf. Likewise, gconf-editor has been replaced by dconf-editor. Overriding The concept of keyfiles has been introduced in Red Hat Enterprise Linux 7: the dconf utility allows the system administrator to override the default settings by directly installing defaults overrides. For example, setting the default background for all users is now done by placing a dconf override in a keyfile in a keyfile directory, such as /etc/dconf/db/local.d/. To learn more about default values and overriding, see Section 9.5, "Configuring Custom Default Values" . Locking the Settings The dconf system now allows individual settings or entire settings subpaths to be locked down to prevent user customization. For more information on how to lock settings, see Section 9.5.1, "Locking Down Specific Settings" . NFS and dconf Using the dconf utility on home directories shared over NFS requires additional configuration. See Section 9.7, "Storing User Settings Over NFS" for information on this topic. Getting More Information See Chapter 9, Configuring Desktop with GSettings and dconf for more information on using GSettings and dconf to configure user settings.
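A minimal sketch of the kind of defaults override keyfile described above; the file name, schema path, and wallpaper path are illustrative:

# /etc/dconf/db/local.d/00-background (illustrative file name)
[org/gnome/desktop/background]
picture-uri='file:///usr/share/backgrounds/default.png'

After adding or changing keyfiles, run dconf update as root so that the binary database under /etc/dconf/db/ is rebuilt and the override takes effect.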
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/gsettings-dconf
7.2. Configuration changes
7.2. Configuration changes As well as configuring libvirt appropriately, virt-v2v will make certain changes to a virtual machine to enable it to run on a KVM hypervisor either with or without VirtIO drivers. These changes are specific to the guest operating system. The details specified here apply to supported Red Hat Enterprise Linux versions and Windows. 7.2.1. Configuration changes for Linux virtual machines Table 7.1. virt-v2v changes to Linux virtual machines Change Description Kernel Unbootable kernels (such as Xen paravirtualized kernels) will be uninstalled. No new kernel will be installed if there is a remaining kernel which supports VirtIO. If no remaining kernel supports VirtIO and the configuration file specifies a new kernel, it will be installed and configured as the default. X reconfiguration If the guest has X configured, its display driver will be updated. See Table 7.2, "Configured drivers in a Linux guest" for which driver will be used. Rename block devices If reconfiguration has caused block devices to change name, these changes will be reflected in /etc/fstab. Configure device drivers Whether VirtIO or non-VirtIO drivers are configured, virt-v2v will ensure that the correct network and block drivers are specified in the modprobe configuration. initrd virt-v2v will ensure that the initrd for the default kernel supports booting the root device, whether it is using VirtIO or not. SELinux virt-v2v will initiate a relabel of the guest on the next boot. This ensures that any changes it has made are correctly labeled according to the guest's local policy. virt-v2v will configure the following drivers in a Linux guest: Table 7.2. Configured drivers in a Linux guest Paravirtualized driver type Driver module Display cirrus Storage virtio_blk Network virtio_net In addition, initrd will preload the virtio_pci driver. If the guest cannot use VirtIO, virt-v2v configures the following drivers instead: Table 7.3. Configured drivers in a Linux guest (non-VirtIO) Driver type Driver Display cirrus Block Virtualized IDE Network Virtualized e1000
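As an illustration of the "Configure device drivers" row above, a converted guest that can use VirtIO typically ends up with module aliases along these lines; the exact file (/etc/modprobe.conf or a file under /etc/modprobe.d/) and alias names vary by guest release, so treat this as a hedged sketch rather than virt-v2v's literal output:

# Illustrative post-conversion module aliases for a VirtIO-capable guest
alias eth0 virtio_net
alias scsi_hostadapter virtio_blk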
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-V2V_Guide-References-Configuration_Changes
Chapter 1. Using GNBD with Red Hat GFS
Chapter 1. Using GNBD with Red Hat GFS GNBD (Global Network Block Device) provides block-level storage access over an Ethernet LAN. GNBD components run as a client in a GFS node and as a server in a GNBD server node. A GNBD server node exports block-level storage from its local storage (either directly attached storage or SAN storage) to a GFS node. Table 1.1, "GNBD Software Subsystem Components" summarizes the GNBD software subsystem components. Table 1.1. GNBD Software Subsystem Components Software Subsystem Components Description GNBD gnbd.ko Kernel module that implements the GNBD device driver on clients. gnbd_export Command to create, export, and manage GNBDs on a GNBD server. gnbd_import Command to import and manage GNBDs on a GNBD client. gnbd_serv A server daemon that allows a node to export local storage over the network. You can configure GNBD servers to work with device-mapper multipath. GNBD with device-mapper multipath allows you to configure multiple GNBD server nodes to provide redundant paths to the storage devices. The GNBD servers, in turn, present multiple storage paths to GFS nodes via redundant GNBDs. When using GNBD with device-mapper multipath, if a GNBD server node becomes unavailable, another GNBD server node can provide GFS nodes with access to storage devices. This document describes how to use GNBD with Red Hat GFS and consists of the following chapters: Chapter 2, Considerations for Using GNBD with Device-Mapper Multipath , which describes some of the issues you should take into account when configuring multipathed GNBD server nodes Chapter 3, GNBD Driver and Command Usage , which describes the user commands that configure GNBD Chapter 4, Running GFS on a GNBD Server Node , which describes the restrictions that apply when you are running GFS on a GNBD server node
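A hedged sketch of the export/import flow using the components in Table 1.1; the device path, export name, and server host name are placeholders:

# On the GNBD server node: start the server daemon and export a local block device
gnbd_serv
gnbd_export -d /dev/sdb1 -e my_export
# On a GFS node: import the GNBDs exported by that server
gnbd_import -i gnbd_server_hostname

The imported device typically appears under /dev/gnbd/ on the GFS node and can then be used like any other block device.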
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_network_block_device/ch-gnbd
Chapter 1. Image Service
Chapter 1. Image Service This chapter discusses the steps you can follow to manage images and storage in Red Hat OpenStack Platform. A virtual machine image is a file which contains a virtual disk which has a bootable operating system installed on it. Virtual machine images are supported in different formats. The following formats are available on Red Hat OpenStack Platform: RAW - Unstructured disk image format. QCOW2 - Disk format supported by QEMU emulator. This format includes QCOW2v3 (sometimes referred to as QCOW3), which requires QEMU 1.1 or higher. ISO - Sector-by-sector copy of the data on a disk, stored in a binary file. AKI - Indicates an Amazon Kernel Image. AMI - Indicates an Amazon Machine Image. ARI - Indicates an Amazon RAMDisk Image. VDI - Disk format supported by VirtualBox virtual machine monitor and the QEMU emulator. VHD - Common disk format used by virtual machine monitors from VMware, VirtualBox, and others. VMDK - Disk format supported by many common virtual machine monitors. While ISO is not normally considered a virtual machine image format, since ISOs contain bootable filesystems with an installed operating system, you can treat them the same as you treat other virtual machine image files. To download the official Red Hat Enterprise Linux cloud images, your account must have a valid Red Hat Enterprise Linux subscription: Red Hat Enterprise Linux 8 KVM Guest Image Red Hat Enterprise Linux 7 KVM Guest Image Red Hat Enterprise Linux 6 KVM Guest Image You will be prompted to enter your Red Hat account credentials if you are not logged in to the Customer Portal. 1.1. Understanding the Image Service The following notable OpenStack Image service (glance) features are available. 1.1.1. Image Signing and Verification Image signing and verification protects image integrity and authenticity by enabling deployers to sign images and save the signatures and public key certificates as image properties. By taking advantage of this feature, you can: Sign an image using your private key and upload the image, the signature, and a reference to your public key certificate (the verification metadata). The Image service then verifies that the signature is valid. Create an image in the Compute service, have the Compute service sign the image, and upload the image and its verification metadata. The Image service again verifies that the signature is valid. Request a signed image in the Compute service. The Image service provides the image and its verification metadata, allowing the Compute service to validate the image before booting it. For information on image signing and verification, refer to the Validate Glance Images chapter of the Manage Secrets with OpenStack Key Manager Guide . 1.1.2. Image conversion Image conversion converts images by calling the task API while importing an image. As part of the import workflow, a plugin provides the image conversion. This plugin can be activated or deactivated based on the deployer configuration. Therefore, the deployer needs to specify the preferred format of images for the deployment. Internally, the Image service receives the bits of the image in a particular format. These bits are stored in a temporary location. The plugin is then triggered to convert the image to the target format and moved to a final destination. When the task is finished, the temporary location is deleted. As a result, the format uploaded initially is not retained by the Image service. For more information about image conversion, see Enabling image conversion . 
Note The conversion can be triggered only when importing an image. It does not run when uploading an image. For example: 1.1.3. Image Introspection Every image format comes with a set of metadata embedded inside the image itself. For example, a stream optimized vmdk would contain the following parameters: By introspecting this vmdk , you can easily know that the disk_type is streamOptimized , and the adapter_type is buslogic . These metadata parameters are useful for the consumer of the image. In Compute, the workflow to instantiate a streamOptimized disk is different from the one to instantiate a flat disk. This new feature allows metadata extraction. You can achieve image introspection by calling the task API while importing the image. An administrator can override metadata settings. 1.1.4. Interoperable Image Import The OpenStack Image service provides two methods for importing images using the interoperable image import workflow: web-download (default) for importing images from a URI and glance-direct for importing from a local file system. 1.1.5. Improving scalability with Image service caching Use the glance-api caching mechanism to store copies of images on your local machine and retrieve them automatically to improve scalability. With Image service caching, the glance-api can run on multiple hosts. This means that it does not need to retrieve the same image from back-end storage multiple times. Image service caching does not affect any Image service operations. To configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates, complete the following steps: Procedure In an environment file, set the value of the GlanceCacheEnabled parameter to true , which automatically sets the flavor value to keystone+cachemanagement in the glance-api.conf heat template: Include the environment file in the openstack overcloud deploy command when you redeploy the overcloud. Optional: Tune the glance_cache_pruner to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes: Adjust the frequency according to your needs to avoid file system full scenarios. Include the following elements when you choose an alternative frequency: The size of the files that you want to cache in your environment. The amount of available file system space. The frequency at which the environment caches images. 1.1.6. Image pre-caching This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . 1.1.6.1. Configuring the default interval for periodic image pre-caching Because the Red Hat OpenStack Platform director can now pre-cache images as part of the glance-api service, you no longer require glance-registry to pre-cache images. The default periodic interval is 300 seconds. You can increase or decrease the default interval based on your requirements. Procedure Add a new interval with the ExtraConfig parameter in an environment file on the undercloud according to your requirements: Replace <300> with the number of seconds that you want as an interval to pre-cache images. 
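A minimal heat environment sketch combining the caching and pre-cache settings discussed in sections 1.1.5 and 1.1.6.1; GlanceCacheEnabled is the documented parameter, while the ControllerExtraConfig hieradata keys shown here are assumptions that should be verified against your RHOSP release:

# /home/stack/templates/glance-cache-env.yaml (illustrative file name)
parameter_defaults:
  # Documented toggle: sets the glance-api flavor to keystone+cachemanagement
  GlanceCacheEnabled: true
  ControllerExtraConfig:
    # Assumed hieradata keys -- verify before use
    glance::cache::pruner::minute: '*/5'          # run the cache pruner every 5 minutes
    glance::config::glance_api_config:
      DEFAULT/cache_prefetcher_interval:
        value: 300                                # pre-cache interval in seconds

Such a file is then passed to the openstack overcloud deploy command with the -e option, as the deployment step that follows describes.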
After you adjust the interval in the environment file in /home/stack/templates/, log in as the stack user and deploy the configuration: Replace <ENV_FILE> with the name of the environment file that contains the ExtraConfig settings that you added. Important If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. For more information about the openstack overcloud deploy command, see Deployment command in the Director Installation and Usage guide. 1.1.6.2. Using a periodic job to pre-cache an image Prerequisite To use a periodic job to pre-cache an image, you must use the glance-cache-manage command connected directly to the node where the glance_api service is running. Do not use a proxy, which hides the node that answers a service request. Because the undercloud might not have access to the network where the glance_api service is running, run commands on the first overcloud node, which is called controller-0 by default. Complete the following prerequisite procedure to ensure the following actions: You run commands from the correct host. You have the necessary credentials. You are running the glance-cache-manage commands from inside the glance-api container. Log in to the undercloud as the stack user and identify the provisioning IP address of controller-0: To authenticate to the overcloud, copy the credentials that are stored in /home/stack/overcloudrc, by default, to controller-0: Connect to controller-0 as the heat-admin user: On controller-0 as the heat-admin user, identify the IP address of the glance_api service. In the following example, the IP address is 172.25.1.105: Because the glance-cache-manage command is only available in the glance_api container, you must create a script to exec into that container where the overcloud authentication environment variables are already set. Create a script called glance_pod.sh in /home/heat-admin on controller-0 with the following contents: Source the overcloudrc file and run the glance_pod.sh script to exec into the glance_api container with the necessary environment variables to authenticate to the overcloud Controller node. Use a command such as glance image-list to verify that the container can run authenticated commands against the overcloud. Procedure As the admin user, queue an image to cache: Replace <HOST-IP> with the IP address of the Controller node where the glance-api container is running, and replace <IMAGE-ID> with the ID of the image that you want to queue. After you queue images that you want to pre-cache, the cache_images periodic job prefetches all queued images concurrently. Note Because the image cache is local to each node, if your Red Hat OpenStack Platform is deployed with HA (with 3, 5, or 7 Controllers) then you must specify the host address with the --host option when you run the glance-cache-manage command. Run the following command to view the images in the image cache: Replace <HOST-IP> with the IP address of the host in your environment. Warning When you complete this procedure, remove the overcloudrc file from the Controller node. Related information You can use additional glance-cache-manage commands for the following purposes: list-cached to list all images that are currently cached. list-queued to list all images that are currently queued for caching. queue-image to queue an image for caching. delete-cached-image to purge an image from the cache.
delete-all-cached-images to remove all images from the cache. delete-queued-image to delete an image from the cache queue. delete-all-queued-images to delete all images from the cache queue. 1.2. Manage Images The OpenStack Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or snapshot a server image, and immediately store it away. Stored images can be used as a template to get new servers up and running quickly and more consistently than installing a server operating system and individually configuring services. 1.2.1. Creating an Image This section provides you with the steps to manually create OpenStack-compatible images in the QCOW2 format using Red Hat Enterprise Linux 7 ISO files, Red Hat Enterprise Linux 6 ISO files, or Windows ISO files. 1.2.1.1. Use a KVM Guest Image With Red Hat OpenStack Platform You can use a ready RHEL KVM guest QCOW2 image: Red Hat Enterprise Linux 8 KVM Guest Image Red Hat Enterprise Linux 7 KVM Guest Image Red Hat Enterprise Linux 6 KVM Guest Image These images are configured with cloud-init and must take advantage of ec2-compatible metadata services for provisioning SSH keys in order to function properly. Ready Windows KVM guest QCOW2 images are not available. Note For the KVM guest images: The root account in the image is disabled, but sudo access is granted to a special user named cloud-user . There is no root password set for this image. The root password is locked in /etc/shadow by placing !! in the second field. For an OpenStack instance, it is recommended that you generate an ssh keypair from the OpenStack dashboard or command line and use that key combination to perform an SSH public authentication to the instance as root. When the instance is launched, this public key will be injected to it. You can then authenticate using the private key downloaded while creating the keypair. If you do not want to use keypairs, you can use the admin password that has been set using the Inject an admin Password Into an Instance procedure. If you want to create custom Red Hat Enterprise Linux or Windows images, see Create a Red Hat Enterprise Linux 7 Image , Create a Red Hat Enterprise Linux 6 Image , or Create a Windows Image . 1.2.1.2. Create Custom Red Hat Enterprise Linux or Windows Images Prerequisites: Linux host machine to create an image. This can be any machine on which you can install and run the Linux packages. libvirt, virt-manager (run command dnf groupinstall -y @virtualization ). This installs all packages necessary for creating a guest operating system. Libguestfs tools (run command dnf install -y libguestfs-tools-c ). This installs a set of tools for accessing and modifying virtual machine images. A Red Hat Enterprise Linux 7 or 6 ISO file (see RHEL 7.2 Binary DVD or RHEL 6.8 Binary DVD ) or a Windows ISO file. If you do not have a Windows ISO file, visit the Microsoft TechNet Evaluation Center and download an evaluation image. Text editor, if you want to change the kickstart files (RHEL only). Important If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud: Note In the following procedures, all commands with the [root@host]# prompt should be run on your host machine. 1.2.1.2.1. 
Create a Red Hat Enterprise Linux 7 Image This section provides you with the steps to manually create an OpenStack-compatible image in the QCOW2 format using a Red Hat Enterprise Linux 7 ISO file. Start the installation using virt-install as shown below: This launches an instance and starts the installation process. Note If the instance does not launch automatically, run the virt-viewer command to view the console: Set up the virtual machine as follows: At the initial Installer boot menu, choose the Install Red Hat Enterprise Linux 7 . X option. Choose the appropriate Language and Keyboard options. When prompted about which type of devices your installation uses, choose Auto-detected installation media . When prompted about which type of installation destination, choose Local Standard Disks . For other storage options, choose Automatically configure partitioning . For software selection, choose Minimal Install . For network and host name, choose eth0 for network and choose a hostname for your device. The default host name is localhost.localdomain . Choose the root password. The installation process completes and the Complete! screen appears. After the installation is complete, reboot the instance and log in as the root user. Update the /etc/sysconfig/network-scripts/ifcfg-eth0 file so it only contains the following values: Reboot the machine. Register the machine with the Content Delivery Network. Update the system: Install the cloud-init packages: Edit the /etc/cloud/cloud.cfg configuration file and under cloud_init_modules add: The resolv-conf option automatically configures the resolv.conf when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain and other options. Add the following line to /etc/sysconfig/network to avoid problems accessing the EC2 metadata service: To ensure the console messages appear in the Log tab on the dashboard and the nova console-log output, add the following boot option to the /etc/default/grub file: Run the grub2-mkconfig command: The output is as follows: Un-register the virtual machine so that the resulting image does not contain the same subscription details for every instance cloned based on it: Power off the instance: Reset and clean the image using the virt-sysprep command so it can be used to create instances without issues: Reduce image size using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host: This creates a new rhel7-cloud.qcow2 file in the location from where the command is run. The rhel7-cloud.qcow2 image file is ready to be uploaded to the Image service. For more information on uploading this image to your OpenStack deployment using the dashboard, see Upload an Image . 1.2.1.2.2. Create a Red Hat Enterprise Linux 6 Image This section provides you with the steps to manually create an OpenStack-compatible image in the QCOW2 format using a Red Hat Enterprise Linux 6 ISO file. Start the installation using virt-install : This launches an instance and starts the installation process. Note If the instance does not launch automatically, run the virt-viewer command to view the console: Set up the virtual machines as follows: At the initial Installer boot menu, choose the Install or upgrade an existing system option. Step through the installation prompts. Accept the defaults. The installer checks for the disc and lets you decide whether you want to test your installation media before installation. 
Select OK to run the test or Skip to proceed without testing. Choose the appropriate Language and Keyboard options. When prompted about which type of devices your installation uses, choose Basic Storage Devices . Choose a hostname for your device. The default host name is localhost.localdomain . Set timezone and root password. Based on the space on the disk, choose the type of installation. Choose the Basic Server install, which installs an SSH server. The installation process completes and Congratulations, your Red Hat Enterprise Linux installation is complete screen appears. Reboot the instance and log in as the root user. Update the /etc/sysconfig/network-scripts/ifcfg-eth0 file so it only contains the following values: Reboot the machine. Register the machine with the Content Delivery Network: Update the system: Install the cloud-init packages: Edit the /etc/cloud/cloud.cfg configuration file and under cloud_init_modules add: The resolv-conf option automatically configures the resolv.conf configuration file when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain , and other options. To prevent network issues, create the /etc/udev/rules.d/75-persistent-net-generator.rules file as follows: This prevents /etc/udev/rules.d/70-persistent-net.rules file from being created. If /etc/udev/rules.d/70-persistent-net.rules is created, networking may not function properly when booting from snapshots (the network interface is created as "eth1" rather than "eth0" and IP address is not assigned). Add the following line to /etc/sysconfig/network to avoid problems accessing the EC2 metadata service: To ensure the console messages appear in the Log tab on the dashboard and the nova console-log output, add the following boot option to the /etc/grub.conf : Un-register the virtual machine so that the resulting image does not contain the same subscription details for every instance cloned based on it: Power off the instance: Reset and clean the image using the virt-sysprep command so it can be used to create instances without issues: Reduce image size using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host: This creates a new rhel6-cloud.qcow2 file in the location from where the command is run. Note You will need to manually resize the partitions of instances based on the image in accordance with the disk space in the flavor that is applied to the instance. The rhel6-cloud.qcow2 image file is ready to be uploaded to the Image service. For more information on uploading this image to your OpenStack deployment using the dashboard, see Upload an Image 1.2.1.2.3. Create a Windows Image This section provides you with the steps to manually create an OpenStack-compatible image in the QCOW2 format using a Windows ISO file. Start the installation using virt-install as shown below: Replace the values of the virt-install parameters as follows: name - the name that the Windows guest should have. size - disk size in GB. path - the path to the Windows installation ISO file. RAM - the requested amount of RAM in MB. Note The --os-type=windows parameter ensures that the clock is set up correctly for the Windows guest, and enables its Hyper-V enlightenment features. Note that virt-install saves the guest image as /var/lib/libvirt/images/ name . qcow2 by default. 
If you want to keep the guest image elsewhere, change the parameter of the --disk option as follows: Replace filename with the name of the file which should store the guest image (and optionally its path); for example path=win8.qcow2,size=8 creates an 8 GB file named win8.qcow2 in the current working directory. Tip If the guest does not launch automatically, run the virt-viewer command to view the console: Installation of Windows systems is beyond the scope of this document. For instructions on how to install Windows, see the relevant Microsoft documentation. To allow the newly-installed Windows system to use the virtualized hardware, you might need to install virtio drivers . To do so, first install the virtio-win package on the host system. This package contains the virtio ISO image, which you must attach as a CD-ROM drive to the Windows guest. After attaching the ISO image, install the virtio drivers from it inside the guest. See Installing KVM paravirtualized drivers for Windows virtual machines in the Configuring and managing virtualization guide. To complete the setup, download and execute Cloudbase-Init on the Windows system. At the end of the installation of Cloudbase-Init, select the Run Sysprep and Shutdown check boxes. The Sysprep tool makes the guest unique by generating an OS ID, which is used by certain Microsoft services. Important Red Hat does not provide technical support for Cloudbase-Init. If you encounter an issue, contact Cloudbase Solutions . When the Windows system shuts down, the name.qcow2 image file is ready to be uploaded to the Image service. For more information on uploading this image to your OpenStack deployment using the dashboard or the command line, see Upload an Image . 1.2.1.3. Use libosinfo The Image service (glance) can process libosinfo data for images, making it easier to configure the optimal virtual hardware for an instance. This can be done by adding the libosinfo-formatted operating system name to the glance image. This example specifies that the image with ID 654dbfd5-5c01-411f-8599-a27bd344d79b uses the libosinfo value of rhel7.2 : As a result, Compute will supply virtual hardware optimized for rhel7.2 whenever an instance is built using the 654dbfd5-5c01-411f-8599-a27bd344d79b image. Note For a complete list of libosinfo values, refer to the libosinfo project: https://gitlab.com/libosinfo/osinfo-db/tree/master/data/os 1.2.2. Upload an Image In the dashboard, select Project > Compute > Images . Click Create Image . Fill out the values, and click Create Image when finished. Table 1.1. Image Options Field Notes Name Name for the image. The name must be unique within the project. Description Brief description to identify the image. Image Source Image source: Image Location or Image File . Based on your selection, the corresponding field is displayed. Image Location or Image File Select the Image Location option to specify the image location URL. Select the Image File option to upload an image from the local disk. Format Image format (for example, qcow2). Architecture Image architecture. For example, use i686 for a 32-bit architecture or x86_64 for a 64-bit architecture. Minimum Disk (GB) Minimum disk size required to boot the image. If this field is not specified, the default value is 0 (no minimum). Minimum RAM (MB) Minimum memory size required to boot the image. If this field is not specified, the default value is 0 (no minimum). Public If selected, makes the image public to all users with access to the project.
Protected If selected, ensures that only users with specific permissions can delete this image. When the image has been successfully uploaded, its status is changed to active , which indicates that the image is available for use. Note that the Image service can handle even large images that take a long time to upload - longer than the lifetime of the Identity service token which was used when the upload was initiated. This is because the Image service first creates a trust with the Identity service, so that a new token can be obtained and used when the upload is complete and the image status needs to be updated. Note You can also use the glance image-create command with the property option to upload an image. More values are available on the command line. For a complete listing, see Image Configuration Parameters . 1.2.3. Update an Image In the dashboard, select Project > Compute > Images . Click Edit Image from the dropdown list. Note The Edit Image option is available only when you log in as an admin user. When you log in as a demo user, you have the option to Launch an instance or Create Volume . Update the fields and click Update Image when finished. You can update the following values - name, description, kernel ID, ramdisk ID, architecture, format, minimum disk, minimum RAM, public, protected. Click the drop-down menu and select the Update Metadata option. Specify metadata by adding items from the left column to the right one. In the left column, there are metadata definitions from the Image Service Metadata Catalog. Select Other to add metadata with the key of your choice and click Save when finished. Note You can also use the glance image-update command with the property option to update an image. More values are available on the command line; for a complete listing, see Image Configuration Parameters . 1.2.4. Import an Image You can import images into the Image service (glance) using web-download to import an image from a URI and glance-direct to import an image from a local file system. Both options are enabled by default. Import methods are configured by the cloud administrator. Run the glance import-info command to list available import options. 1.2.4.1. Import from a Remote URI You can use the web-download method to copy an image from a remote URI. Create an image and specify the URI of the image to import. You can monitor the image's availability using the glance image-show <image-ID> command where the ID is the one provided during image creation. The Image service web-download method uses a two-stage process to perform the import. First, it creates an image record. Second, it retrieves the image from the specified URI. This method provides a more secure way to import images than the deprecated copy-from method used in Image API v1. The URI is subject to optional blacklist and whitelist filtering as described in the Advanced Overcloud Customization Guide. The Image Property Injection plugin may inject metadata properties into the image as described in the Advanced Overcloud Customization Guide. These injected properties determine which compute nodes the image instances are launched on. 1.2.4.2. Import from a Local Volume The glance-direct method creates an image record, which generates an image ID. Once the image is uploaded to the service from a local volume, it is stored in a staging area and is made active after it passes any configured checks. The glance-direct method requires a shared staging area when used in a highly available (HA) configuration.
Note Image uploads using the glance-direct method fail in an HA environment if a common staging area is not present. In an HA active-active environment, API calls are distributed to the glance controllers. The download API call could be sent to a different controller than the API call to upload the image. For more information about configuring the staging area, refer to the Storage Configuration section in the Advanced Overcloud Customization Guide . The glance-direct method uses three different calls to import an image: glance image-create glance image-stage glance image-import You can use the glance image-create-via-import command to perform all three of these calls in one command. In the example below, uppercase words should be replaced with the appropriate options. Once the image moves from the staging area to the back end location, the image is listed. However, it may take some time for the image to become active. You can monitor the image's availability using the glance image-show <image-ID> command where the ID is the one provided during image creation. 1.2.5. Delete an Image In the dashboard, select Project > Compute > Images . Select the image you want to delete and click Delete Images . 1.2.6. Hide or Unhide an Image You can hide public images from normal listings presented to users. For instance, you can hide obsolete CentOS 7 images and show only the latest version to simplify the user experience. Users can discover and use hidden images. To hide an image: To create a hidden image, add the --hidden argument to the glance image-create command. To unhide an image: 1.2.7. Show Hidden Images To list hidden images: 1.2.8. Enabling image conversion With the GlanceImageImportPlugins parameter enabled, you can upload a QCOW2 image, and the Image service will convert it to RAW. Note Image conversion is automatically enabled when you use Red Hat Ceph Storage RBD to store images and boot Nova instances. To enable image conversion, create an environment file that contains the following parameter value and include the new environment file with the -e option in the openstack overcloud deploy command: 1.2.9. Converting an image to RAW format Red Hat Ceph Storage can store, but does not support using, QCOW2 images to host virtual machine (VM) disks. When you upload a QCOW2 image and create a VM from it, the compute node downloads the image, converts the image to RAW, and uploads it back into Ceph, which can then use it. This process affects the time it takes to create VMs, especially during parallel VM creation. For example, when you create multiple VMs simultaneously, uploading the converted image to the Ceph cluster may impact already running workloads. The upload process can starve those workloads of IOPS and impede storage responsiveness. To boot VMs in Ceph more efficiently (ephemeral back end or boot from volume), the glance image format must be RAW. Converting an image to RAW may yield an image that is larger in size than the original QCOW2 image file. Run the following command before the conversion to determine the final RAW image size: To convert an image from QCOW2 to RAW format, do the following: 1.2.9.1. Configuring Image Service to accept RAW and ISO only Optionally, to configure the Image Service to accept only RAW and ISO image formats, deploy using an additional environment file that contains the following: 1.2.10. 
Storing an image in RAW format With the GlanceImageImportPlugins parameter enabled, run the following command to store a previously created image in RAW format: For --name , replace NAME with the name of the image; this is the name that will appear in glance image-list . For --uri , replace http://server/image.qcow2 with the location and file name of the QCOW2 image. Note This command example creates the image record and imports it by using the web-download method. The glance-api downloads the image from the --uri location during the import process. If web-download is not available, glanceclient cannot automatically download the image data. Run the glance import-info command to list the available image import methods.
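After you run the import command described in the storing-in-RAW section above, you can verify the result with the glance image-show command that the import sections refer to. The following is a minimal sketch; <image-ID> is a placeholder for the ID returned when the image record was created:

glance image-show <image-ID>
# The "status" field reads "active" once the import has finished.
# The "disk_format" field reads "raw" when the image was stored in RAW format.

If the import is still running, repeat the command until the status changes to active.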
[ "glance image-create-via-import --disk-format qcow2 --container-format bare --name NAME --visibility public --import-method web-download --uri http://server/image.qcow2", "head -20 so-disk.vmdk Disk DescriptorFile version=1 CID=d5a0bce5 parentCID=ffffffff createType=\"streamOptimized\" Extent description RDONLY 209714 SPARSE \"generated-stream.vmdk\" The Disk Data Base #DDB ddb.adapterType = \"buslogic\" ddb.geometry.cylinders = \"102\" ddb.geometry.heads = \"64\" ddb.geometry.sectors = \"32\" ddb.virtualHWVersion = \"4\"", "parameter_defaults: GlanceCacheEnabled: true", "parameter_defaults: ControllerExtraConfig: glance::cache::pruner::minute: '*/5'", "parameter_defaults: ControllerExtraConfig: glance::config::glance_api_config: DEFAULT/cache_prefetcher_interval: value: '<300>'", "openstack overcloud deploy --templates -e /home/stack/templates/<ENV_FILE>.yaml", "ssh stack@undercloud-0 [stack@undercloud-0 ~]USD source ~/overcloudrc (overcloud) [stack@undercloud-0 ~]USD openstack server list -f value -c Name -c Networks | grep controller overcloud-controller-1 ctlplane=192.168.24.40 overcloud-controller-2 ctlplane=192.168.24.13 overcloud-controller-0 ctlplane=192.168.24.71 (overcloud) [stack@undercloud-0 ~]USD", "(overcloud) [stack@undercloud-0 ~]USD scp ~/overcloudrc [email protected]:/home/heat-admin/", "(overcloud) [stack@undercloud-0 ~]USD ssh [email protected]", "[heat-admin@controller-0 ~]USD sudo grep -A 10 '^listen glance_api' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg listen glance_api server controller0-0.internalapi.redhat.local 172.25.1.105:9292 check fall 5 inter 2000 rise 2", "sudo podman exec -ti -e NOVA_VERSION=USDNOVA_VERSION -e COMPUTE_API_VERSION=USDCOMPUTE_API_VERSION -e OS_USERNAME=USDOS_USERNAME -e OS_PROJECT_NAME=USDOS_PROJECT_NAME -e OS_USER_DOMAIN_NAME=USDOS_USER_DOMAIN_NAME -e OS_PROJECT_DOMAIN_NAME=USDOS_PROJECT_DOMAIN_NAME -e OS_NO_CACHE=USDOS_NO_CACHE -e OS_CLOUDNAME=USDOS_CLOUDNAME -e no_proxy=USDno_proxy -e OS_AUTH_TYPE=USDOS_AUTH_TYPE -e OS_PASSWORD=USDOS_PASSWORD -e OS_AUTH_URL=USDOS_AUTH_URL -e OS_IDENTITY_API_VERSION=USDOS_IDENTITY_API_VERSION -e OS_COMPUTE_API_VERSION=USDOS_COMPUTE_API_VERSION -e OS_IMAGE_API_VERSION=USDOS_IMAGE_API_VERSION -e OS_VOLUME_API_VERSION=USDOS_VOLUME_API_VERSION -e OS_REGION_NAME=USDOS_REGION_NAME glance_api /bin/bash", "[heat-admin@controller-0 ~]USD source overcloudrc (overcloudrc) [heat-admin@controller-0 ~]USD bash glance_pod.sh ()[glance@controller-0 /]USD", "()[glance@controller-0 /]USD glance image-list +--------------------------------------+----------------------------------+ | ID | Name | +--------------------------------------+----------------------------------+ | ad2f8daf-56f3-4e10-b5dc-d28d3a81f659 | cirros-0.4.0-x86_64-disk.img | +--------------------------------------+----------------------------------+ ()[glance@controller-0 /]USD", "()[glance@controller-0 /]USD glance-cache-manage --host=<HOST-IP> queue-image <IMAGE-ID>", "()[glance@controller-0 /]USD glance-cache-manage --host=<HOST-IP> list-cached", "sudo systemctl disable --now iscsid.socket", "qemu-img create -f qcow2 rhel7.qcow2 8G virt-install --virt-type kvm --name rhel7 --ram 2048 --cdrom /tmp/rhel-server-7.2-x86_64-dvd.iso --disk rhel7.qcow2,format=qcow2 --network=bridge:virbr0 --graphics vnc,listen=0.0.0.0 --noautoconsole --os-type=linux --os-variant=rhel7", "virt-viewer rhel7", "TYPE=Ethernet DEVICE=eth0 ONBOOT=yes BOOTPROTO=dhcp NM_CONTROLLED=no", "sudo subscription-manager register sudo subscription-manager attach 
--pool=Valid-Pool-Number-123456 sudo subscription-manager repos --enable=rhel-7-server-rpms", "dnf -y update", "dnf install -y cloud-utils-growpart cloud-init", "- resolv-conf", "NOZEROCONF=yes", "GRUB_CMDLINE_LINUX_DEFAULT=\"console=tty0 console=ttyS0,115200n8\"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "Generating grub configuration file Found linux image: /boot/vmlinuz-3.10.0-229.7.2.el7.x86_64 Found initrd image: /boot/initramfs-3.10.0-229.7.2.el7.x86_64.img Found linux image: /boot/vmlinuz-3.10.0-121.el7.x86_64 Found initrd image: /boot/initramfs-3.10.0-121.el7.x86_64.img Found linux image: /boot/vmlinuz-0-rescue-b82a3044fb384a3f9aeacf883474428b Found initrd image: /boot/initramfs-0-rescue-b82a3044fb384a3f9aeacf883474428b.img done", "subscription-manager repos --disable=* subscription-manager unregister dnf clean all", "poweroff", "virt-sysprep -d rhel7", "virt-sparsify --compress /tmp/rhel7.qcow2 rhel7-cloud.qcow2", "qemu-img create -f qcow2 rhel6.qcow2 4G virt-install --connect=qemu:///system --network=bridge:virbr0 --name=rhel6 --os-type linux --os-variant rhel6 --disk path=rhel6.qcow2,format=qcow2,size=10,cache=none --ram 4096 --vcpus=2 --check-cpu --accelerate --hvm --cdrom=rhel-server-6.8-x86_64-dvd.iso", "virt-viewer rhel6", "TYPE=Ethernet DEVICE=eth0 ONBOOT=yes BOOTPROTO=dhcp NM_CONTROLLED=no", "sudo subscription-manager register sudo subscription-manager attach --pool=Valid-Pool-Number-123456 sudo subscription-manager repos --enable=rhel-6-server-rpms", "dnf -y update", "dnf install -y cloud-utils-growpart cloud-init", "- resolv-conf", "echo \"#\" > /etc/udev/rules.d/75-persistent-net-generator.rules", "NOZEROCONF=yes", "console=tty0 console=ttyS0,115200n8", "subscription-manager repos --disable=* subscription-manager unregister dnf clean all", "poweroff", "virt-sysprep -d rhel6", "virt-sparsify --compress rhel6.qcow2 rhel6-cloud.qcow2", "virt-install --name= name --disk size= size --cdrom= path --os-type=windows --network=bridge:virbr0 --graphics spice --ram= RAM", "--disk path= filename ,size= size", "virt-viewer name", "openstack image set 654dbfd5-5c01-411f-8599-a27bd344d79b --property os_name=rhel7.2", "glance image-create --uri <URI>", "glance image-create-via-import --container-format FORMAT --disk-format DISKFORMAT --name NAME --file /PATH/TO/IMAGE", "glance image-update <image-id> --hidden 'true'", "glance image-update <image-id> --hidden 'false'", "glance image-list --hidden 'true'", "parameter_defaults: GlanceImageImportPlugins:'image_conversion'", "qemu-img info <image>.qcow2", "qemu-img convert -p -f qcow2 -O raw <original qcow2 image>.qcow2 <new raw image>.raw", "parameter_defaults: ExtraConfig: glance::config::api_config: image_format/disk_formats: value: \"raw,iso\"", "glance image-create-via-import --disk-format qcow2 --container-format bare --name NAME --visibility public --import-method web-download --uri http://server/image.qcow2" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/instances_and_images_guide/ch-image-service
Chapter 6. Additional Resources
Chapter 6. Additional Resources This chapter provides references to other relevant sources of information about Red Hat Software Collections 3.6 and Red Hat Enterprise Linux. 6.1. Red Hat Product Documentation The following documents are directly or indirectly relevant to this book: Red Hat Software Collections 3.6 Packaging Guide - The Packaging Guide for Red Hat Software Collections explains the concept of Software Collections, documents the scl utility, and provides a detailed explanation of how to create a custom Software Collection or extend an existing one. Red Hat Developer Toolset 10.0 Release Notes - The Release Notes for Red Hat Developer Toolset document known problems, possible issues, changes, and other important information about this Software Collection. Red Hat Developer Toolset 10.0 User Guide - The User Guide for Red Hat Developer Toolset contains more information about installing and using this Software Collection. Using Red Hat Software Collections Container Images - This book provides information on how to use container images based on Red Hat Software Collections. The available container images include applications, daemons, databases, as well as the Red Hat Developer Toolset container images. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. Getting Started with Containers - This guide contains a comprehensive overview of information about building and using container images on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host. Using and Configuring Red Hat Subscription Manager - The Using and Configuring Red Hat Subscription Manager book provides detailed information on how to register Red Hat Enterprise Linux systems, manage subscriptions, and view notifications for the registered systems. Red Hat Enterprise Linux 6 Deployment Guide - The Deployment Guide for Red Hat Enterprise Linux 6 provides relevant information regarding the deployment, configuration, and administration of this system. Red Hat Enterprise Linux 7 System Administrator's Guide - The System Administrator's Guide for Red Hat Enterprise Linux 7 provides information on deployment, configuration, and administration of this system. 6.2. Red Hat Developers Red Hat Developer Program - The Red Hat Developers community portal. Overview of Red Hat Software Collections on Red Hat Developers - The Red Hat Developers portal provides a number of tutorials to get you started with developing code using different development technologies. This includes the Node.js, Perl, PHP, Python, and Ruby Software Collections. Red Hat Developer Blog - The Red Hat Developer Blog contains up-to-date information, best practices, opinion, product and program announcements as well as pointers to sample code and other resources for those who are designing and developing applications based on Red Hat technologies.
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.6_release_notes/chap-additional_resources
Chapter 2. Setting up Maven locally
Chapter 2. Setting up Maven locally Typical Fuse application development uses Maven to build and manage projects. The following topics describe how to set up Maven locally: Section 2.1, "Preparing to set up Maven" Section 2.2, "Adding Red Hat repositories to Maven" Section 2.3, "Using local Maven repositories" Section 2.4, "Setting Maven mirror using environmental variables or system properties" Section 2.5, "About Maven artifacts and coordinates" 2.1. Preparing to set up Maven Maven is a free, open source, build tool from Apache. Typically, you use Maven to build Fuse applications. Procedure Download the latest version of Maven from the Maven download page . Ensure that your system is connected to the Internet. While building a project, the default behavior is that Maven searches external repositories and downloads the required artifacts. Maven looks for repositories that are accessible over the Internet. You can change this behavior so that Maven searches only repositories that are on a local network. That is, Maven can run in an offline mode. In offline mode, Maven looks for artifacts in its local repository. See Section 2.3, "Using local Maven repositories" . 2.2. Adding Red Hat repositories to Maven To access artifacts that are in Red Hat Maven repositories, you need to add those repositories to Maven's settings.xml file. Maven looks for the settings.xml file in the .m2 directory of the user's home directory. If there is not a user specified settings.xml file, Maven uses the system-level settings.xml file at M2_HOME/conf/settings.xml . Prerequisite You know the location of the settings.xml file in which you want to add the Red Hat repositories. Procedure In the settings.xml file, add repository elements for the Red Hat repositories as shown in this example: <?xml version="1.0"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public</url> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings> 2.3. 
Using local Maven repositories If you are running a container without an Internet connection, and you need to deploy an application that has dependencies that are not available offline, you can use the Maven dependency plug-in to download the application's dependencies into a Maven offline repository. You can then distribute this customized Maven offline repository to machines that do not have an Internet connection. Procedure In the project directory that contains the pom.xml file, download a repository for a Maven project by running a command such as the following: In this example, Maven dependencies and plug-ins that are required to build the project are downloaded to the /tmp/my-project directory. Distribute this customized Maven offline repository internally to any machines that do not have an Internet connection. 2.4. Setting Maven mirror using environmental variables or system properties When running the applications, you need access to the artifacts that are in the Red Hat Maven repositories. These repositories are added to Maven's settings.xml file. Maven checks the following locations for the settings.xml file: looks for the specified URL if not found looks for ${user.home}/.m2/settings.xml if not found looks for ${maven.home}/conf/settings.xml if not found looks for ${M2_HOME}/conf/settings.xml if no location is found, an empty org.apache.maven.settings.Settings instance is created. 2.4.1. About Maven mirror Maven uses a set of remote repositories to access artifacts that are not currently available in the local repository. The list of repositories almost always contains the Maven Central repository, but for Red Hat Fuse, it also contains the Red Hat Maven repositories. In cases where it is not possible or allowed to access different remote repositories, you can use the Maven mirror mechanism. A mirror replaces a particular repository URL with a different one, so that all HTTP traffic for remote artifact lookups can be directed to a single URL. 2.4.2. Adding Maven mirror to settings.xml To set the Maven mirror, add the following section to Maven's settings.xml : No mirror is used if the above section is not found in the settings.xml file. To specify a global mirror without providing the XML configuration, you can use either a system property or an environmental variable. 2.4.3. Setting Maven mirror using environmental variable or system property To set the Maven mirror using either an environmental variable or a system property, you can add: An environmental variable called MAVEN_MIRROR_URL to the bin/setenv file A system property called mavenMirrorUrl to the etc/system.properties file 2.4.4. Using Maven options to specify Maven mirror url To use an alternate Maven mirror URL, other than the one specified by environmental variables or system property, use the following Maven options when running the application: -DmavenMirrorUrl=mirrorId::mirrorUrl for example, -DmavenMirrorUrl=my-mirror::http://mirror.net/repository -DmavenMirrorUrl=mirrorUrl for example, -DmavenMirrorUrl=http://mirror.net/repository . In this example, the <id> of the <mirror> is just mirror . 2.5. About Maven artifacts and coordinates In the Maven build system, the basic building block is an artifact . After a build, the output of an artifact is typically an archive, such as a JAR or WAR file. A key aspect of Maven is the ability to locate artifacts and manage the dependencies between them. A Maven coordinate is a set of values that identifies the location of a particular artifact.
A basic coordinate has three values in the following form: groupId:artifactId:version Sometimes Maven augments a basic coordinate with a packaging value or with both a packaging value and a classifier value. A Maven coordinate can have any one of the following forms: Here are descriptions of the values: groupdId Defines a scope for the name of the artifact. You would typically use all or part of a package name as a group ID. For example, org.fusesource.example . artifactId Defines the artifact name relative to the group ID. version Specifies the artifact's version. A version number can have up to four parts: n.n.n.n , where the last part of the version number can contain non-numeric characters. For example, the last part of 1.0-SNAPSHOT is the alphanumeric substring, 0-SNAPSHOT . packaging Defines the packaged entity that is produced when you build the project. For OSGi projects, the packaging is bundle . The default value is jar . classifier Enables you to distinguish between artifacts that were built from the same POM, but have different content. Elements in an artifact's POM file define the artifact's group ID, artifact ID, packaging, and version, as shown here: <project ... > ... <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> ... </project> To define a dependency on the preceding artifact, you would add the following dependency element to a POM file: <project ... > ... <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> ... </project> Note It is not necessary to specify the bundle package type in the preceding dependency, because a bundle is just a particular kind of JAR file and jar is the default Maven package type. If you do need to specify the packaging type explicitly in a dependency, however, you can use the type element.
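The closing note states that you can use the type element to declare the packaging type explicitly in a dependency, but does not show it. The following is a minimal sketch, reusing the bundle-demo artifact from the earlier examples:

<project ... >
  ...
  <dependencies>
    <dependency>
      <groupId>org.fusesource.example</groupId>
      <artifactId>bundle-demo</artifactId>
      <version>1.0-SNAPSHOT</version>
      <!-- Explicit packaging type; optional here, because a bundle is a JAR and jar is the default type -->
      <type>bundle</type>
    </dependency>
  </dependencies>
  ...
</project>

An explicit type element becomes necessary for dependencies whose packaging Maven cannot assume, for example pom or test-jar artifacts.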
[ "<?xml version=\"1.0\"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public</url> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>", "mvn org.apache.maven.plugins:maven-dependency-plugin:3.1.0:go-offline -Dmaven.repo.local=/tmp/my-project", "<mirror> <id>all</id> <mirrorOf>*</mirrorOf> <url>http://host:port/path</url> </mirror>", "groupId:artifactId:version groupId:artifactId:packaging:version groupId:artifactId:packaging:classifier:version", "<project ... > <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> </project>", "<project ... > <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/getting_started_with_fuse_on_jboss_eap/set-up-maven-locally
OAuth APIs
OAuth APIs OpenShift Container Platform 4.12 Reference guide for Oauth APIs Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/oauth_apis/index
Appendix B. Creating a complete permission table
Appendix B. Creating a complete permission table Use the Satellite CLI to create a permission table. Procedure Start the Satellite console with the following command: Insert the following code into the console: The above syntax creates a table of permissions and saves it to the /tmp/table.html file. Press Ctrl + D to exit the Satellite console. Insert the following text at the first line of /tmp/table.html : Append the following text at the end of /tmp/table.html : Open /tmp/table.html in a web browser to view the table.
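If you prefer to add the wrapping HTML from the shell instead of a text editor, the following is a minimal sketch that assumes GNU sed is available on the Satellite host; it inserts the same header row shown in the code listing and appends the closing tag:

# Insert the table header as the first line of the generated file
sed -i '1i <table border="1"><tr><td>Permission name</td><td>Actions</td><td>Resource type</td></tr>' /tmp/table.html
# Append the closing tag to the end of the file
echo '</table>' >> /tmp/table.html

The result is the same /tmp/table.html file described in the procedure, ready to open in a web browser.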
[ "foreman-rake console", "f = File.open('/tmp/table.html', 'w') result = Foreman::AccessControl.permissions {|a,b| a.security_block <=> b.security_block}.collect do |p| actions = p.actions.collect { |a| \"<li>#{a}</li>\" } \"<tr><td>#{p.name}</td><td><ul>#{actions.join('')}</ul></td><td>#{p.resource_type}</td></tr>\" end.join(\"\\n\") f.write(result)", "<table border=\"1\"><tr><td>Permission name</td><td>Actions</td><td>Resource type</td></tr>", "</table>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_satellite_rest_api/creating_a_complete_permission_table_rest-api
Chapter 2. Creating a mirror registry with mirror registry for Red Hat OpenShift
Chapter 2. Creating a mirror registry with mirror registry for Red Hat OpenShift The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. If you already have a container image registry, such as Red Hat Quay, you can skip this section and go straight to Mirroring the OpenShift Container Platform image repository . 2.1. Prerequisites An OpenShift Container Platform subscription. Red Hat Enterprise Linux (RHEL) 8 and 9 with Podman 3.4.2 or later and OpenSSL installed. Fully qualified domain name for the Red Hat Quay service, which must resolve through a DNS server. Key-based SSH connectivity on the target host. SSH keys are automatically generated for local installs. For remote hosts, you must generate your own SSH keys. 2 or more vCPUs. 8 GB of RAM. About 12 GB for OpenShift Container Platform 4.13 release images, or about 358 GB for OpenShift Container Platform 4.13 release images and OpenShift Container Platform 4.13 Red Hat Operator images. Up to 1 TB per stream or more is suggested. Important These requirements are based on local testing results with only release images and Operator images. Storage requirements can vary based on your organization's needs. You might require more space, for example, when you mirror multiple z-streams. You can use standard Red Hat Quay functionality or the proper API callout to remove unnecessary images and free up space. 2.2. Mirror registry for Red Hat OpenShift introduction For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift subscription. It is available for download on the OpenShift console Downloads page. The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with preconfigured local storage and a local database. It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started. The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments. Use of the mirror registry for Red Hat OpenShift is optional if another container registry is already available in the install environment. 2.2.1. Mirror registry for Red Hat OpenShift limitations The following limitations apply to the mirror registry for Red Hat OpenShift : The mirror registry for Red Hat OpenShift is not a highly-available registry and only local file system storage is supported. 
It is not intended to replace Red Hat Quay or the internal image registry for OpenShift Container Platform. The mirror registry for Red Hat OpenShift is not intended to be a substitute for a production deployment of Red Hat Quay. The mirror registry for Red Hat OpenShift is only supported for hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as Release images or Red Hat Operator images. It uses local storage on your Red Hat Enterprise Linux (RHEL) machine, and storage supported by RHEL is supported by the mirror registry for Red Hat OpenShift . Note Because the mirror registry for Red Hat OpenShift uses local storage, you should remain aware of the storage usage consumed when mirroring images and use Red Hat Quay's garbage collection feature to mitigate potential issues. For more information about this feature, see "Red Hat Quay garbage collection". Support for Red Hat product images that are pushed to the mirror registry for Red Hat OpenShift for bootstrapping purposes are covered by valid subscriptions for each respective product. A list of exceptions to further enable the bootstrap experience can be found on the Self-managed Red Hat OpenShift sizing and subscription guide . Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift . Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to leverage the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly-available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters. 2.3. Mirroring on a local host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a local host using the mirror-registry installer tool. By doing so, users can create a local host registry running on port 443 for the purpose of storing a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a USDHOME/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". 
$ ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the registry by running the following command: $ podman login -u init \ -p <password> \ <host_example_com>:8443 \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 2.4. Updating mirror registry for Red Hat OpenShift from a local host This procedure explains how to update the mirror registry for Red Hat OpenShift from a local host using the upgrade command. Updating to the latest version provides new features, bug fixes, and security vulnerability fixes. Important When upgrading from version 1 to version 2, be aware of the following constraints: The worker count is set to 1 because multiple writes are not allowed in SQLite. You must not use the mirror registry for Red Hat OpenShift user interface (UI). Do not access the sqlite-storage Podman volume during the upgrade. There is intermittent downtime of your mirror registry because it is restarted during the upgrade process. PostgreSQL data is backed up under the /$HOME/quay-install/quay-postgres-backup/ directory for recovery. Prerequisites You have installed the mirror registry for Red Hat OpenShift on a local host. Procedure If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 to 2.y, and your installation directory is the default at /etc/quay-install , you can enter the following command: $ sudo ./mirror-registry upgrade -v Note mirror registry for Red Hat OpenShift migrates Podman volumes for Quay storage, Postgres data, and /etc/quay-install data to the new $HOME/quay-install location. This allows you to use mirror registry for Red Hat OpenShift without the --quayRoot flag during future upgrades. Users who upgrade mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 to 2.y and you used a custom quay configuration and storage directory in your 1.y deployment, you must pass in the --quayRoot and --quayStorage flags.
For example: $ sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 to 2.y and want to specify a custom SQLite storage path, you must pass in the --sqliteStorage flag, for example: $ sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v 2.5. Mirroring on a remote host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a remote host using the mirror-registry tool. By doing so, users can create a registry to hold a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a $HOME/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". $ ./mirror-registry install -v \ --targetHostname <host_example_com> \ --targetUsername <example_user> \ -k ~/.ssh/my_ssh_key \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the mirror registry by running the following command: $ podman login -u init \ -p <password> \ <host_example_com>:8443 \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 2.6. Updating mirror registry for Red Hat OpenShift from a remote host This procedure explains how to update the mirror registry for Red Hat OpenShift from a remote host using the upgrade command. Updating to the latest version provides bug fixes and security vulnerability fixes. Important When upgrading from version 1 to version 2, be aware of the following constraints: The worker count is set to 1 because multiple writes are not allowed in SQLite.
You must not use the mirror registry for Red Hat OpenShift user interface (UI). Do not access the sqlite-storage Podman volume during the upgrade. There is intermittent downtime of your mirror registry because it is restarted during the upgrade process. PostgreSQL data is backed up under the /$HOME/quay-install/quay-postgres-backup/ directory for recovery. Prerequisites You have installed the mirror registry for Red Hat OpenShift on a remote host. Procedure To upgrade the mirror registry for Red Hat OpenShift from a remote host, enter the following command: $ ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key Note Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 to 2.y and want to specify a custom SQLite storage path, you must pass in the --sqliteStorage flag, for example: $ ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage 2.7. Replacing mirror registry for Red Hat OpenShift SSL/TLS certificates In some cases, you might want to update your SSL/TLS certificates for the mirror registry for Red Hat OpenShift . This is useful in the following scenarios: If you are replacing the current mirror registry for Red Hat OpenShift certificate. If you are using the same certificate as the mirror registry for Red Hat OpenShift installation. If you are periodically updating the mirror registry for Red Hat OpenShift certificate. Use the following procedure to replace mirror registry for Red Hat OpenShift SSL/TLS certificates. Prerequisites You have downloaded the ./mirror-registry binary from the OpenShift console Downloads page. Procedure Enter the following command to install the mirror registry for Red Hat OpenShift : $ ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> This installs the mirror registry for Red Hat OpenShift to the $HOME/quay-install directory. Prepare a new certificate authority (CA) bundle and generate new ssl.key and ssl.crt files. For more information, see Using SSL/TLS to protect connections to Red Hat Quay . Assign /$HOME/quay-install to an environment variable, for example, QUAY , by entering the following command: $ export QUAY=/$HOME/quay-install Copy the new ssl.crt file to the /$HOME/quay-install directory by entering the following command: $ cp ~/ssl.crt $QUAY/quay-config Copy the new ssl.key file to the /$HOME/quay-install directory by entering the following command: $ cp ~/ssl.key $QUAY/quay-config Restart the quay-app application pod by entering the following command: $ systemctl --user restart quay-app 2.8. Uninstalling the mirror registry for Red Hat OpenShift You can uninstall the mirror registry for Red Hat OpenShift from your local host by running the following command: $ ./mirror-registry uninstall -v \ --quayRoot <example_directory_name> Note Deleting the mirror registry for Red Hat OpenShift will prompt the user before deletion. You can use --autoApprove to skip this prompt.
Users who install the mirror registry for Red Hat OpenShift with the --quayRoot flag must include the --quayRoot flag when uninstalling. For example, if you installed the mirror registry for Red Hat OpenShift with --quayRoot example_directory_name , you must include that string to properly uninstall the mirror registry. 2.9. Mirror registry for Red Hat OpenShift flags The following flags are available for the mirror registry for Red Hat OpenShift : Flags Description --autoApprove A boolean value that disables interactive prompts. If set to true , the quayRoot directory is automatically deleted when uninstalling the mirror registry. Defaults to false if left unspecified. --initPassword The password of the init user created during Quay installation. Must be at least eight characters and contain no whitespace. --initUser string Shows the username of the initial user. Defaults to init if left unspecified. --no-color , -c Allows users to disable color sequences and propagate that to Ansible when running install, uninstall, and upgrade commands. --quayHostname The fully-qualified domain name of the mirror registry that clients will use to contact the registry. Equivalent to SERVER_HOSTNAME in the Quay config.yaml . Must resolve by DNS. Defaults to <targetHostname>:8443 if left unspecified. [1] --quayStorage The folder where Quay persistent storage data is saved. Defaults to the quay-storage Podman volume. Root privileges are required to uninstall. --quayRoot , -r The directory where container image layer and configuration data is saved, including rootCA.key , rootCA.pem , and rootCA.srl certificates. Defaults to USDHOME/quay-install if left unspecified. --sqliteStorage The folder where SQLite database data is saved. Defaults to sqlite-storage Podman volume if not specified. Root is required to uninstall. --ssh-key , -k The path of your SSH identity key. Defaults to ~/.ssh/quay_installer if left unspecified. --sslCert The path to the SSL/TLS public key / certificate. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --sslCheckSkip Skips the check for the certificate hostname against the SERVER_HOSTNAME in the config.yaml file. [2] --sslKey The path to the SSL/TLS private key used for HTTPS communication. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --targetHostname , -H The hostname of the target you want to install Quay to. Defaults to USDHOST , for example, a local host, if left unspecified. --targetUsername , -u The user on the target host which will be used for SSH. Defaults to USDUSER , for example, the current user if left unspecified. --verbose , -v Shows debug logs and Ansible playbook outputs. --version Shows the version for the mirror registry for Red Hat OpenShift . --quayHostname must be modified if the public DNS name of your system is different from the local hostname. Additionally, the --quayHostname flag does not support installation with an IP address. Installation with a hostname is required. --sslCheckSkip is used in cases when the mirror registry is set behind a proxy and the exposed hostname is different from the internal Quay hostname. It can also be used when users do not want the certificates to be validated against the provided Quay hostname during installation. 2.10. 
Mirror registry for Red Hat OpenShift release notes The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. These release notes track the development of the mirror registry for Red Hat OpenShift in OpenShift Container Platform. 2.10.1. Mirror registry for Red Hat OpenShift 2.0 release notes The following sections provide details for each 2.0 release of the mirror registry for Red Hat OpenShift. 2.10.1.1. Mirror registry for Red Hat OpenShift 2.0.5 Issued: 13 January 2025 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.5. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2025:0298 - mirror registry for Red Hat OpenShift 2.0.5 2.10.1.2. Mirror registry for Red Hat OpenShift 2.0.4 Issued: 06 January 2025 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.4. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2025:0033 - mirror registry for Red Hat OpenShift 2.0.4 2.10.1.3. Mirror registry for Red Hat OpenShift 2.0.3 Issued: 25 November 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.3. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:10181 - mirror registry for Red Hat OpenShift 2.0.3 2.10.1.4. Mirror registry for Red Hat OpenShift 2.0.2 Issued: 31 October 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.2. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:8370 - mirror registry for Red Hat OpenShift 2.0.2 2.10.1.5. Mirror registry for Red Hat OpenShift 2.0.1 Issued: 26 September 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.1. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:7070 - mirror registry for Red Hat OpenShift 2.0.1 2.10.1.6. Mirror registry for Red Hat OpenShift 2.0.0 Issued: 03 September 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.0. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:5277 - mirror registry for Red Hat OpenShift 2.0.0 2.10.1.6.1. New features With the release of mirror registry for Red Hat OpenShift , the internal database has been upgraded from PostgreSQL to SQLite. As a result, data is now stored on the sqlite-storage Podman volume by default, and the overall tarball size is reduced by 300 MB. New installations use SQLite by default. Before upgrading to version 2.0, see "Updating mirror registry for Red Hat OpenShift from a local host" or "Updating mirror registry for Red Hat OpenShift from a remote host" depending on your environment. A new feature flag, --sqliteStorage has been added. With this flag, you can manually set the location where SQLite database data is saved. Mirror registry for Red Hat OpenShift is now available on IBM Power and IBM Z architectures ( s390x and ppc64le ). 2.10.2. Mirror registry for Red Hat OpenShift 1.3 release notes The following sections provide details for each 1.3.z release of the mirror registry for Red Hat OpenShift 2.10.2.1. Mirror registry for Red Hat OpenShift 1.3.11 Issued: 2024-04-23 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.15. 
The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:1758 - mirror registry for Red Hat OpenShift 1.3.11 2.10.2.2. Mirror registry for Red Hat OpenShift 1.3.10 Issued: 2023-12-07 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.14. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:7628 - mirror registry for Red Hat OpenShift 1.3.10 2.10.2.3. Mirror registry for Red Hat OpenShift 1.3.9 Issued: 2023-09-19 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.12. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:5241 - mirror registry for Red Hat OpenShift 1.3.9 2.10.2.4. Mirror registry for Red Hat OpenShift 1.3.8 Issued: 2023-08-16 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.11. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:4622 - mirror registry for Red Hat OpenShift 1.3.8 2.10.2.5. Mirror registry for Red Hat OpenShift 1.3.7 Issued: 2023-07-19 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.10. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:4087 - mirror registry for Red Hat OpenShift 1.3.7 2.10.2.6. Mirror registry for Red Hat OpenShift 1.3.6 Issued: 2023-05-30 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.8. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:3302 - mirror registry for Red Hat OpenShift 1.3.6 2.10.2.7. Mirror registry for Red Hat OpenShift 1.3.5 Issued: 2023-05-18 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.7. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:3225 - mirror registry for Red Hat OpenShift 1.3.5 2.10.2.8. Mirror registry for Red Hat OpenShift 1.3.4 Issued: 2023-04-25 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.6. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1914 - mirror registry for Red Hat OpenShift 1.3.4 2.10.2.9. Mirror registry for Red Hat OpenShift 1.3.3 Issued: 2023-04-05 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.5. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1528 - mirror registry for Red Hat OpenShift 1.3.3 2.10.2.10. Mirror registry for Red Hat OpenShift 1.3.2 Issued: 2023-03-21 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.4. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1376 - mirror registry for Red Hat OpenShift 1.3.2 2.10.2.11. Mirror registry for Red Hat OpenShift 1.3.1 Issued: 2023-03-7 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.3. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1086 - mirror registry for Red Hat OpenShift 1.3.1 2.10.2.12. Mirror registry for Red Hat OpenShift 1.3.0 Issued: 2023-02-20 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.1. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:0558 - mirror registry for Red Hat OpenShift 1.3.0 2.10.2.12.1. 
New features Mirror registry for Red Hat OpenShift is now supported on Red Hat Enterprise Linux (RHEL) 9 installations. IPv6 support is now available on mirror registry for Red Hat OpenShift local host installations. IPv6 is currently unsupported on mirror registry for Red Hat OpenShift remote host installations. A new feature flag, --quayStorage , has been added. By specifying this flag, you can manually set the location for the Quay persistent storage. A new feature flag, --pgStorage , has been added. By specifying this flag, you can manually set the location for the Postgres persistent storage. Previously, users were required to have root privileges ( sudo ) to install mirror registry for Red Hat OpenShift . With this update, sudo is no longer required to install mirror registry for Red Hat OpenShift . When mirror registry for Red Hat OpenShift was installed with sudo , an /etc/quay-install directory that contained installation files, local storage, and the configuration bundle was created. With the removal of the sudo requirement, installation files and the configuration bundle are now installed to USDHOME/quay-install . Local storage, for example Postgres and Quay, are now stored in named volumes automatically created by Podman. To override the default directories that these files are stored in, you can use the command line arguments for mirror registry for Red Hat OpenShift . For more information about mirror registry for Red Hat OpenShift command line arguments, see " Mirror registry for Red Hat OpenShift flags". 2.10.2.12.2. Bug fixes Previously, the following error could be returned when attempting to uninstall mirror registry for Red Hat OpenShift : ["Error: no container with name or ID \"quay-postgres\" found: no such container"], "stdout": "", "stdout_lines": [] * . With this update, the order that mirror registry for Red Hat OpenShift services are stopped and uninstalled have been changed so that the error no longer occurs when uninstalling mirror registry for Red Hat OpenShift . For more information, see PROJQUAY-4629 . 2.10.3. Mirror registry for Red Hat OpenShift 1.2 release notes The following sections provide details for each 1.2.z release of the mirror registry for Red Hat OpenShift 2.10.3.1. Mirror registry for Red Hat OpenShift 1.2.9 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.10. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:7369 - mirror registry for Red Hat OpenShift 1.2.9 2.10.3.2. Mirror registry for Red Hat OpenShift 1.2.8 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.9. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:7065 - mirror registry for Red Hat OpenShift 1.2.8 2.10.3.3. Mirror registry for Red Hat OpenShift 1.2.7 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.8. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:6500 - mirror registry for Red Hat OpenShift 1.2.7 2.10.3.3.1. Bug fixes Previously, getFQDN() relied on the fully-qualified domain name (FQDN) library to determine its FQDN, and the FQDN library tried to read the /etc/hosts folder directly. Consequently, on some Red Hat Enterprise Linux CoreOS (RHCOS) installations with uncommon DNS configurations, the FQDN library would fail to install and abort the installation. With this update, mirror registry for Red Hat OpenShift uses hostname to determine the FQDN. 
As a result, the FQDN library does not fail to install. ( PROJQUAY-4139 ) 2.10.3.4. Mirror registry for Red Hat OpenShift 1.2.6 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.7. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:6278 - mirror registry for Red Hat OpenShift 1.2.6 2.10.3.4.1. New features A new feature flag, --no-color ( -c ) has been added. This feature flag allows users to disable color sequences and propagate that to Ansible when running install, uninstall, and upgrade commands. 2.10.3.5. Mirror registry for Red Hat OpenShift 1.2.5 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.6. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:6071 - mirror registry for Red Hat OpenShift 1.2.5 2.10.3.6. Mirror registry for Red Hat OpenShift 1.2.4 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.5. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:5884 - mirror registry for Red Hat OpenShift 1.2.4 2.10.3.7. Mirror registry for Red Hat OpenShift 1.2.3 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.4. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:5649 - mirror registry for Red Hat OpenShift 1.2.3 2.10.3.8. Mirror registry for Red Hat OpenShift 1.2.2 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.3. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:5501 - mirror registry for Red Hat OpenShift 1.2.2 2.10.3.9. Mirror registry for Red Hat OpenShift 1.2.1 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.2. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:4986 - mirror registry for Red Hat OpenShift 1.2.1 2.10.3.10. Mirror registry for Red Hat OpenShift 1.2.0 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.1. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:4986 - mirror registry for Red Hat OpenShift 1.2.0 2.10.3.10.1. Bug fixes Previously, all components and workers running inside of the Quay pod Operator had log levels set to DEBUG . As a result, large traffic logs were created that consumed unnecessary space. With this update, log levels are set to WARN by default, which reduces traffic information while emphasizing problem scenarios. ( PROJQUAY-3504 ) 2.10.4. Mirror registry for Red Hat OpenShift 1.1 release notes The following section provides details 1.1.0 release of the mirror registry for Red Hat OpenShift 2.10.4.1. Mirror registry for Red Hat OpenShift 1.1.0 The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:0956 - mirror registry for Red Hat OpenShift 1.1.0 2.10.4.1.1. New features A new command, mirror-registry upgrade has been added. This command upgrades all container images without interfering with configurations or data. Note If quayRoot was previously set to something other than default, it must be passed into the upgrade command. 2.10.4.1.2. Bug fixes Previously, the absence of quayHostname or targetHostname did not default to the local hostname. With this update, quayHostname and targetHostname now default to the local hostname if they are missing. 
( PROJQUAY-3079 ) Previously, the command ./mirror-registry --version returned an unknown flag error. Now, running ./mirror-registry --version returns the current version of the mirror registry for Red Hat OpenShift . ( PROJQUAY-3086 ) Previously, users could not set a password during installation, for example, when running ./mirror-registry install --initUser <user_name> --initPassword <password> --verbose . With this update, users can set a password during installation. ( PROJQUAY-3149 ) Previously, the mirror registry for Red Hat OpenShift did not recreate pods if they were destroyed. Now, pods are recreated if they are destroyed. ( PROJQUAY-3261 ) 2.11. Troubleshooting mirror registry for Red Hat OpenShift To assist in troubleshooting mirror registry for Red Hat OpenShift , you can gather logs of systemd services installed by the mirror registry. The following services are installed: quay-app.service quay-postgres.service quay-redis.service quay-pod.service Prerequisites You have installed mirror registry for Red Hat OpenShift . Procedure If you installed mirror registry for Red Hat OpenShift with root privileges, you can get the status information of its systemd services by entering the following command: USD sudo systemctl status <service> If you installed mirror registry for Red Hat OpenShift as a standard user, you can get the status information of its systemd services by entering the following command: USD systemctl --user status <service> 2.12. Additional resources Red Hat Quay garbage collection Using SSL to protect connections to Red Hat Quay Configuring the system to trust the certificate authority Mirroring the OpenShift Container Platform image repository Mirroring Operator catalogs for use with disconnected clusters
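As a quick check after the certificate replacement and troubleshooting procedures above, the following minimal sketch inspects the certificate that was copied into place and then queries the systemd user services. It assumes a rootless installation and that the QUAY environment variable from the certificate replacement procedure is still set; the openssl step is not part of the documented procedure and is shown only as a common way to confirm the subject and expiry date of the new certificate:
USD openssl x509 -in USDQUAY/quay-config/ssl.crt -noout -subject -enddate
USD systemctl --user status quay-app.service
USD systemctl --user status quay-pod.service
The same systemctl --user status command can be repeated for the quay-postgres.service and quay-redis.service units listed in the troubleshooting section.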
[ "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "sudo ./mirror-registry upgrade -v", "sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v", "sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v", "./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage", "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "export QUAY=/USDHOME/quay-install", "cp ~/ssl.crt USDQUAY/quay-config", "cp ~/ssl.key USDQUAY/quay-config", "systemctl --user restart quay-app", "./mirror-registry uninstall -v --quayRoot <example_directory_name>", "sudo systemctl status <service>", "systemctl --user status <service>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/disconnected_installation_mirroring/installing-mirroring-creating-registry
23.6. Partition and File System Tools
23.6. Partition and File System Tools This section describes how different partition and file system management tools interact with a device's I/O parameters. util-linux-ng's libblkid and fdisk The libblkid library provided with the util-linux-ng package includes a programmatic API to access a device's I/O parameters. libblkid allows applications, especially those that use Direct I/O, to properly size their I/O requests. The fdisk utility from util-linux-ng uses libblkid to determine the I/O parameters of a device for optimal placement of all partitions. The fdisk utility will align all partitions on a 1MB boundary. parted and libparted The libparted library from parted also uses the I/O parameters API of libblkid . Anaconda , the Red Hat Enterprise Linux 7 installer, uses libparted , which means that all partitions created by either the installer or parted will be properly aligned. For all partitions created on a device that does not appear to provide I/O parameters, the default alignment will be 1MB. The heuristics parted uses are as follows: Always use the reported alignment_offset as the offset for the start of the first primary partition. If optimal_io_size is defined (i.e. not 0 ), align all partitions on an optimal_io_size boundary. If optimal_io_size is undefined (i.e. 0 ), alignment_offset is 0 , and minimum_io_size is a power of 2, use a 1MB default alignment. This is the catch-all for "legacy" devices which do not appear to provide I/O hints. As such, by default all partitions will be aligned on a 1MB boundary. Note Red Hat Enterprise Linux 7 cannot distinguish between devices that do not provide I/O hints and those that do so with alignment_offset=0 and optimal_io_size=0 . Such a device might be a single SAS 4K device; as such, at worst 1MB of space is lost at the start of the disk. File System Tools The different mkfs. filesystem utilities have also been enhanced to consume a device's I/O parameters. These utilities will not allow a file system to be formatted to use a block size smaller than the logical_block_size of the underlying storage device. Except for mkfs.gfs2 , all other mkfs. filesystem utilities also use the I/O hints to lay out on-disk data structures and data areas relative to the minimum_io_size and optimal_io_size of the underlying storage device. This allows file systems to be optimally formatted for various RAID (striped) layouts.
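To see the I/O parameters that these tools consume, you can read them directly from sysfs and then ask parted whether an existing partition satisfies the optimal alignment. This is only an illustrative sketch; /dev/sda and partition number 1 are placeholders for your own device and partition:
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/minimum_io_size
cat /sys/block/sda/queue/optimal_io_size
cat /sys/block/sda/alignment_offset
parted /dev/sda align-check optimal 1
A device that reports 0 for both optimal_io_size and alignment_offset falls into the 1MB default alignment case described above.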
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/iolimpartitionfstools
Using systemd unit files to customize and optimize your system
Using systemd unit files to customize and optimize your system Red Hat Enterprise Linux 9 Optimize system performance and extend configuration with systemd Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_systemd_unit_files_to_customize_and_optimize_your_system/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/packaging_red_hat_build_of_openjdk_17_applications_in_containers/making-open-source-more-inclusive
Chapter 16. Configuring a multi-site, fault-tolerant messaging system using broker connections
Chapter 16. Configuring a multi-site, fault-tolerant messaging system using broker connections Large-scale enterprise messaging systems commonly have discrete broker clusters located in geographically distributed data centers. In the event of a data center outage, system administrators might need to preserve existing messaging data and ensure that client applications can continue to produce and consume messages. You can use broker connections to ensure continuity of your messaging system during a data center outage. This type of solution is called a multi-site, fault-tolerant architecture . Note Only the AMQP protocol is supported for communication between brokers for broker connections. A client can use any supported protocol. Currently, messages are converted to AMQP through the mirroring process. The following sections explain how to protect your messaging system from data center outages using broker connections: Section 16.1, "About broker connections" Section 16.2, "Configuring broker connections" Note Multi-site fault tolerance is not a replacement for high-availability (HA) broker redundancy within data centers. Broker redundancy based on live-backup groups provides automatic protection against single broker failures within single clusters. In contrast, multi-site fault tolerance protects against large-scale data center outages. 16.1. About broker connections With broker connections, a broker can establish a connection to another broker and mirror messages to and from that broker. AMQP server connections By using broker connections, a broker can initiate connections to other endpoints over the AMQP protocol. This means, for example, that the broker can connect to other AMQP servers and create elements on those connections. The following types of operations are supported on an AMQP server connection: Mirrors - The broker uses an AMQP connection to another broker and duplicates messages and sends acknowledgements over the wire. Senders - Messages received on specific queues are transferred to another broker. Receivers - The broker pulls messages from another broker. Peers - The broker creates both senders and receivers on AMQ Interconnect endpoints. This chapter describes how to use broker connections to create a fault-tolerant system. See Chapter 17, Bridging brokers for information about sender, receiver, and peer options. The following events are sent through mirroring: Message sending - Messages sent to one broker will be "replicated" to the target broker. Message acknowledgement - Acknowledgements removing messages at one broker will be sent to the target broker. Queue and address creation. Queue and address deletion. Note If the message is pending for a consumer on the target mirror, the acknowledgement will not succeed and the message might be delivered by both brokers. Mirroring does not block any operation and does not affect the performance of a broker. The broker only mirrors messages arriving from the point in time the mirror was configured. Previously existing messages will not be forwarded to other brokers. 16.2. Configuring broker connections The following procedure shows how to configure broker connections to mirror messages between brokers. Only one of the brokers is active at any time; all messages are mirrored to the other broker. Prerequisites You have two working brokers.
Procedure Create a broker-connections element in the broker.xml file for the first broker, for example: <broker-connections> <amqp-connection uri="tcp://<hostname>:<port>" name="DC1"> <mirror/> </amqp-connection> </broker-connections> <hostname> The hostname of the other broker instance. <port> The port used by the broker on the other host. All messages on the first broker will be mirrored to the second broker, but messages that existed before the mirror was created are not mirrored. You can also configure the following additional features: queue-removal : Specifies whether a queue- or address-removal event is sent. The default value is true . message-acknowledgements : Specifies whether message acknowledgements are sent. The default value is true . queue-creation : Specifies whether a queue- or address-creation event is sent. The default value is true . Note The broker connections name in the example, DC1 , is used to create a queue named USDACTIVEMQ_ARTEMIS_MIRROR_mirror . Make sure that the corresponding broker is configured to accept those messages, even though the queue is not visible on that broker. Create a broker-connections element in the broker.xml file for the second broker, for example: <broker-connections> <amqp-connection uri="tcp://<hostname>:<port>" name="DC2"> <mirror/> </amqp-connection> </broker-connections> Note Red Hat recommends that consumers are configured to accept messages from one of the brokers, not both. Configure clients using the instructions documented in Section 15.6, "Configuring clients in a multi-site, fault-tolerant messaging system" , noting that with broker connections, there is no shared storage.
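The additional features listed in the procedure above are typically expressed as attributes on the mirror element. The following sketch is illustrative only; it assumes that queue-removal , message-acknowledgements , and queue-creation can be set directly as attributes of mirror in broker.xml , and the hostname and port remain placeholders as in the earlier examples: <broker-connections> <amqp-connection uri="tcp://<hostname>:<port>" name="DC1"> <mirror queue-removal="false" message-acknowledgements="true" queue-creation="true"/> </amqp-connection> </broker-connections> In this sketch, setting queue-removal to false stops queue- and address-removal events from being forwarded, while message sending, acknowledgements, and creation events continue to be mirrored.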
[ "<broker-connections> <amqp-connection uri=\"tcp://<hostname>:<port>\" name=\"DC1\"> <mirror/> </amqp-connection> </broker-connections>", "<broker-connections> <amqp-connection uri=\"tcp://<hostname>:<port>\" name=\"DC2\"> <mirror/> </amqp-connection> </broker-connections>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/configuring-fault-tolerant-system-broker-connections-configuring
Chapter 17. System and Subscription Management
Chapter 17. System and Subscription Management cockpit rebased to version 154 The cockpit packages, which provide the Cockpit browser-based administration console, have been upgraded to version 154. This version provides a number of bug fixes and enhancements. Notable changes include: The Accounts page now enables the configuration of account locking and password expiry. Load graphs consistently ignore loopback traffic on all networks. Cockpit provides information about unmet conditions for systemd services. Newly created timers on the Services page are now started and enabled automatically. It is possible to dynamically resize the terminal window to use all available space. Various navigation and JavaScript errors with Internet Explorer have been fixed. Cockpit uses Self-Signed Certificate Generator (SSCG) to generate SSL certificates, if available. Loading SSH keys from arbitrary paths is now supported. Absent or invalid /etc/os-release files are now handled gracefully. Unprivileged users now cannot use the shutdown/reboot button on the System page. Note that certain cockpit packages are available in the Red Hat Enterprise Linux 7 Extras channel; see https://access.redhat.com/support/policy/updates/extras . (BZ# 1470780 , BZ#1425887, BZ# 1493756 ) Users of yum-utils can now perform actions prior to transactions A new yum-plugin-pre-transaction-actions plug-in has been added to the yum-utils collection. It allows users to perform actions before a transaction starts. The usage and configuration of the plug-in are almost identical to the existing yum-plugin-post-transaction-actions plug-in. (BZ#1470647) yum can disable creation of per-user cache as a non-root user A new usercache option has been added to the yum.conf(5) configuration file of the yum utility. It allows users to disable the creation of the per-user cache when yum runs as a non-root user. The reason for this change is that in some cases users do not want to create and populate the per-user cache, for example in cases where the space in the USDTMPDIR directory is consumed by the user cache data. (BZ# 1432319 ) yum-builddep now allows you to define RPM macros The yum-builddep utility has been enhanced to allow you to define RPM macros for .spec file parsing. This change has been made because, in some cases, RPM macros need to be defined in order for yum-builddep to successfully parse a .spec file. Similarly to the rpm utility, the yum-builddep tool now allows you to specify RPM macros with the --define option. (BZ#1437636) subscription-manager now displays the host name upon registration Until now, the user needed to search for the effective host name for a given system, which is determined by different Satellite settings. With this update, the subscription-manager utility displays the host name upon the registration of the system. (BZ# 1463325 ) A subscription-manager plugin now runs with yum-config-manager With this update, the subscription-manager plugin runs with the yum-config-manager utility. The yum-config-manager operations now trigger redhat.repo generation, allowing Red Hat Enterprise Linux containers to enable or disable repositories without first running yum commands. (BZ# 1329349 ) subscription-manager now protects all product certificates in /etc/pki/product-default/ Previously, the subscription-manager utility only protected those product certificates provided by the redhat-release package whose tag matched rhel-# .
Consequently, product certificates such as RHEL-ALT or High Touch Beta were sometimes removed from the /etc/pki/product-default/ directory by the product-id yum plugin. With this update, subscription-manager has been modified to protect all certificates in /etc/pki/product-default/ against automatic removal. (BZ# 1526622 ) rhn-migrate-classic-to-rhsm now automatically enables the subscription-manager and product-id yum plugins With this update, the rhn-migrate-classic-to-rhsm utility automatically enables the yum plugins: subscription-manager and product-id . With this update, the subscription-manager utility automatically enables the yum plugins: subscription-manager and product-id . This update benefits users of Red Hat Enterprise Linux who previously used the rhn-client-tools utility to register their systems to Red Hat Network Classic or who still use it with Satellite 5 entitlement servers, and who have temporarily disabled the yum plugins. As a result, rhn-migrate-classic-to-rhsm allows an easy transition to using the newer subscription-manager tools for entitlements. Note that running rhn-migrate-classic-to-rhsm displays a warning message indicating how to change this default behavior if it is not desirable. (BZ# 1466453 ) subscription-manager now automatically enables the subscription-manager and product-id yum plugins With this update, the subscription-manager utility automatically enables the yum plugins: subscription-manager and product-id . This update benefits users of Red Hat Enterprise Linux who previously used the rhn-client-tools utility to register their systems to Red Hat Network Classic or who still use it with Satellite 5 entitlement servers, and who have temporarily disabled the yum plugins. As a result, it is easier for users to start using the newer subscription-manager tools for entitlements. Note that running subscription-manager displays a warning message indicating how to change this default behavior if it is not desirable. (BZ# 1319927 ) subscription-manager-cockpit replaces subscription functionality in cockpit-system This update introduces a new subscription-manager-cockpit RPM. The new subscription-manager-cockpit RPM provides a new dbus-based implementation and a few fixes to the same subscriptions functionality provided by cockpit-system . If both RPMs are installed, the implementation from subscription-manager-cockpit is used. (BZ# 1499977 ) virt-who logs where the host-guest mapping is sent The virt-who utility now uses the rhsm.log file to log the owner or account to which the host-guest mapping is sent. This helps proper configuration of virt-who . (BZ# 1408556 ) virt-who now provides configuration error information The virt-who utility now checks for common virt-who configuration errors and outputs log messages that specify the configuration items that caused these errors. As a result, it is easier for a user to correct virt-who configuration errors. (BZ# 1436617 ) reposync now by default skips packages whose location falls outside the destination directory Previously, the reposync command did not sanitize paths to packages specified in a remote repository, which was insecure. A security fix for CVE-2018-10897 has changed the default behavior of reposync to not store any packages outside the specified destination directory. To restore the original insecure behavior, use the new --allow-path-traversal option. (BZ#1609302)
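As a brief illustration of two of the yum changes described above, the following sketch disables the new usercache option in yum.conf and passes an RPM macro to yum-builddep . It is only a sketch: the macro name and value and the mypackage.spec file are hypothetical placeholders, and the remaining yum.conf settings on your system are left unchanged:
# /etc/yum.conf
[main]
usercache=0
USD yum-builddep --define 'dist .el7' mypackage.spec
With usercache=0 , yum running as a non-root user no longer creates a per-user cache under USDTMPDIR, and the --define option is passed to the .spec parsing step in the same form that the rpm utility accepts.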
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_system_and_subscription_management
probe::sunrpc.sched.new_task
probe::sunrpc.sched.new_task Name probe::sunrpc.sched.new_task - Create new task for the specified client Synopsis sunrpc.sched.new_task Values xid the transmission id in the RPC call prog the program number in the RPC call prot the IP protocol in the RPC call vers the program version in the RPC call tk_flags the flags of the task
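A minimal SystemTap script that uses this probe might look like the following. It is a sketch only, not part of the reference entry; it prints the values listed above each time a new RPC task is created for a client, and the new_task.stp file name is arbitrary:
probe sunrpc.sched.new_task {
  # print the probe values documented above for each new RPC task
  printf("new RPC task: prog=%d vers=%d prot=%d xid=%d flags=%d\n", prog, vers, prot, xid, tk_flags)
}
Run it with stap new_task.stp and stop it with Ctrl + C .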
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-sched-new-task
Chapter 15. Infrastructure [config.openshift.io/v1]
Chapter 15. Infrastructure [config.openshift.io/v1] Description Infrastructure holds cluster-wide information about Infrastructure. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 15.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description cloudConfig object cloudConfig is a reference to a ConfigMap containing the cloud provider configuration file. This configuration file is used to configure the Kubernetes cloud provider integration when using the built-in cloud provider integration or the external cloud controller manager. The namespace for this config map is openshift-config. cloudConfig should only be consumed by the kube_cloud_config controller. The controller is responsible for using the user configuration in the spec for various platforms and combining that with the user provided ConfigMap in this field to create a stitched kube cloud config. The controller generates a ConfigMap kube-cloud-config in openshift-config-managed namespace with the kube cloud config is stored in cloud.conf key. All the clients are expected to use the generated ConfigMap only. platformSpec object platformSpec holds desired information specific to the underlying infrastructure provider. 15.1.2. .spec.cloudConfig Description cloudConfig is a reference to a ConfigMap containing the cloud provider configuration file. This configuration file is used to configure the Kubernetes cloud provider integration when using the built-in cloud provider integration or the external cloud controller manager. The namespace for this config map is openshift-config. cloudConfig should only be consumed by the kube_cloud_config controller. The controller is responsible for using the user configuration in the spec for various platforms and combining that with the user provided ConfigMap in this field to create a stitched kube cloud config. The controller generates a ConfigMap kube-cloud-config in openshift-config-managed namespace with the kube cloud config is stored in cloud.conf key. All the clients are expected to use the generated ConfigMap only. Type object Property Type Description key string Key allows pointing to a specific key/value inside of the configmap. This is useful for logical file references. name string 15.1.3. .spec.platformSpec Description platformSpec holds desired information specific to the underlying infrastructure provider. 
Type object Property Type Description alibabaCloud object AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. aws object AWS contains settings specific to the Amazon Web Services infrastructure provider. azure object Azure contains settings specific to the Azure infrastructure provider. baremetal object BareMetal contains settings specific to the BareMetal platform. equinixMetal object EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. external object ExternalPlatformType represents generic infrastructure provider. Platform-specific components should be supplemented separately. gcp object GCP contains settings specific to the Google Cloud Platform infrastructure provider. ibmcloud object IBMCloud contains settings specific to the IBMCloud infrastructure provider. kubevirt object Kubevirt contains settings specific to the kubevirt infrastructure provider. nutanix object Nutanix contains settings specific to the Nutanix infrastructure provider. openstack object OpenStack contains settings specific to the OpenStack infrastructure provider. ovirt object Ovirt contains settings specific to the oVirt infrastructure provider. powervs object PowerVS contains settings specific to the IBM Power Systems Virtual Servers infrastructure provider. type string type is the underlying infrastructure provider for the cluster. This value controls whether infrastructure automation such as service load balancers, dynamic volume provisioning, machine creation and deletion, and other integrations are enabled. If None, no infrastructure automation is enabled. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "Libvirt", "OpenStack", "VSphere", "oVirt", "KubeVirt", "EquinixMetal", "PowerVS", "AlibabaCloud", "Nutanix" and "None". Individual components may not support all platforms, and must handle unrecognized platforms as None if they do not support that platform. vsphere object VSphere contains settings specific to the VSphere infrastructure provider. 15.1.4. .spec.platformSpec.alibabaCloud Description AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. Type object 15.1.5. .spec.platformSpec.aws Description AWS contains settings specific to the Amazon Web Services infrastructure provider. Type object Property Type Description serviceEndpoints array serviceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. serviceEndpoints[] object AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. 15.1.6. .spec.platformSpec.aws.serviceEndpoints Description serviceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. Type array 15.1.7. .spec.platformSpec.aws.serviceEndpoints[] Description AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. Type object Property Type Description name string name is the name of the AWS service. The list of all the service names can be found at https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html This must be provided and cannot be empty. url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.8. 
.spec.platformSpec.azure Description Azure contains settings specific to the Azure infrastructure provider. Type object 15.1.9. .spec.platformSpec.baremetal Description BareMetal contains settings specific to the BareMetal platform. Type object 15.1.10. .spec.platformSpec.equinixMetal Description EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. Type object 15.1.11. .spec.platformSpec.external Description ExternalPlatformType represents generic infrastructure provider. Platform-specific components should be supplemented separately. Type object Property Type Description platformName string PlatformName holds the arbitrary string representing the infrastructure provider name, expected to be set at the installation time. This field is solely for informational and reporting purposes and is not expected to be used for decision-making. 15.1.12. .spec.platformSpec.gcp Description GCP contains settings specific to the Google Cloud Platform infrastructure provider. Type object 15.1.13. .spec.platformSpec.ibmcloud Description IBMCloud contains settings specific to the IBMCloud infrastructure provider. Type object 15.1.14. .spec.platformSpec.kubevirt Description Kubevirt contains settings specific to the kubevirt infrastructure provider. Type object 15.1.15. .spec.platformSpec.nutanix Description Nutanix contains settings specific to the Nutanix infrastructure provider. Type object Required prismCentral prismElements Property Type Description prismCentral object prismCentral holds the endpoint address and port to access the Nutanix Prism Central. When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. prismElements array prismElements holds one or more endpoint address and port data to access the Nutanix Prism Elements (clusters) of the Nutanix Prism Central. Currently we only support one Prism Element (cluster) for an OpenShift cluster, where all the Nutanix resources (VMs, subnets, volumes, etc.) used in the OpenShift cluster are located. In the future, we may support Nutanix resources (VMs, etc.) spread over multiple Prism Elements (clusters) of the Prism Central. prismElements[] object NutanixPrismElementEndpoint holds the name and endpoint data for a Prism Element (cluster) 15.1.16. .spec.platformSpec.nutanix.prismCentral Description prismCentral holds the endpoint address and port to access the Nutanix Prism Central. When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. Type object Required address port Property Type Description address string address is the endpoint address (DNS name or IP address) of the Nutanix Prism Central or Element (cluster) port integer port is the port number to access the Nutanix Prism Central or Element (cluster) 15.1.17. .spec.platformSpec.nutanix.prismElements Description prismElements holds one or more endpoint address and port data to access the Nutanix Prism Elements (clusters) of the Nutanix Prism Central. Currently we only support one Prism Element (cluster) for an OpenShift cluster, where all the Nutanix resources (VMs, subnets, volumes, etc.) used in the OpenShift cluster are located. In the future, we may support Nutanix resources (VMs, etc.) 
spread over multiple Prism Elements (clusters) of the Prism Central. Type array 15.1.18. .spec.platformSpec.nutanix.prismElements[] Description NutanixPrismElementEndpoint holds the name and endpoint data for a Prism Element (cluster) Type object Required endpoint name Property Type Description endpoint object endpoint holds the endpoint address and port data of the Prism Element (cluster). When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. name string name is the name of the Prism Element (cluster). This value will correspond with the cluster field configured on other resources (eg Machines, PVCs, etc). 15.1.19. .spec.platformSpec.nutanix.prismElements[].endpoint Description endpoint holds the endpoint address and port data of the Prism Element (cluster). When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. Type object Required address port Property Type Description address string address is the endpoint address (DNS name or IP address) of the Nutanix Prism Central or Element (cluster) port integer port is the port number to access the Nutanix Prism Central or Element (cluster) 15.1.20. .spec.platformSpec.openstack Description OpenStack contains settings specific to the OpenStack infrastructure provider. Type object 15.1.21. .spec.platformSpec.ovirt Description Ovirt contains settings specific to the oVirt infrastructure provider. Type object 15.1.22. .spec.platformSpec.powervs Description PowerVS contains settings specific to the IBM Power Systems Virtual Servers infrastructure provider. Type object Property Type Description serviceEndpoints array serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. serviceEndpoints[] object PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. 15.1.23. .spec.platformSpec.powervs.serviceEndpoints Description serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. Type array 15.1.24. .spec.platformSpec.powervs.serviceEndpoints[] Description PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. Type object Required name url Property Type Description name string name is the name of the Power VS service. Few of the services are IAM - https://cloud.ibm.com/apidocs/iam-identity-token-api ResourceController - https://cloud.ibm.com/apidocs/resource-controller/resource-controller Power Cloud - https://cloud.ibm.com/apidocs/power-cloud url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.25. .spec.platformSpec.vsphere Description VSphere contains settings specific to the VSphere infrastructure provider. Type object Property Type Description failureDomains array failureDomains contains the definition of region, zone and the vCenter topology. If this is omitted failure domains (regions and zones) will not be used. failureDomains[] object VSpherePlatformFailureDomainSpec holds the region and zone failure domain and the vCenter topology of that failure domain. 
nodeNetworking object nodeNetworking contains the definition of internal and external network constraints for assigning the node's networking. If this field is omitted, networking defaults to the legacy address selection behavior which is to only support a single address and return the first one found. vcenters array vcenters holds the connection details for services to communicate with vCenter. Currently, only a single vCenter is supported. --- vcenters[] object VSpherePlatformVCenterSpec stores the vCenter connection fields. This is used by the vSphere CCM. 15.1.26. .spec.platformSpec.vsphere.failureDomains Description failureDomains contains the definition of region, zone and the vCenter topology. If this is omitted failure domains (regions and zones) will not be used. Type array 15.1.27. .spec.platformSpec.vsphere.failureDomains[] Description VSpherePlatformFailureDomainSpec holds the region and zone failure domain and the vCenter topology of that failure domain. Type object Required name region server topology zone Property Type Description name string name defines the arbitrary but unique name of a failure domain. region string region defines the name of a region tag that will be attached to a vCenter datacenter. The tag category in vCenter must be named openshift-region. server string server is the fully-qualified domain name or the IP address of the vCenter server. --- topology object Topology describes a given failure domain using vSphere constructs zone string zone defines the name of a zone tag that will be attached to a vCenter cluster. The tag category in vCenter must be named openshift-zone. 15.1.28. .spec.platformSpec.vsphere.failureDomains[].topology Description Topology describes a given failure domain using vSphere constructs Type object Required computeCluster datacenter datastore networks Property Type Description computeCluster string computeCluster the absolute path of the vCenter cluster in which virtual machine will be located. The absolute path is of the form /<datacenter>/host/<cluster>. The maximum length of the path is 2048 characters. datacenter string datacenter is the name of vCenter datacenter in which virtual machines will be located. The maximum length of the datacenter name is 80 characters. datastore string datastore is the absolute path of the datastore in which the virtual machine is located. The absolute path is of the form /<datacenter>/datastore/<datastore> The maximum length of the path is 2048 characters. folder string folder is the absolute path of the folder where virtual machines are located. The absolute path is of the form /<datacenter>/vm/<folder>. The maximum length of the path is 2048 characters. networks array (string) networks is the list of port group network names within this failure domain. Currently, we only support a single interface per RHCOS virtual machine. The available networks (port groups) can be listed using govc ls 'network/*' The single interface should be the absolute path of the form /<datacenter>/network/<portgroup>. resourcePool string resourcePool is the absolute path of the resource pool where virtual machines will be created. The absolute path is of the form /<datacenter>/host/<cluster>/Resources/<resourcepool>. The maximum length of the path is 2048 characters. 15.1.29. .spec.platformSpec.vsphere.nodeNetworking Description nodeNetworking contains the definition of internal and external network constraints for assigning the node's networking. 
If this field is omitted, networking defaults to the legacy address selection behavior which is to only support a single address and return the first one found. Type object Property Type Description external object external represents the network configuration of the node that is externally routable. internal object internal represents the network configuration of the node that is routable only within the cluster. 15.1.30. .spec.platformSpec.vsphere.nodeNetworking.external Description external represents the network configuration of the node that is externally routable. Type object Property Type Description excludeNetworkSubnetCidr array (string) excludeNetworkSubnetCidr IP addresses in subnet ranges will be excluded when selecting the IP address from the VirtualMachine's VM for use in the status.addresses fields. --- network string network VirtualMachine's VM Network names that will be used to when searching for status.addresses fields. Note that if internal.networkSubnetCIDR and external.networkSubnetCIDR are not set, then the vNIC associated to this network must only have a single IP address assigned to it. The available networks (port groups) can be listed using govc ls 'network/*' networkSubnetCidr array (string) networkSubnetCidr IP address on VirtualMachine's network interfaces included in the fields' CIDRs that will be used in respective status.addresses fields. --- 15.1.31. .spec.platformSpec.vsphere.nodeNetworking.internal Description internal represents the network configuration of the node that is routable only within the cluster. Type object Property Type Description excludeNetworkSubnetCidr array (string) excludeNetworkSubnetCidr IP addresses in subnet ranges will be excluded when selecting the IP address from the VirtualMachine's VM for use in the status.addresses fields. --- network string network VirtualMachine's VM Network names that will be used to when searching for status.addresses fields. Note that if internal.networkSubnetCIDR and external.networkSubnetCIDR are not set, then the vNIC associated to this network must only have a single IP address assigned to it. The available networks (port groups) can be listed using govc ls 'network/*' networkSubnetCidr array (string) networkSubnetCidr IP address on VirtualMachine's network interfaces included in the fields' CIDRs that will be used in respective status.addresses fields. --- 15.1.32. .spec.platformSpec.vsphere.vcenters Description vcenters holds the connection details for services to communicate with vCenter. Currently, only a single vCenter is supported. --- Type array 15.1.33. .spec.platformSpec.vsphere.vcenters[] Description VSpherePlatformVCenterSpec stores the vCenter connection fields. This is used by the vSphere CCM. Type object Required datacenters server Property Type Description datacenters array (string) The vCenter Datacenters in which the RHCOS vm guests are located. This field will be used by the Cloud Controller Manager. Each datacenter listed here should be used within a topology. port integer port is the TCP port that will be used to communicate to the vCenter endpoint. When omitted, this means the user has no opinion and it is up to the platform to choose a sensible default, which is subject to change over time. server string server is the fully-qualified domain name or the IP address of the vCenter server. --- 15.1.34. .status Description status holds observed values from the cluster. They may not be overridden. 
Type object Property Type Description apiServerInternalURI string apiServerInternalURL is a valid URI with scheme 'https', address and optionally a port (defaulting to 443). apiServerInternalURL can be used by components like kubelets, to contact the Kubernetes API server using the infrastructure provider rather than Kubernetes networking. apiServerURL string apiServerURL is a valid URI with scheme 'https', address and optionally a port (defaulting to 443). apiServerURL can be used by components like the web console to tell users where to find the Kubernetes API. controlPlaneTopology string controlPlaneTopology expresses the expectations for operands that normally run on control nodes. The default is 'HighlyAvailable', which represents the behavior operators have in a "normal" cluster. The 'SingleReplica' mode will be used in single-node deployments and the operators should not configure the operand for highly-available operation The 'External' mode indicates that the control plane is hosted externally to the cluster and that its components are not visible within the cluster. cpuPartitioning string cpuPartitioning expresses if CPU partitioning is a currently enabled feature in the cluster. CPU Partitioning means that this cluster can support partitioning workloads to specific CPU Sets. Valid values are "None" and "AllNodes". When omitted, the default value is "None". The default value of "None" indicates that no nodes will be setup with CPU partitioning. The "AllNodes" value indicates that all nodes have been setup with CPU partitioning, and can then be further configured via the PerformanceProfile API. etcdDiscoveryDomain string etcdDiscoveryDomain is the domain used to fetch the SRV records for discovering etcd servers and clients. For more info: https://github.com/etcd-io/etcd/blob/329be66e8b3f9e2e6af83c123ff89297e49ebd15/Documentation/op-guide/clustering.md#dns-discovery deprecated: as of 4.7, this field is no longer set or honored. It will be removed in a future release. infrastructureName string infrastructureName uniquely identifies a cluster with a human friendly name. Once set it should not be changed. Must be of max length 27 and must have only alphanumeric or hyphen characters. infrastructureTopology string infrastructureTopology expresses the expectations for infrastructure services that do not run on control plane nodes, usually indicated by a node selector for a role value other than master . The default is 'HighlyAvailable', which represents the behavior operators have in a "normal" cluster. The 'SingleReplica' mode will be used in single-node deployments and the operators should not configure the operand for highly-available operation NOTE: External topology mode is not applicable for this field. platform string platform is the underlying infrastructure provider for the cluster. Deprecated: Use platformStatus.type instead. platformStatus object platformStatus holds status information specific to the underlying infrastructure provider. 15.1.35. .status.platformStatus Description platformStatus holds status information specific to the underlying infrastructure provider. Type object Property Type Description alibabaCloud object AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. aws object AWS contains settings specific to the Amazon Web Services infrastructure provider. azure object Azure contains settings specific to the Azure infrastructure provider. baremetal object BareMetal contains settings specific to the BareMetal platform. 
equinixMetal object EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. external object External contains settings specific to the generic External infrastructure provider. gcp object GCP contains settings specific to the Google Cloud Platform infrastructure provider. ibmcloud object IBMCloud contains settings specific to the IBMCloud infrastructure provider. kubevirt object Kubevirt contains settings specific to the kubevirt infrastructure provider. nutanix object Nutanix contains settings specific to the Nutanix infrastructure provider. openstack object OpenStack contains settings specific to the OpenStack infrastructure provider. ovirt object Ovirt contains settings specific to the oVirt infrastructure provider. powervs object PowerVS contains settings specific to the Power Systems Virtual Servers infrastructure provider. type string type is the underlying infrastructure provider for the cluster. This value controls whether infrastructure automation such as service load balancers, dynamic volume provisioning, machine creation and deletion, and other integrations are enabled. If None, no infrastructure automation is enabled. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "Libvirt", "OpenStack", "VSphere", "oVirt", "EquinixMetal", "PowerVS", "AlibabaCloud", "Nutanix" and "None". Individual components may not support all platforms, and must handle unrecognized platforms as None if they do not support that platform. This value will be synced with to the status.platform and status.platformStatus.type . Currently this value cannot be changed once set. vsphere object VSphere contains settings specific to the VSphere infrastructure provider. 15.1.36. .status.platformStatus.alibabaCloud Description AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. Type object Required region Property Type Description region string region specifies the region for Alibaba Cloud resources created for the cluster. resourceGroupID string resourceGroupID is the ID of the resource group for the cluster. resourceTags array resourceTags is a list of additional tags to apply to Alibaba Cloud resources created for the cluster. resourceTags[] object AlibabaCloudResourceTag is the set of tags to add to apply to resources. 15.1.37. .status.platformStatus.alibabaCloud.resourceTags Description resourceTags is a list of additional tags to apply to Alibaba Cloud resources created for the cluster. Type array 15.1.38. .status.platformStatus.alibabaCloud.resourceTags[] Description AlibabaCloudResourceTag is the set of tags to add to apply to resources. Type object Required key value Property Type Description key string key is the key of the tag. value string value is the value of the tag. 15.1.39. .status.platformStatus.aws Description AWS contains settings specific to the Amazon Web Services infrastructure provider. Type object Property Type Description region string region holds the default AWS region for new AWS resources created by the cluster. resourceTags array resourceTags is a list of additional tags to apply to AWS resources created for the cluster. See https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html for information on tagging AWS resources. AWS supports a maximum of 50 tags per resource. OpenShift reserves 25 tags for its use, leaving 25 tags available for the user. resourceTags[] object AWSResourceTag is a tag to apply to AWS resources created for the cluster. 
serviceEndpoints array ServiceEndpoints list contains custom endpoints which will override the default service endpoints of AWS Services. There must be only one ServiceEndpoint for a service. serviceEndpoints[] object AWSServiceEndpoint stores the configuration of a custom url to override existing defaults of AWS Services. 15.1.40. .status.platformStatus.aws.resourceTags Description resourceTags is a list of additional tags to apply to AWS resources created for the cluster. See https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html for information on tagging AWS resources. AWS supports a maximum of 50 tags per resource. OpenShift reserves 25 tags for its use, leaving 25 tags available for the user. Type array 15.1.41. .status.platformStatus.aws.resourceTags[] Description AWSResourceTag is a tag to apply to AWS resources created for the cluster. Type object Required key value Property Type Description key string key is the key of the tag. value string value is the value of the tag. Some AWS services do not support empty values. Since tags are added to resources in many services, the length of the tag value must meet the requirements of all services. 15.1.42. .status.platformStatus.aws.serviceEndpoints Description ServiceEndpoints list contains custom endpoints which will override the default service endpoints of AWS Services. There must be only one ServiceEndpoint for a service. Type array 15.1.43. .status.platformStatus.aws.serviceEndpoints[] Description AWSServiceEndpoint stores the configuration of a custom url to override existing defaults of AWS Services. Type object Property Type Description name string name is the name of the AWS service. The list of all the service names can be found at https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html . This must be provided and cannot be empty. url string url is a fully qualified URI with scheme https that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.44. .status.platformStatus.azure Description Azure contains settings specific to the Azure infrastructure provider. Type object Property Type Description armEndpoint string armEndpoint specifies a URL to use for resource management in non-sovereign clouds such as Azure Stack. cloudName string cloudName is the name of the Azure cloud environment which can be used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the value is equal to AzurePublicCloud . networkResourceGroupName string networkResourceGroupName is the Resource Group for network resources like the Virtual Network and Subnets used by the cluster. If empty, the value is the same as ResourceGroupName. resourceTags array resourceTags is a list of additional tags to apply to Azure resources created for the cluster. See https://docs.microsoft.com/en-us/rest/api/resources/tags for information on tagging Azure resources. Due to limitations on Automation, Content Delivery Network, DNS Azure resources, a maximum of 15 tags may be applied. OpenShift reserves 5 tags for internal use, allowing 10 tags for user configuration. resourceTags[] object AzureResourceTag is a tag to apply to Azure resources created for the cluster. 15.1.45. .status.platformStatus.azure.resourceTags Description resourceTags is a list of additional tags to apply to Azure resources created for the cluster.
See https://docs.microsoft.com/en-us/rest/api/resources/tags for information on tagging Azure resources. Due to limitations on Automation, Content Delivery Network, DNS Azure resources, a maximum of 15 tags may be applied. OpenShift reserves 5 tags for internal use, allowing 10 tags for user configuration. Type array 15.1.46. .status.platformStatus.azure.resourceTags[] Description AzureResourceTag is a tag to apply to Azure resources created for the cluster. Type object Required key value Property Type Description key string key is the key part of the tag. A tag key can have a maximum of 128 characters and cannot be empty. Key must begin with a letter, end with a letter, number or underscore, and must contain only alphanumeric characters and the following special characters _ . - . value string value is the value part of the tag. A tag value can have a maximum of 256 characters and cannot be empty. Value must contain only alphanumeric characters and the following special characters _ + , - . / : ; < = > ? @ . 15.1.47. .status.platformStatus.baremetal Description BareMetal contains settings specific to the BareMetal platform. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for BareMetal deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.1.48. .status.platformStatus.equinixMetal Description EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. ingressIP string ingressIP is an external IP which routes to the default ingress controller. 
The IP is a suitable target of a wildcard DNS record used to resolve default route host names. 15.1.49. .status.platformStatus.external Description External contains settings specific to the generic External infrastructure provider. Type object Property Type Description cloudControllerManager object cloudControllerManager contains settings specific to the external Cloud Controller Manager (a.k.a. CCM or CPI). When omitted, new nodes will not be tainted and no extra initialization from the cloud controller manager is expected. 15.1.50. .status.platformStatus.external.cloudControllerManager Description cloudControllerManager contains settings specific to the external Cloud Controller Manager (a.k.a. CCM or CPI). When omitted, new nodes will not be tainted and no extra initialization from the cloud controller manager is expected. Type object Property Type Description state string state determines whether or not an external Cloud Controller Manager is expected to be installed within the cluster. https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#running-cloud-controller-manager Valid values are "External", "None" and omitted. When set to "External", new nodes will be tainted as uninitialized when created, preventing them from running workloads until they are initialized by the cloud controller manager. When omitted or set to "None", new nodes will not be tainted and no extra initialization from the cloud controller manager is expected. 15.1.51. .status.platformStatus.gcp Description GCP contains settings specific to the Google Cloud Platform infrastructure provider. Type object Property Type Description projectID string projectID is the Project ID for new GCP resources created for the cluster. region string region holds the region for new GCP resources created for the cluster. 15.1.52. .status.platformStatus.ibmcloud Description IBMCloud contains settings specific to the IBMCloud infrastructure provider. Type object Property Type Description cisInstanceCRN string CISInstanceCRN is the CRN of the Cloud Internet Services instance managing the DNS zone for the cluster's base domain. dnsInstanceCRN string DNSInstanceCRN is the CRN of the DNS Services instance managing the DNS zone for the cluster's base domain. location string Location is where the cluster has been deployed. providerType string ProviderType indicates the type of cluster that was created. resourceGroupName string ResourceGroupName is the Resource Group for new IBMCloud resources created for the cluster. 15.1.53. .status.platformStatus.kubevirt Description Kubevirt contains settings specific to the kubevirt infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. 15.1.54. .status.platformStatus.nutanix Description Nutanix contains settings specific to the Nutanix infrastructure provider.
Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. 15.1.55. .status.platformStatus.openstack Description OpenStack contains settings specific to the OpenStack infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. cloudName string cloudName is the name of the desired OpenStack cloud in the client configuration file ( clouds.yaml ). ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for OpenStack deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.1.56. 
.status.platformStatus.openstack.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. Type object Property Type Description type string type defines the type of load balancer used by the cluster on OpenStack platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.1.57. .status.platformStatus.ovirt Description Ovirt contains settings specific to the oVirt infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. nodeDNSIP string deprecated: as of 4.6, this field is no longer set or honored. It will be removed in a future release. 15.1.58. .status.platformStatus.powervs Description PowerVS contains settings specific to the Power Systems Virtual Servers infrastructure provider. Type object Property Type Description cisInstanceCRN string CISInstanceCRN is the CRN of the Cloud Internet Services instance managing the DNS zone for the cluster's base domain dnsInstanceCRN string DNSInstanceCRN is the CRN of the DNS Services instance managing the DNS zone for the cluster's base domain region string region holds the default Power VS region for new Power VS resources created by the cluster. resourceGroup string resourceGroup is the resource group name for new IBMCloud resources created for a cluster. The resource group specified here will be used by cluster-image-registry-operator to set up a COS Instance in IBMCloud for the cluster registry. More about resource groups can be found here: https://cloud.ibm.com/docs/account?topic=account-rgs . When omitted, the image registry operator won't be able to configure storage, which results in the image registry cluster operator not being in an available state. 
serviceEndpoints array serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. serviceEndpoints[] object PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. zone string zone holds the default zone for the new Power VS resources created by the cluster. Note: Currently only single-zone OCP clusters are supported 15.1.59. .status.platformStatus.powervs.serviceEndpoints Description serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. Type array 15.1.60. .status.platformStatus.powervs.serviceEndpoints[] Description PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. Type object Required name url Property Type Description name string name is the name of the Power VS service. Few of the services are IAM - https://cloud.ibm.com/apidocs/iam-identity-token-api ResourceController - https://cloud.ibm.com/apidocs/resource-controller/resource-controller Power Cloud - https://cloud.ibm.com/apidocs/power-cloud url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.61. .status.platformStatus.vsphere Description VSphere contains settings specific to the VSphere infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for vSphere deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.2. 
API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/infrastructures DELETE : delete collection of Infrastructure GET : list objects of kind Infrastructure POST : create an Infrastructure /apis/config.openshift.io/v1/infrastructures/{name} DELETE : delete an Infrastructure GET : read the specified Infrastructure PATCH : partially update the specified Infrastructure PUT : replace the specified Infrastructure /apis/config.openshift.io/v1/infrastructures/{name}/status GET : read status of the specified Infrastructure PATCH : partially update status of the specified Infrastructure PUT : replace status of the specified Infrastructure 15.2.1. /apis/config.openshift.io/v1/infrastructures Table 15.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Infrastructure Table 15.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Infrastructure Table 15.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. 
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.5. HTTP responses HTTP code Reponse body 200 - OK InfrastructureList schema 401 - Unauthorized Empty HTTP method POST Description create an Infrastructure Table 15.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.7. Body parameters Parameter Type Description body Infrastructure schema Table 15.8. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 202 - Accepted Infrastructure schema 401 - Unauthorized Empty 15.2.2. 
/apis/config.openshift.io/v1/infrastructures/{name} Table 15.9. Global path parameters Parameter Type Description name string name of the Infrastructure Table 15.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Infrastructure Table 15.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 15.12. Body parameters Parameter Type Description body DeleteOptions schema Table 15.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Infrastructure Table 15.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.15. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Infrastructure Table 15.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 15.17. Body parameters Parameter Type Description body Patch schema Table 15.18. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Infrastructure Table 15.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.20. Body parameters Parameter Type Description body Infrastructure schema Table 15.21. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 401 - Unauthorized Empty 15.2.3. /apis/config.openshift.io/v1/infrastructures/{name}/status Table 15.22. Global path parameters Parameter Type Description name string name of the Infrastructure Table 15.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Infrastructure Table 15.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.25. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Infrastructure Table 15.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 15.27. Body parameters Parameter Type Description body Patch schema Table 15.28. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Infrastructure Table 15.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.30. Body parameters Parameter Type Description body Infrastructure schema Table 15.31. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 401 - Unauthorized Empty
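All of the endpoints above operate on the cluster-scoped Infrastructure resource. As a rough illustration of how the status fields documented in this chapter fit together, the following is a minimal, hypothetical Infrastructure object for an AWS cluster expressed as YAML. The metadata name (typically the singleton object named cluster), the spec.platformSpec.type field, and every concrete value (URLs, region, tags, infrastructure name) are assumptions and placeholders, not output from a real cluster.

apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster                              # assumed singleton name
spec:
  platformSpec:
    type: AWS                                # assumed field from the wider Infrastructure spec
status:
  infrastructureName: example-x7k2p          # max 27 characters, alphanumeric or hyphen only
  apiServerURL: https://api.cluster.example.com:6443
  apiServerInternalURI: https://api-int.cluster.example.com:6443
  controlPlaneTopology: HighlyAvailable      # default topology for a "normal" cluster
  infrastructureTopology: HighlyAvailable
  cpuPartitioning: None                      # default; "AllNodes" when CPU partitioning is configured
  platform: AWS                              # deprecated; platformStatus.type is preferred
  platformStatus:
    type: AWS
    aws:
      region: us-east-1                      # placeholder region
      resourceTags:
      - key: environment                     # placeholder tag key/value
        value: example

A GET against /apis/config.openshift.io/v1/infrastructures/{name} returns an object of this shape, while the /status subresource endpoints read and update only the status block.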
Chapter 6. ClusterVersion [config.openshift.io/v1]
Chapter 6. ClusterVersion [config.openshift.io/v1] Description ClusterVersion is the configuration for the ClusterVersionOperator. This is where parameters related to automatic updates can be set. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the desired state of the cluster version - the operator will work to ensure that the desired version is applied to the cluster. status object status contains information about the available updates and any in-progress updates. 6.1.1. .spec Description spec is the desired state of the cluster version - the operator will work to ensure that the desired version is applied to the cluster. Type object Required clusterID Property Type Description capabilities object capabilities configures the installation of optional, core cluster components. A null value here is identical to an empty object; see the child properties for default semantics. channel string channel is an identifier for explicitly requesting that a non-default set of updates be applied to this cluster. The default channel will be contain stable updates that are appropriate for production clusters. clusterID string clusterID uniquely identifies this cluster. This is expected to be an RFC4122 UUID value (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx in hexadecimal values). This is a required field. desiredUpdate object desiredUpdate is an optional field that indicates the desired value of the cluster version. Setting this value will trigger an upgrade (if the current version does not match the desired version). The set of recommended update values is listed as part of available updates in status, and setting values outside that range may cause the upgrade to fail. Some of the fields are inter-related with restrictions and meanings described here. 1. image is specified, version is specified, architecture is specified. API validation error. 2. image is specified, version is specified, architecture is not specified. You should not do this. version is silently ignored and image is used. 3. image is specified, version is not specified, architecture is specified. API validation error. 4. image is specified, version is not specified, architecture is not specified. image is used. 5. image is not specified, version is specified, architecture is specified. version and desired architecture are used to select an image. 6. image is not specified, version is specified, architecture is not specified. version and current architecture are used to select an image. 7. image is not specified, version is not specified, architecture is specified. API validation error. 8. 
image is not specified, version is not specified, architecture is not specified. API validation error. If an upgrade fails the operator will halt and report status about the failing component. Setting the desired update value back to the version will cause a rollback to be attempted. Not all rollbacks will succeed. overrides array overrides is list of overides for components that are managed by cluster version operator. Marking a component unmanaged will prevent the operator from creating or updating the object. overrides[] object ComponentOverride allows overriding cluster version operator's behavior for a component. upstream string upstream may be used to specify the preferred update server. By default it will use the appropriate update server for the cluster and region. 6.1.2. .spec.capabilities Description capabilities configures the installation of optional, core cluster components. A null value here is identical to an empty object; see the child properties for default semantics. Type object Property Type Description additionalEnabledCapabilities array (string) additionalEnabledCapabilities extends the set of managed capabilities beyond the baseline defined in baselineCapabilitySet. The default is an empty set. baselineCapabilitySet string baselineCapabilitySet selects an initial set of optional capabilities to enable, which can be extended via additionalEnabledCapabilities. If unset, the cluster will choose a default, and the default may change over time. The current default is vCurrent. 6.1.3. .spec.desiredUpdate Description desiredUpdate is an optional field that indicates the desired value of the cluster version. Setting this value will trigger an upgrade (if the current version does not match the desired version). The set of recommended update values is listed as part of available updates in status, and setting values outside that range may cause the upgrade to fail. Some of the fields are inter-related with restrictions and meanings described here. 1. image is specified, version is specified, architecture is specified. API validation error. 2. image is specified, version is specified, architecture is not specified. You should not do this. version is silently ignored and image is used. 3. image is specified, version is not specified, architecture is specified. API validation error. 4. image is specified, version is not specified, architecture is not specified. image is used. 5. image is not specified, version is specified, architecture is specified. version and desired architecture are used to select an image. 6. image is not specified, version is specified, architecture is not specified. version and current architecture are used to select an image. 7. image is not specified, version is not specified, architecture is specified. API validation error. 8. image is not specified, version is not specified, architecture is not specified. API validation error. If an upgrade fails the operator will halt and report status about the failing component. Setting the desired update value back to the version will cause a rollback to be attempted. Not all rollbacks will succeed. Type object Property Type Description architecture string architecture is an optional field that indicates the desired value of the cluster architecture. In this context cluster architecture means either a single architecture or a multi architecture. architecture can only be set to Multi thereby only allowing updates from single to multi architecture. If architecture is set, image cannot be set and version must be set. 
Valid values are 'Multi' and empty. force boolean force allows an administrator to update to an image that has failed verification or upgradeable checks. This option should only be used when the authenticity of the provided image has been verified out of band because the provided image will run with full administrative access to the cluster. Do not use this flag with images that comes from unknown or potentially malicious sources. image string image is a container image location that contains the update. image should be used when the desired version does not exist in availableUpdates or history. When image is set, version is ignored. When image is set, version should be empty. When image is set, architecture cannot be specified. version string version is a semantic version identifying the update version. version is ignored if image is specified and required if architecture is specified. 6.1.4. .spec.overrides Description overrides is list of overides for components that are managed by cluster version operator. Marking a component unmanaged will prevent the operator from creating or updating the object. Type array 6.1.5. .spec.overrides[] Description ComponentOverride allows overriding cluster version operator's behavior for a component. Type object Required group kind name namespace unmanaged Property Type Description group string group identifies the API group that the kind is in. kind string kind indentifies which object to override. name string name is the component's name. namespace string namespace is the component's namespace. If the resource is cluster scoped, the namespace should be empty. unmanaged boolean unmanaged controls if cluster version operator should stop managing the resources in this cluster. Default: false 6.1.6. .status Description status contains information about the available updates and any in-progress updates. Type object Required desired observedGeneration versionHash Property Type Description availableUpdates `` availableUpdates contains updates recommended for this cluster. Updates which appear in conditionalUpdates but not in availableUpdates may expose this cluster to known issues. This list may be empty if no updates are recommended, if the update service is unavailable, or if an invalid channel has been specified. capabilities object capabilities describes the state of optional, core cluster components. conditionalUpdates array conditionalUpdates contains the list of updates that may be recommended for this cluster if it meets specific required conditions. Consumers interested in the set of updates that are actually recommended for this cluster should use availableUpdates. This list may be empty if no updates are recommended, if the update service is unavailable, or if an empty or invalid channel has been specified. conditionalUpdates[] object ConditionalUpdate represents an update which is recommended to some clusters on the version the current cluster is reconciling, but which may not be recommended for the current cluster. conditions array conditions provides information about the cluster version. The condition "Available" is set to true if the desiredUpdate has been reached. The condition "Progressing" is set to true if an update is being applied. The condition "Degraded" is set to true if an update is currently blocked by a temporary or permanent error. Conditions are only valid for the current desiredUpdate when metadata.generation is equal to status.generation. 
conditions[] object ClusterOperatorStatusCondition represents the state of the operator's managed and monitored components. desired object desired is the version that the cluster is reconciling towards. If the cluster is not yet fully initialized, desired will be set with the information available, which may be an image or a tag. history array history contains a list of the most recent versions applied to the cluster. This value may be empty during cluster startup, and then will be updated when a new update is being applied. The newest update is first in the list and it is ordered by recency. Updates in the history have state Completed if the rollout completed - if an update was failing or halfway applied, the state will be Partial. Only a limited amount of update history is preserved. history[] object UpdateHistory is a single attempted update to the cluster. observedGeneration integer observedGeneration reports which version of the spec is being synced. If this value is not equal to metadata.generation, then the desired and conditions fields may represent a previous version. versionHash string versionHash is a fingerprint of the content that the cluster will be updated with. It is used by the operator to avoid unnecessary work and is for internal use only. 6.1.7. .status.capabilities Description capabilities describes the state of optional, core cluster components. Type object Property Type Description enabledCapabilities array (string) enabledCapabilities lists all the capabilities that are currently managed. knownCapabilities array (string) knownCapabilities lists all the capabilities known to the current cluster. 6.1.8. .status.conditionalUpdates Description conditionalUpdates contains the list of updates that may be recommended for this cluster if it meets specific required conditions. Consumers interested in the set of updates that are actually recommended for this cluster should use availableUpdates. This list may be empty if no updates are recommended, if the update service is unavailable, or if an empty or invalid channel has been specified. Type array 6.1.9. .status.conditionalUpdates[] Description ConditionalUpdate represents an update which is recommended to some clusters on the version the current cluster is reconciling, but which may not be recommended for the current cluster. Type object Required release risks Property Type Description conditions array conditions represents the observations of the conditional update's current status. Known types are: * Evaluating, for whether the cluster-version operator will attempt to evaluate any risks[].matchingRules. * Recommended, for whether the update is recommended for the current cluster. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } release object release is the target of the update. risks array risks represents the range of issues associated with updating to the target release.
The cluster-version operator will evaluate all entries, and only recommend the update if there is at least one entry and all entries recommend the update. risks[] object ConditionalUpdateRisk represents a reason and cluster-state for not recommending a conditional update. 6.1.10. .status.conditionalUpdates[].conditions Description conditions represents the observations of the conditional update's current status. Known types are: * Evaluating, for whether the cluster-version operator will attempt to evaluate any risks[].matchingRules. * Recommended, for whether the update is recommended for the current cluster. Type array 6.1.11. .status.conditionalUpdates[].conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 6.1.12. .status.conditionalUpdates[].release Description release is the target of the update. Type object Property Type Description channels array (string) channels is the set of Cincinnati channels to which the release currently belongs. image string image is a container image location that contains the update. When this field is part of spec, image is optional if version is specified and the availableUpdates field contains a matching version. url string url contains information about this release. This URL is set by the 'url' metadata property on a release or the metadata returned by the update API and should be displayed as a link in user interfaces. The URL field may not be set for test or nightly releases. 
version string version is a semantic version identifying the update version. When this field is part of spec, version is optional if image is specified. 6.1.13. .status.conditionalUpdates[].risks Description risks represents the range of issues associated with updating to the target release. The cluster-version operator will evaluate all entries, and only recommend the update if there is at least one entry and all entries recommend the update. Type array 6.1.14. .status.conditionalUpdates[].risks[] Description ConditionalUpdateRisk represents a reason and cluster-state for not recommending a conditional update. Type object Required matchingRules message name url Property Type Description matchingRules array matchingRules is a slice of conditions for deciding which clusters match the risk and which do not. The slice is ordered by decreasing precedence. The cluster-version operator will walk the slice in order, and stop after the first it can successfully evaluate. If no condition can be successfully evaluated, the update will not be recommended. matchingRules[] object ClusterCondition is a union of typed cluster conditions. The 'type' property determines which of the type-specific properties are relevant. When evaluated on a cluster, the condition may match, not match, or fail to evaluate. message string message provides additional information about the risk of updating, in the event that matchingRules match the cluster state. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines. name string name is the CamelCase reason for not recommending a conditional update, in the event that matchingRules match the cluster state. url string url contains information about this risk. 6.1.15. .status.conditionalUpdates[].risks[].matchingRules Description matchingRules is a slice of conditions for deciding which clusters match the risk and which do not. The slice is ordered by decreasing precedence. The cluster-version operator will walk the slice in order, and stop after the first it can successfully evaluate. If no condition can be successfully evaluated, the update will not be recommended. Type array 6.1.16. .status.conditionalUpdates[].risks[].matchingRules[] Description ClusterCondition is a union of typed cluster conditions. The 'type' property determines which of the type-specific properties are relevant. When evaluated on a cluster, the condition may match, not match, or fail to evaluate. Type object Required type Property Type Description promql object promQL represents a cluster condition based on PromQL. type string type represents the cluster-condition type. This defines the members and semantics of any additional properties. 6.1.17. .status.conditionalUpdates[].risks[].matchingRules[].promql Description promQL represents a cluster condition based on PromQL. Type object Required promql Property Type Description promql string PromQL is a PromQL query classifying clusters. This query should return a 1 in the match case and a 0 in the does-not-match case. Queries which return no time series, or which return values besides 0 or 1, are evaluation failures. 6.1.18. .status.conditions Description conditions provides information about the cluster version. The condition "Available" is set to true if the desiredUpdate has been reached. The condition "Progressing" is set to true if an update is being applied. The condition "Degraded" is set to true if an update is currently blocked by a temporary or permanent error.
Conditions are only valid for the current desiredUpdate when metadata.generation is equal to status.generation. Type array 6.1.19. .status.conditions[] Description ClusterOperatorStatusCondition represents the state of the operator's managed and monitored components. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string message provides additional information about the current condition. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines. reason string reason is the CamelCase reason for the condition's current status. status string status of the condition, one of True, False, Unknown. type string type specifies the aspect reported by this condition. 6.1.20. .status.desired Description desired is the version that the cluster is reconciling towards. If the cluster is not yet fully initialized, desired will be set with the information available, which may be an image or a tag. Type object Property Type Description channels array (string) channels is the set of Cincinnati channels to which the release currently belongs. image string image is a container image location that contains the update. When this field is part of spec, image is optional if version is specified and the availableUpdates field contains a matching version. url string url contains information about this release. This URL is set by the 'url' metadata property on a release or the metadata returned by the update API and should be displayed as a link in user interfaces. The URL field may not be set for test or nightly releases. version string version is a semantic version identifying the update version. When this field is part of spec, version is optional if image is specified. 6.1.21. .status.history Description history contains a list of the most recent versions applied to the cluster. This value may be empty during cluster startup, and then will be updated when a new update is being applied. The newest update is first in the list and it is ordered by recency. Updates in the history have state Completed if the rollout completed - if an update was failing or halfway applied, the state will be Partial. Only a limited amount of update history is preserved. Type array 6.1.22. .status.history[] Description UpdateHistory is a single attempted update to the cluster. Type object Required image startedTime state verified Property Type Description acceptedRisks string acceptedRisks records risks which were accepted to initiate the update. For example, it may mention an Upgradeable=False or missing signature that was overridden via desiredUpdate.force, or an update that was initiated despite not being in the availableUpdates set of recommended update targets. completionTime `` completionTime, if set, is when the update was fully applied. The update that is currently being applied will have a null completion time. Completion time will always be set for entries that are not the current update (usually to the started time of the update). image string image is a container image location that contains the update. This value is always populated. startedTime string startedTime is the time at which the update was started. state string state reflects whether the update was fully applied.
The Partial state indicates the update is not fully applied, while the Completed state indicates the update was successfully rolled out at least once (all parts of the update successfully applied). verified boolean verified indicates whether the provided update was properly verified before it was installed. If this is false the cluster may not be trusted. Verified does not cover upgradeable checks that depend on the cluster state at the time when the update target was accepted. version string version is a semantic version identifying the update version. If the requested image does not define a version, or if a failure occurs retrieving the image, this value may be empty. 6.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/clusterversions DELETE : delete collection of ClusterVersion GET : list objects of kind ClusterVersion POST : create a ClusterVersion /apis/config.openshift.io/v1/clusterversions/{name} DELETE : delete a ClusterVersion GET : read the specified ClusterVersion PATCH : partially update the specified ClusterVersion PUT : replace the specified ClusterVersion /apis/config.openshift.io/v1/clusterversions/{name}/status GET : read status of the specified ClusterVersion PATCH : partially update status of the specified ClusterVersion PUT : replace status of the specified ClusterVersion 6.2.1. /apis/config.openshift.io/v1/clusterversions Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterVersion Table 6.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterVersion Table 6.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.5. HTTP responses HTTP code Reponse body 200 - OK ClusterVersionList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterVersion Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body ClusterVersion schema Table 6.8. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 201 - Created ClusterVersion schema 202 - Accepted ClusterVersion schema 401 - Unauthorized Empty 6.2.2. /apis/config.openshift.io/v1/clusterversions/{name} Table 6.9. Global path parameters Parameter Type Description name string name of the ClusterVersion Table 6.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterVersion Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. 
Table 6.12. Body parameters Parameter Type Description body DeleteOptions schema Table 6.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterVersion Table 6.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.15. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterVersion Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body Patch schema Table 6.18. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterVersion Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body ClusterVersion schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 201 - Created ClusterVersion schema 401 - Unauthorized Empty 6.2.3. /apis/config.openshift.io/v1/clusterversions/{name}/status Table 6.22. Global path parameters Parameter Type Description name string name of the ClusterVersion Table 6.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ClusterVersion Table 6.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.25. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterVersion Table 6.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.27. Body parameters Parameter Type Description body Patch schema Table 6.28. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterVersion Table 6.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.30. Body parameters Parameter Type Description body ClusterVersion schema Table 6.31. HTTP responses HTTP code Reponse body 200 - OK ClusterVersion schema 201 - Created ClusterVersion schema 401 - Unauthorized Empty
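For orientation, the following ClusterVersion manifest is a minimal sketch that exercises the spec.desiredUpdate and spec.overrides fields described above. The version value and the override target are hypothetical placeholders, not recommendations; desiredUpdate.version should normally be chosen from the entries reported in status.availableUpdates.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  desiredUpdate:
    version: 4.13.1              # hypothetical target version; pick one listed in status.availableUpdates
  overrides:
  - group: apps                  # hypothetical override marking one Deployment as unmanaged
    kind: Deployment
    name: example-operator
    namespace: example-namespace
    unmanaged: true
In practice such a change is applied to the existing singleton object, for example with oc apply -f or oc patch clusterversion version --type merge, rather than by creating a new resource.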
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/config_apis/clusterversion-config-openshift-io-v1
3.5. Supported PKIX Formats and Protocols
3.5. Supported PKIX Formats and Protocols The Certificate System supports many of the protocols and formats defined in Public-Key Infrastructure (X.509) by the IETF. In addition to the PKIX standards listed here, other PKIX-listed standards are available at the IETF Datatracker website. Table 3.1. PKIX Standards Supported in Certificate System 10 Format or Protocol RFC or Draft Description X.509 version 1 and version 3 Digital certificate formats recommended by the International Telecommunication Union (ITU). Certificate Request Message Format (CRMF) RFC 4211 A message format to send a certificate request to a CA. Certificate Management Message Formats (CMMF) Message formats to send certificate requests and revocation requests from end entities to a CA and to return information to end entities. CMMF has been subsumed by another standard, CMC. Certificate Management Messages over CMS (CMC) RFC 5274 A general interface to public-key certification products based on CMS and PKCS #10, including a certificate enrollment protocol for RSA-signed certificates with Diffie-Hellman public-keys. CMC incorporates CRMF and CMMF. Cryptographic Message Syntax (CMS) RFC 2630 A superset of PKCS #7 syntax used for digital signatures and encryption. PKIX Certificate and CRL Profile RFC 5280 A standard developed by the IETF for a public-key infrastructure for the Internet. It specifies profiles for certificates and CRLs. Online Certificate Status Protocol (OCSP) RFC 6960 A protocol useful in determining the current status of a digital certificate without requiring CRLs.
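As an illustrative sketch of how OCSP is used in practice, a certificate's revocation status can be queried with the openssl ocsp client instead of downloading a CRL. The file names and responder URL below are hypothetical and depend on the deployment:
openssl ocsp -issuer ca_signing_cert.pem -cert user_cert.pem -url http://ocsp.example.com:8080/ca/ocsp -CAfile ca_chain.pem
A successful response reports the certificate status as good, revoked, or unknown, as defined by RFC 6960.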
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/sect-deployment_guide-support_for_open_standards-certificate_management_formats_and_protocols
function::ns_gid
function::ns_gid Name function::ns_gid - Returns the group ID of a target process as seen in a user namespace Synopsis Arguments None Description This function returns the group ID of a target process as seen in the target user namespace if provided, or the stap process namespace.
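For example, a minimal SystemTap one-liner can print the namespace-adjusted group ID alongside the plain gid for a target process. The probe point and the <PID> placeholder below are illustrative choices, not part of this reference:
stap -x <PID> -e 'probe syscall.openat { if (pid() == target()) printf("%s: gid=%d ns_gid=%d\n", execname(), gid(), ns_gid()) }'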
[ "ns_gid:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ns-gid
Chapter 2. Configuring Data Grid Servers
Chapter 2. Configuring Data Grid Servers Apply custom Data Grid Server configuration to your deployments. 2.1. Customizing Data Grid Server configuration Apply custom deploy.infinispan values to Data Grid clusters that configure the Cache Manager and underlying server mechanisms like security realms or Hot Rod and REST endpoints. Important You must always provide a complete Data Grid Server configuration when you modify deploy.infinispan values. Note Do not modify or remove the default "metrics" configuration if you want to use monitoring capabilities for your Data Grid cluster. Procedure Modify Data Grid Server configuration as required: Specify configuration values for the Cache Manager with deploy.infinispan.cacheContainer fields. For example, you can create caches at startup with any Data Grid configuration or add cache templates and use them to create caches on demand. Configure security authorization to control user roles and permissions with the deploy.infinispan.cacheContainer.security.authorization field. Select one of the default JGroups stacks or configure cluster transport with the deploy.infinispan.cacheContainer.transport fields. Configure Data Grid Server endpoints with the deploy.infinispan.server.endpoints fields. Configure Data Grid Server network interfaces and ports with the deploy.infinispan.server.interfaces and deploy.infinispan.server.socketBindings fields. Configure Data Grid Server security mechanisms with the deploy.infinispan.server.security fields. 2.2. Data Grid Server configuration values Data Grid Server configuration values let you customize the Cache Manager and modify server instances that run in OpenShift pods. Data Grid Server configuration deploy: infinispan: cacheContainer: # [USER] Add cache, template, and counter configuration. name: default # [USER] Specify `security: null` to disable security authorization. security: authorization: {} transport: cluster: USD{infinispan.cluster.name:cluster} node-name: USD{infinispan.node.name:} stack: kubernetes server: endpoints: # [USER] Hot Rod and REST endpoints. - securityRealm: default socketBinding: default # [METRICS] Metrics endpoint for cluster monitoring capabilities. - connectors: rest: restConnector: authentication: mechanisms: BASIC securityRealm: metrics socketBinding: metrics interfaces: - inetAddress: value: USD{infinispan.bind.address:127.0.0.1} name: public security: credentialStores: - clearTextCredential: clearText: secret name: credentials path: credentials.pfx securityRealms: # [USER] Security realm for the Hot Rod and REST endpoints. - name: default # [USER] Comment or remove this properties realm to disable authentication. propertiesRealm: groupProperties: path: groups.properties groupsAttribute: Roles userProperties: path: users.properties # [METRICS] Security realm for the metrics endpoint. - name: metrics propertiesRealm: groupProperties: path: metrics-groups.properties relativeTo: infinispan.server.config.path groupsAttribute: Roles userProperties: path: metrics-users.properties plainText: true relativeTo: infinispan.server.config.path socketBindings: defaultInterface: public portOffset: USD{infinispan.socket.binding.port-offset:0} socketBinding: # [USER] Socket binding for the Hot Rod and REST endpoints. - name: default port: 11222 # [METRICS] Socket binding for the metrics endpoint. 
- name: metrics port: 11223 Data Grid cache configuration deploy: infinispan: cacheContainer: distributedCache: name: "mycache" mode: "SYNC" owners: "2" segments: "256" capacityFactor: "1.0" statistics: "true" encoding: mediaType: "application/x-protostream" expiration: lifespan: "5000" maxIdle: "1000" memory: maxCount: "1000000" whenFull: "REMOVE" partitionHandling: whenSplit: "ALLOW_READ_WRITES" mergePolicy: "PREFERRED_NON_NULL" #Provide additional Cache Manager configuration. server: #Provide configuration for server instances. Cache template deploy: infinispan: cacheContainer: distributedCacheConfiguration: name: "my-dist-template" mode: "SYNC" statistics: "true" encoding: mediaType: "application/x-protostream" expiration: lifespan: "5000" maxIdle: "1000" memory: maxCount: "1000000" whenFull: "REMOVE" #Provide additional Cache Manager configuration. server: #Provide configuration for server instances. Cluster transport deploy: infinispan: cacheContainer: transport: #Specifies the name of a default JGroups stack. stack: kubernetes #Provide additional Cache Manager configuration. server: #Provide configuration for server instances. Additional resources Data Grid Server Guide Configuring Data Grid
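As a usage sketch, custom deploy.infinispan values such as the examples above are typically supplied to the chart from a values file. The chart reference and release name below are assumptions; substitute the ones used in your environment:
helm install infinispan openshift-helm-charts/redhat-data-grid --values infinispan-values.yaml
helm upgrade infinispan openshift-helm-charts/redhat-data-grid --values infinispan-values.yaml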
[ "deploy: infinispan: cacheContainer: # [USER] Add cache, template, and counter configuration. name: default # [USER] Specify `security: null` to disable security authorization. security: authorization: {} transport: cluster: USD{infinispan.cluster.name:cluster} node-name: USD{infinispan.node.name:} stack: kubernetes server: endpoints: # [USER] Hot Rod and REST endpoints. - securityRealm: default socketBinding: default # [METRICS] Metrics endpoint for cluster monitoring capabilities. - connectors: rest: restConnector: authentication: mechanisms: BASIC securityRealm: metrics socketBinding: metrics interfaces: - inetAddress: value: USD{infinispan.bind.address:127.0.0.1} name: public security: credentialStores: - clearTextCredential: clearText: secret name: credentials path: credentials.pfx securityRealms: # [USER] Security realm for the Hot Rod and REST endpoints. - name: default # [USER] Comment or remove this properties realm to disable authentication. propertiesRealm: groupProperties: path: groups.properties groupsAttribute: Roles userProperties: path: users.properties # [METRICS] Security realm for the metrics endpoint. - name: metrics propertiesRealm: groupProperties: path: metrics-groups.properties relativeTo: infinispan.server.config.path groupsAttribute: Roles userProperties: path: metrics-users.properties plainText: true relativeTo: infinispan.server.config.path socketBindings: defaultInterface: public portOffset: USD{infinispan.socket.binding.port-offset:0} socketBinding: # [USER] Socket binding for the Hot Rod and REST endpoints. - name: default port: 11222 # [METRICS] Socket binding for the metrics endpoint. - name: metrics port: 11223", "deploy: infinispan: cacheContainer: distributedCache: name: \"mycache\" mode: \"SYNC\" owners: \"2\" segments: \"256\" capacityFactor: \"1.0\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" expiration: lifespan: \"5000\" maxIdle: \"1000\" memory: maxCount: \"1000000\" whenFull: \"REMOVE\" partitionHandling: whenSplit: \"ALLOW_READ_WRITES\" mergePolicy: \"PREFERRED_NON_NULL\" #Provide additional Cache Manager configuration. server: #Provide configuration for server instances.", "deploy: infinispan: cacheContainer: distributedCacheConfiguration: name: \"my-dist-template\" mode: \"SYNC\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" expiration: lifespan: \"5000\" maxIdle: \"1000\" memory: maxCount: \"1000000\" whenFull: \"REMOVE\" #Provide additional Cache Manager configuration. server: #Provide configuration for server instances.", "deploy: infinispan: cacheContainer: transport: #Specifies the name of a default JGroups stack. stack: kubernetes #Provide additional Cache Manager configuration. server: #Provide configuration for server instances." ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/building_and_deploying_data_grid_clusters_with_helm/configuring-servers
Chapter 4. Serving
Chapter 4. Serving 4.1. Getting started with Knative Serving 4.1.1. Serverless applications Serverless applications are created and deployed as Kubernetes services, defined by a route and a configuration, and contained in a YAML file. To deploy a serverless application using OpenShift Serverless, you must create a Knative Service object. Example Knative Service object YAML file apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: "Hello Serverless!" 1 The name of the application. 2 The namespace the application uses. 3 The image of the application. 4 The environment variable printed out by the sample application. You can create a serverless application by using one of the following methods: Create a Knative service from the OpenShift Container Platform web console. See Creating applications using the Developer perspective for more information. Create a Knative service by using the Knative ( kn ) CLI. Create and apply a Knative Service object as a YAML file, by using the oc CLI. 4.1.1.1. Creating serverless applications by using the Knative CLI Using the Knative ( kn ) CLI to create serverless applications provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service create command to create a basic serverless application. Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Knative service: USD kn service create <service-name> --image <image> --tag <tag-value> Where: --image is the URI of the image for the application. --tag is an optional flag that can be used to add a tag to the initial revision that is created with the service. Example command USD kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest Example output Creating service 'event-display' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration "event-display" is waiting for a Revision to become ready. 3.857s ... 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL: http://event-display-default.apps-crc.testing 4.1.1.2. Creating serverless applications using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a serverless application by using YAML, you must create a YAML file that defines a Knative Service object, then apply it by using oc apply . After the service is created and the application is deployed, Knative creates an immutable revision for this version of the application. Knative also performs network programming to create a route, ingress, service, and load balancer for your application and automatically scales your pods up and down based on traffic. Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. 
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Install the OpenShift CLI ( oc ). Procedure Create a YAML file containing the following sample code: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-delivery namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest env: - name: RESPONSE value: "Hello Serverless!" Navigate to the directory where the YAML file is contained, and deploy the application by applying the YAML file: USD oc apply -f <filename> If you do not want to switch to the Developer perspective in the OpenShift Container Platform web console or use the Knative ( kn ) CLI or YAML files, you can create Knative components by using the Administrator perspective of the OpenShift Container Platform web console. 4.1.1.3. Creating serverless applications using the Administrator perspective Serverless applications are created and deployed as Kubernetes services, defined by a route and a configuration, and contained in a YAML file. To deploy a serverless application using OpenShift Serverless, you must create a Knative Service object. Example Knative Service object YAML file apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: "Hello Serverless!" 1 The name of the application. 2 The namespace the application uses. 3 The image of the application. 4 The environment variable printed out by the sample application. After the service is created and the application is deployed, Knative creates an immutable revision for this version of the application. Knative also performs network programming to create a route, ingress, service, and load balancer for your application and automatically scales your pods up and down based on traffic. Prerequisites To create serverless applications using the Administrator perspective, ensure that you have completed the following steps. The OpenShift Serverless Operator and Knative Serving are installed. You have logged in to the web console and are in the Administrator perspective. Procedure Navigate to the Serverless Serving page. In the Create list, select Service . Manually enter YAML or JSON definitions, or drag and drop a file into the editor. Click Create . 4.1.1.4. Creating a service using offline mode You can execute kn service commands in offline mode, so that no changes happen on the cluster, and instead the service descriptor file is created on your local machine. After the descriptor file is created, you can modify the file before propagating changes to the cluster. Important The offline mode of the Knative CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI.
Procedure In offline mode, create a local Knative service descriptor file: USD kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest \ --target ./ \ --namespace test Example output Service 'event-display' created in namespace 'test'. The --target ./ flag enables offline mode and specifies ./ as the directory for storing the new directory tree. If you do not specify an existing directory, but use a filename, such as --target my-service.yaml , then no directory tree is created. Instead, only the service descriptor file my-service.yaml is created in the current directory. The filename can have the .yaml , .yml , or .json extension. Choosing .json creates the service descriptor file in the JSON format. The --namespace test option places the new service in the test namespace. If you do not use --namespace , and you are logged in to an OpenShift Container Platform cluster, the descriptor file is created in the current namespace. Otherwise, the descriptor file is created in the default namespace. Examine the created directory structure: USD tree ./ Example output ./ └── test └── ksvc └── event-display.yaml 2 directories, 1 file The current ./ directory specified with --target contains the new test/ directory that is named after the specified namespace. The test/ directory contains the ksvc directory, named after the resource type. The ksvc directory contains the descriptor file event-display.yaml , named according to the specified service name. Examine the generated service descriptor file: USD cat test/ksvc/event-display.yaml Example output apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: event-display namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest name: "" resources: {} status: {} List information about the new service: USD kn service describe event-display --target ./ --namespace test Example output Name: event-display Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON The --target ./ option specifies the root directory for the directory structure containing namespace subdirectories. Alternatively, you can directly specify a YAML or JSON filename with the --target option. The accepted file extensions are .yaml , .yml , and .json . The --namespace option specifies the namespace, which communicates to kn the subdirectory that contains the necessary service descriptor file. If you do not use --namespace , and you are logged in to an OpenShift Container Platform cluster, kn searches for the service in the subdirectory that is named after the current namespace. Otherwise, kn searches in the default/ subdirectory. Use the service descriptor file to create the service on the cluster: USD kn service create -f test/ksvc/event-display.yaml Example output Creating service 'event-display' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s ... 0.168s Configuration "event-display" is waiting for a Revision to become ready. 23.377s ... 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. Service 'event-display' created to latest revision 'event-display-00001' is available at URL: http://event-display-test.apps.example.com 4.1.1.5. 
Additional resources Knative Serving CLI commands Configuring JSON Web Token authentication for Knative services 4.1.2. Verifying your serverless application deployment To verify that your serverless application has been deployed successfully, you must get the application URL created by Knative, and then send a request to that URL and observe the output. OpenShift Serverless supports the use of both HTTP and HTTPS URLs; however, the output from oc get ksvc always prints URLs using the http:// format. 4.1.2.1. Verifying your serverless application deployment To verify that your serverless application has been deployed successfully, you must get the application URL created by Knative, and then send a request to that URL and observe the output. OpenShift Serverless supports the use of both HTTP and HTTPS URLs; however, the output from oc get ksvc always prints URLs using the http:// format. Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the OpenShift CLI ( oc ). You have created a Knative service. Procedure Find the application URL: USD oc get ksvc <service_name> Example output NAME URL LATESTCREATED LATESTREADY READY REASON event-delivery http://event-delivery-default.example.com event-delivery-4wsd2 event-delivery-4wsd2 True Make a request to your cluster and observe the output. Example HTTP request USD curl http://event-delivery-default.example.com Example HTTPS request USD curl https://event-delivery-default.example.com Example output Hello Serverless! Optional: If you receive an error relating to a self-signed certificate in the certificate chain, you can add the --insecure flag to the curl command to ignore the error: USD curl https://event-delivery-default.example.com --insecure Example output Hello Serverless! Important Self-signed certificates must not be used in a production deployment. This method is only for testing purposes. Optional: If your OpenShift Container Platform cluster is configured with a certificate that is signed by a certificate authority (CA) but not yet globally configured for your system, you can specify this with the curl command. The path to the certificate can be passed to the curl command by using the --cacert flag: USD curl https://event-delivery-default.example.com --cacert <file> Example output Hello Serverless! 4.2. Autoscaling 4.2.1. Autoscaling Knative Serving provides automatic scaling, or autoscaling , for applications to match incoming demand. For example, if an application is receiving no traffic, and scale-to-zero is enabled, Knative Serving scales the application down to zero replicas. If scale-to-zero is disabled, the application is scaled down to the minimum number of replicas configured for applications on the cluster. Replicas can also be scaled up to meet demand if traffic to the application increases. Autoscaling settings for Knative services can be global settings that are configured by cluster administrators, or per-revision settings that are configured for individual services. You can modify per-revision settings for your services by using the OpenShift Container Platform web console, by modifying the YAML file for your service, or by using the Knative ( kn ) CLI. Note Any limits or targets that you set for a service are measured against a single instance of your application. For example, setting the target annotation to 50 configures the autoscaler to scale the application so that each revision handles 50 requests at a time. 4.2.2.
Scale bounds Scale bounds determine the minimum and maximum numbers of replicas that can serve an application at any given time. You can set scale bounds for an application to help prevent cold starts or control computing costs. 4.2.2.1. Minimum scale bounds The minimum number of replicas that can serve an application is determined by the min-scale annotation. If scale to zero is not enabled, the min-scale value defaults to 1 . The min-scale value defaults to 0 replicas if the following conditions are met: The min-scale annotation is not set Scaling to zero is enabled The class KPA is used Example service spec with min-scale annotation apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/min-scale: "0" ... 4.2.2.1.1. Setting the min-scale annotation by using the Knative CLI Using the Knative ( kn ) CLI to set the min-scale annotation provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service command with the --scale-min flag to create or modify the min-scale value for a service. Prerequisites Knative Serving is installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure Set the minimum number of replicas for the service by using the --scale-min flag: USD kn service create <service_name> --image <image_uri> --scale-min <integer> Example command USD kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-min 2 4.2.2.2. Maximum scale bounds The maximum number of replicas that can serve an application is determined by the max-scale annotation. If the max-scale annotation is not set, there is no upper limit for the number of replicas created. Example service spec with max-scale annotation apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/max-scale: "10" ... 4.2.2.2.1. Setting the max-scale annotation by using the Knative CLI Using the Knative ( kn ) CLI to set the max-scale annotation provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service command with the --scale-max flag to create or modify the max-scale value for a service. Prerequisites Knative Serving is installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure Set the maximum number of replicas for the service by using the --scale-max flag: USD kn service create <service_name> --image <image_uri> --scale-max <integer> Example command USD kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-max 10 4.2.3. Concurrency Concurrency determines the number of simultaneous requests that can be processed by each replica of an application at any given time. Concurrency can be configured as a soft limit or a hard limit : A soft limit is a targeted requests limit, rather than a strictly enforced bound. For example, if there is a sudden burst of traffic, the soft limit target can be exceeded. A hard limit is a strictly enforced upper bound requests limit. If concurrency reaches the hard limit, surplus requests are buffered and must wait until there is enough free capacity to execute the requests. Important Using a hard limit configuration is only recommended if there is a clear use case for it with your application. 
Having a low, hard limit specified may have a negative impact on the throughput and latency of an application, and might cause cold starts. Adding a soft target and a hard limit means that the autoscaler targets the soft target number of concurrent requests, but imposes a hard limit of the hard limit value for the maximum number of requests. If the hard limit value is less than the soft limit value, the soft limit value is tuned down, because there is no need to target more requests than the number that can actually be handled. 4.2.3.1. Configuring a soft concurrency target A soft limit is a targeted requests limit, rather than a strictly enforced bound. For example, if there is a sudden burst of traffic, the soft limit target can be exceeded. You can specify a soft concurrency target for your Knative service by setting the autoscaling.knative.dev/target annotation in the spec, or by using the kn service command with the correct flags. Procedure Optional: Set the autoscaling.knative.dev/target annotation for your Knative service in the spec of the Service custom resource: Example service spec apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target: "200" Optional: Use the kn service command to specify the --concurrency-target flag: USD kn service create <service_name> --image <image_uri> --concurrency-target <integer> Example command to create a service with a concurrency target of 50 requests USD kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-target 50 4.2.3.2. Configuring a hard concurrency limit A hard concurrency limit is a strictly enforced upper bound requests limit. If concurrency reaches the hard limit, surplus requests are buffered and must wait until there is enough free capacity to execute the requests. You can specify a hard concurrency limit for your Knative service by modifying the containerConcurrency spec, or by using the kn service command with the correct flags. Procedure Optional: Set the containerConcurrency spec for your Knative service in the spec of the Service custom resource: Example service spec apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: spec: containerConcurrency: 50 The default value is 0 , which means that there is no limit on the number of simultaneous requests that are permitted to flow into one replica of the service at a time. A value greater than 0 specifies the exact number of requests that are permitted to flow into one replica of the service at a time. This example would enable a hard concurrency limit of 50 requests. Optional: Use the kn service command to specify the --concurrency-limit flag: USD kn service create <service_name> --image <image_uri> --concurrency-limit <integer> Example command to create a service with a concurrency limit of 50 requests USD kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-limit 50 4.2.3.3. Concurrency target utilization This value specifies the percentage of the concurrency limit that is actually targeted by the autoscaler. This is also known as specifying the hotness at which a replica runs, which enables the autoscaler to scale up before the defined hard limit is reached. 
For example, if the containerConcurrency value is set to 10, and the target-utilization-percentage value is set to 70 percent, the autoscaler creates a new replica when the average number of concurrent requests across all existing replicas reaches 7. Requests numbered 7 to 10 are still sent to the existing replicas, but additional replicas are started in anticipation of being required after the containerConcurrency value is reached. Example service configured using the target-utilization-percentage annotation apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target-utilization-percentage: "70" ... 4.2.4. Scale-to-zero Knative Serving provides automatic scaling, or autoscaling , for applications to match incoming demand. 4.2.4.1. Enabling scale-to-zero You can use the enable-scale-to-zero spec to enable or disable scale-to-zero globally for applications on the cluster. Prerequisites You have installed OpenShift Serverless Operator and Knative Serving on your cluster. You have cluster administrator permissions. You are using the default Knative Pod Autoscaler. The scale to zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler. Procedure Modify the enable-scale-to-zero spec in the KnativeServing custom resource (CR): Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: enable-scale-to-zero: "false" 1 1 The enable-scale-to-zero spec can be either "true" or "false" . If set to true, scale-to-zero is enabled. If set to false, applications are scaled down to the configured minimum scale bound . The default value is "true" . 4.2.4.2. Configuring the scale-to-zero grace period Knative Serving provides automatic scaling down to zero pods for applications. You can use the scale-to-zero-grace-period spec to define an upper bound time limit that Knative waits for scale-to-zero machinery to be in place before the last replica of an application is removed. Prerequisites You have installed OpenShift Serverless Operator and Knative Serving on your cluster. You have cluster administrator permissions. You are using the default Knative Pod Autoscaler. The scale-to-zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler. Procedure Modify the scale-to-zero-grace-period spec in the KnativeServing custom resource (CR): Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: scale-to-zero-grace-period: "30s" 1 1 The grace period time in seconds. The default value is 30 seconds. 4.3. Configuring Serverless applications 4.3.1. Overriding Knative Serving system deployment configurations You can override the default configurations for some specific deployments by modifying the deployments spec in the KnativeServing custom resources (CRs). Note You can only override probes that are defined in the deployment by default. All Knative Serving deployments define a readiness and a liveness probe by default, with these exceptions: net-kourier-controller and 3scale-kourier-gateway only define a readiness probe. net-istio-controller and net-istio-webhook define no probes. 4.3.1.1. 
Overriding system deployment configurations Currently, overriding default configuration settings is supported for the resources , replicas , labels , annotations , and nodeSelector fields, as well as for the readiness and liveness fields for probes. In the following example, a KnativeServing CR overrides the net-kourier-controller and webhook deployments so that: The readiness probe timeout for net-kourier-controller is set to 10 seconds. The webhook deployment has specified CPU and memory resource limits. The webhook deployment has 3 replicas. The example-label: label label is added. The example-annotation: annotation annotation is added. The nodeSelector field is set to select nodes with the disktype: hdd label. Note The KnativeServing CR label and annotation settings override the deployment's labels and annotations for both the deployment itself and the resulting pods. KnativeServing CR example apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: ks namespace: knative-serving spec: high-availability: replicas: 2 deployments: - name: net-kourier-controller readinessProbes: 1 - container: controller timeoutSeconds: 10 - name: webhook resources: - container: webhook requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi replicas: 3 labels: example-label: label annotations: example-annotation: annotation nodeSelector: disktype: hdd 1 You can use the readiness and liveness probe overrides to override all fields of a probe in a container of a deployment as specified in the Kubernetes API except for the fields related to the probe handler: exec , grpc , httpGet , and tcpSocket . Additional resources Probe configuration section of the Kubernetes API documentation 4.3.2. Multi-container support for Serving You can deploy a multi-container pod by using a single Knative service. This method is useful for separating application responsibilities into smaller, specialized parts. Important Multi-container support for Serving is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 4.3.2.1. Configuring a multi-container service Multi-container support is enabled by default. You can create a multi-container pod by specifying multiple containers in the service. Procedure Modify your service to include additional containers. Only one container can handle requests, so specify ports for exactly one container. Here is an example configuration with two containers: Multiple containers configuration apiVersion: serving.knative.dev/v1 kind: Service ... spec: template: spec: containers: - name: first-container 1 image: gcr.io/knative-samples/helloworld-go ports: - containerPort: 8080 2 - name: second-container 3 image: gcr.io/knative-samples/helloworld-java 1 First container configuration. 2 Port specification for the first container. 3 Second container configuration. 4.3.3. EmptyDir volumes emptyDir volumes are empty volumes that are created when a pod is created, and are used to provide temporary working disk space. emptyDir volumes are deleted when the pod they were created for is deleted. 4.3.3.1.
Configuring the EmptyDir extension The kubernetes.podspec-volumes-emptydir extension controls whether emptyDir volumes can be used with Knative Serving. To enable using emptyDir volumes, you must modify the KnativeServing custom resource (CR) to include the following YAML: Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-volumes-emptydir: enabled ... 4.3.4. Persistent Volume Claims for Serving Some serverless applications need permanent data storage. To achieve this, you can configure persistent volume claims (PVCs) for your Knative services. 4.3.4.1. Enabling PVC support Procedure To enable Knative Serving to use PVCs and write to them, modify the KnativeServing custom resource (CR) to include the following YAML: Enabling PVCs with write access ... spec: config: features: "kubernetes.podspec-persistent-volume-claim": enabled "kubernetes.podspec-persistent-volume-write": enabled ... The kubernetes.podspec-persistent-volume-claim extension controls whether persistent volumes (PVs) can be used with Knative Serving. The kubernetes.podspec-persistent-volume-write extension controls whether PVs are available to Knative Serving with the write access. To claim a PV, modify your service to include the PV configuration. For example, you might have a persistent volume claim with the following configuration: Note Use the storage class that supports the access mode that you are requesting. For example, you can use the ocs-storagecluster-cephfs class for the ReadWriteMany access mode. PersistentVolumeClaim configuration apiVersion: v1 kind: PersistentVolumeClaim metadata: name: example-pv-claim namespace: my-ns spec: accessModes: - ReadWriteMany storageClassName: ocs-storagecluster-cephfs resources: requests: storage: 1Gi In this case, to claim a PV with write access, modify your service as follows: Knative service PVC configuration apiVersion: serving.knative.dev/v1 kind: Service metadata: namespace: my-ns ... spec: template: spec: containers: ... volumeMounts: 1 - mountPath: /data name: mydata readOnly: false volumes: - name: mydata persistentVolumeClaim: 2 claimName: example-pv-claim readOnly: false 3 1 Volume mount specification. 2 Persistent volume claim specification. 3 Flag that enables read-only access. Note To successfully use persistent storage in Knative services, you need additional configuration, such as the user permissions for the Knative container user. 4.3.4.2. Additional resources Understanding persistent storage 4.3.5. Init containers Init containers are specialized containers that are run before application containers in a pod. They are generally used to implement initialization logic for an application, which may include running setup scripts or downloading required configurations. You can enable the use of init containers for Knative services by modifying the KnativeServing custom resource (CR). Note Init containers may cause longer application start-up times and should be used with caution for serverless applications, which are expected to scale up and down frequently. 4.3.5.1. Enabling init containers Prerequisites You have installed OpenShift Serverless Operator and Knative Serving on your cluster. You have cluster administrator permissions. 
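After the feature is enabled by the following procedure, a Knative service can declare init containers in its revision template. The sketch below is illustrative only; the container names and images are hypothetical placeholders, not images shipped with OpenShift Serverless:
Example Knative service with an init container (illustrative)
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
  template:
    spec:
      initContainers:
      - name: init-setup # hypothetical init container that prepares the application environment
        image: registry.example.com/example/init-setup:latest
      containers:
      - image: registry.example.com/example/app:latest # hypothetical application container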
Procedure Enable the use of init containers by adding the kubernetes.podspec-init-containers flag to the KnativeServing CR: Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-init-containers: enabled ... 4.3.6. Resolving image tags to digests If the Knative Serving controller has access to the container registry, Knative Serving resolves image tags to a digest when you create a revision of a service. This is known as tag-to-digest resolution , and helps to provide consistency for deployments. 4.3.6.1. Tag-to-digest resolution To give the controller access to the container registry on OpenShift Container Platform, you must create a secret and then configure controller custom certificates. You can configure controller custom certificates by modifying the controller-custom-certs spec in the KnativeServing custom resource (CR). The secret must reside in the same namespace as the KnativeServing CR. If a secret is not included in the KnativeServing CR, this setting defaults to using public key infrastructure (PKI). When using PKI, the cluster-wide certificates are automatically injected into the Knative Serving controller by using the config-service-sa config map. The OpenShift Serverless Operator populates the config-service-sa config map with cluster-wide certificates and mounts the config map as a volume to the controller. 4.3.6.1.1. Configuring tag-to-digest resolution by using a secret If the controller-custom-certs spec uses the Secret type, the secret is mounted as a secret volume. Knative components consume the secret directly, assuming that the secret has the required certificates. Prerequisites You have cluster administrator permissions on OpenShift Container Platform. You have installed the OpenShift Serverless Operator and Knative Serving on your cluster. Procedure Create a secret: Example command USD oc -n knative-serving create secret generic custom-secret --from-file=<secret_name>.crt=<path_to_certificate> Configure the controller-custom-certs spec in the KnativeServing custom resource (CR) to use the Secret type: Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: controller-custom-certs: name: custom-secret type: Secret 4.3.7. Configuring TLS authentication You can use Transport Layer Security (TLS) to encrypt Knative traffic and for authentication. TLS is the only supported method of traffic encryption for Knative Kafka. Red Hat recommends using both SASL and TLS together for Knative Kafka resources. Note If you want to enable internal TLS with a Red Hat OpenShift Service Mesh integration, you must enable Service Mesh with mTLS instead of the internal encryption explained in the following procedure. See the documentation for Enabling Knative Serving metrics when using Service Mesh with mTLS . 4.3.7.1. Enabling TLS authentication for internal traffic OpenShift Serverless supports TLS edge termination by default, so that HTTPS traffic from end users is encrypted. However, internal traffic behind the OpenShift route is forwarded to applications by using plain data. By enabling TLS for internal traffic, the traffic sent between components is encrypted, which makes this traffic more secure. 
Note If you want to enable internal TLS with a Red Hat OpenShift Service Mesh integration, you must enable Service Mesh with mTLS instead of the internal encryption explained in the following procedure. Important Internal TLS encryption support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have installed the OpenShift Serverless Operator and Knative Serving. You have installed the OpenShift ( oc ) CLI. Procedure Modify the KnativeServing custom resource (CR) to include the internal-encryption: "true" field in the spec: ... spec: config: network: internal-encryption: "true" ... Restart the activator pods in the knative-serving namespace to load the certificates: USD oc delete pod -n knative-serving --selector app=activator Additional resources Configuring TLS authentication for Kafka brokers Configuring TLS authentication for Kafka channels Enabling Knative Serving metrics when using Service Mesh with mTLS 4.3.8. Restrictive network policies 4.3.8.1. Clusters with restrictive network policies If you are using a cluster that multiple users have access to, your cluster might use network policies to control which pods, services, and namespaces can communicate with each other over the network. If your cluster uses restrictive network policies, it is possible that Knative system pods are not able to access your Knative application. For example, if your namespace has the following network policy, which denies all requests, Knative system pods cannot access your Knative application: Example NetworkPolicy object that denies all requests to the namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: example-namespace spec: podSelector: ingress: [] 4.3.8.2. Enabling communication with Knative applications on a cluster with restrictive network policies To allow access to your applications from Knative system pods, you must add a label to each of the Knative system namespaces, and then create a NetworkPolicy object in your application namespace that allows access to the namespace for other namespaces that have this label. Important A network policy that denies requests to non-Knative services on your cluster still prevents access to these services. However, by allowing access from Knative system namespaces to your Knative application, you are allowing access to your Knative application from all namespaces in the cluster. If you do not want to allow access to your Knative application from all namespaces on the cluster, you might want to use JSON Web Token authentication for Knative services instead. JSON Web Token authentication for Knative services requires Service Mesh. Prerequisites Install the OpenShift CLI ( oc ). OpenShift Serverless Operator and Knative Serving are installed on your cluster.
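Before you begin the procedure, you can check whether a restrictive policy, such as the deny-by-default example shown earlier, already exists in your application namespace. This check is illustrative and is not a required step:
USD oc get networkpolicy -n <namespace>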
Procedure Add the knative.openshift.io/system-namespace=true label to each Knative system namespace that requires access to your application: Label the knative-serving namespace: USD oc label namespace knative-serving knative.openshift.io/system-namespace=true Label the knative-serving-ingress namespace: USD oc label namespace knative-serving-ingress knative.openshift.io/system-namespace=true Label the knative-eventing namespace: USD oc label namespace knative-eventing knative.openshift.io/system-namespace=true Label the knative-kafka namespace: USD oc label namespace knative-kafka knative.openshift.io/system-namespace=true Create a NetworkPolicy object in your application namespace to allow access from namespaces with the knative.openshift.io/system-namespace label: Example NetworkPolicy object apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: <network_policy_name> 1 namespace: <namespace> 2 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/system-namespace: "true" podSelector: {} policyTypes: - Ingress 1 Provide a name for your network policy. 2 The namespace where your application exists. 4.4. Traffic splitting 4.4.1. Traffic splitting overview In a Knative application, traffic can be managed by creating a traffic split. A traffic split is configured as part of a route, which is managed by a Knative service. Configuring a route allows requests to be sent to different revisions of a service. This routing is determined by the traffic spec of the Service object. A traffic spec declaration consists of one or more revisions, each responsible for handling a portion of the overall traffic. The percentages of traffic routed to each revision must add up to 100%, which is ensured by a Knative validation. The revisions specified in a traffic spec can either be a fixed, named revision, or can point to the "latest" revision, which tracks the head of the list of all revisions for the service. The "latest" revision is a type of floating reference that updates if a new revision is created. Each revision can have a tag attached that creates an additional access URL for that revision. The traffic spec can be modified by: Editing the YAML of a Service object directly. Using the Knative ( kn ) CLI --traffic flag. Using the OpenShift Container Platform web console. When you create a Knative service, it does not have any default traffic spec settings. 4.4.2. Traffic spec examples The following example shows a traffic spec where 100% of traffic is routed to the latest revision of the service. Under status , you can see the name of the latest revision that latestRevision resolves to: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - latestRevision: true percent: 100 status: ... traffic: - percent: 100 revisionName: example-service The following example shows a traffic spec where 100% of traffic is routed to the revision tagged as current , and the name of that revision is specified as example-service . The revision tagged as latest is kept available, even though no traffic is routed to it: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - tag: current revisionName: example-service percent: 100 - tag: latest latestRevision: true percent: 0 The following example shows how the list of revisions in the traffic spec can be extended so that traffic is split between multiple revisions. 
This example sends 50% of traffic to the revision tagged as current , and 50% of traffic to the revision tagged as candidate . The revision tagged as latest is kept available, even though no traffic is routed to it: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - tag: current revisionName: example-service-1 percent: 50 - tag: candidate revisionName: example-service-2 percent: 50 - tag: latest latestRevision: true percent: 0 4.4.3. Traffic splitting using the Knative CLI Using the Knative ( kn ) CLI to create traffic splits provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service update command to split traffic between revisions of a service. 4.4.3.1. Creating a traffic split by using the Knative CLI Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. You have created a Knative service. Procedure Specify the revision of your service and what percentage of traffic you want to route to it by using the --traffic tag with a standard kn service update command: Example command USD kn service update <service_name> --traffic <revision>=<percentage> Where: <service_name> is the name of the Knative service that you are configuring traffic routing for. <revision> is the revision that you want to configure to receive a percentage of traffic. You can either specify the name of the revision, or a tag that you assigned to the revision by using the --tag flag. <percentage> is the percentage of traffic that you want to send to the specified revision. Optional: The --traffic flag can be specified multiple times in one command. For example, if you have a revision tagged as @latest and a revision named stable , you can specify the percentage of traffic that you want to split to each revision as follows: Example command USD kn service update example-service --traffic @latest=20,stable=80 If you have multiple revisions and do not specify the percentage of traffic that should be split to the last revision, the --traffic flag can calculate this automatically. For example, if you have a third revision named example , and you use the following command: Example command USD kn service update example-service --traffic @latest=10,stable=60 The remaining 30% of traffic is split to the example revision, even though it was not specified. 4.4.4. CLI flags for traffic splitting The Knative ( kn ) CLI supports traffic operations on the traffic block of a service as part of the kn service update command. 4.4.4.1. Knative CLI traffic splitting flags The following table displays a summary of traffic splitting flags, value formats, and the operation the flag performs. The Repetition column denotes whether repeating the particular value of flag is allowed in a kn service update command. Flag Value(s) Operation Repetition --traffic RevisionName=Percent Gives Percent traffic to RevisionName Yes --traffic Tag=Percent Gives Percent traffic to the revision having Tag Yes --traffic @latest=Percent Gives Percent traffic to the latest ready revision No --tag RevisionName=Tag Gives Tag to RevisionName Yes --tag @latest=Tag Gives Tag to the latest ready revision No --untag Tag Removes Tag from revision Yes 4.4.4.1.1. Multiple flags and order precedence All traffic-related flags can be specified using a single kn service update command. kn defines the precedence of these flags. 
The order of the flags specified when using the command is not taken into account. The precedence of the flags as they are evaluated by kn is: --untag : All the referenced revisions with this flag are removed from the traffic block. --tag : Revisions are tagged as specified in the traffic block. --traffic : The referenced revisions are assigned a portion of the traffic split. You can add tags to revisions and then split traffic according to the tags you have set. 4.4.4.1.2. Custom URLs for revisions Assigning a --tag flag to a service by using the kn service update command creates a custom URL for the revision that is created when you update the service. The custom URL follows the pattern https://<tag>-<service_name>-<namespace>.<domain> or http://<tag>-<service_name>-<namespace>.<domain> . The --tag and --untag flags use the following syntax: Require one value. Denote a unique tag in the traffic block of the service. Can be specified multiple times in one command. 4.4.4.1.2.1. Example: Assign a tag to a revision The following example assigns the tag example-tag to the latest ready revision of a service: USD kn service update <service_name> --tag @latest=example-tag 4.4.4.1.2.2. Example: Remove a tag from a revision You can remove a tag to remove the custom URL by using the --untag flag. Note If a revision has its tags removed, and it is assigned 0% of the traffic, the revision is removed from the traffic block entirely. The following command removes the tag example-tag from the revision that it is assigned to: USD kn service update <service_name> --untag example-tag 4.4.5. Splitting traffic between revisions After you create a serverless application, the application is displayed in the Topology view of the Developer perspective in the OpenShift Container Platform web console. The application revision is represented by the node, and the Knative service is indicated by a quadrilateral around the node. Any new change in the code or the service configuration creates a new revision, which is a snapshot of the code at a given time. For a service, you can manage the traffic between the revisions of the service by splitting and routing it to the different revisions as required. 4.4.5.1. Managing traffic between revisions by using the OpenShift Container Platform web console Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have logged in to the OpenShift Container Platform web console. Procedure To split traffic between multiple revisions of an application in the Topology view: Click the Knative service to see its overview in the side panel. Click the Resources tab to see a list of Revisions and Routes for the service. Figure 4.1. Serverless application Click the service, indicated by the S icon at the top of the side panel, to see an overview of the service details. Click the YAML tab and modify the service configuration in the YAML editor, and click Save . For example, change the timeoutSeconds value from 300 to 301 . This change in the configuration triggers a new revision. In the Topology view, the latest revision is displayed and the Resources tab for the service now displays the two revisions. In the Resources tab, click Set Traffic Distribution to see the traffic distribution dialog box: Add the percentage of traffic that you want to route to each of the two revisions in the Splits field. Add tags to create custom URLs for the two revisions. Click Save to see two nodes representing the two revisions in the Topology view. Figure 4.2. Serverless application revisions 4.4.6.
Rerouting traffic using blue-green strategy You can safely reroute traffic from a production version of an app to a new version, by using a blue-green deployment strategy . 4.4.6.1. Routing and managing traffic by using a blue-green deployment strategy Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. Install the OpenShift CLI ( oc ). Procedure Create and deploy an app as a Knative service. Find the name of the first revision that was created when you deployed the service, by viewing the output from the following command: USD oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}' Example command USD oc get ksvc example-service -o=jsonpath='{.status.latestCreatedRevisionName}' Example output USD example-service-00001 Add the following YAML to the service spec to send inbound traffic to the revision: ... spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic goes to this revision ... Verify that you can view your app at the URL output you get from running the following command: USD oc get ksvc <service_name> Deploy a second revision of your app by modifying at least one field in the template spec of the service and redeploying it. For example, you can modify the image of the service, or an env environment variable. You can redeploy the service by applying the service YAML file, or by using the kn service update command if you have installed the Knative ( kn ) CLI. Find the name of the second, latest revision that was created when you redeployed the service, by running the command: USD oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}' At this point, both the first and second revisions of the service are deployed and running. Update your existing service to create a new, test endpoint for the second revision, while still sending all other traffic to the first revision: Example of updated service spec with test endpoint ... spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic is still being routed to the first revision - revisionName: <second_revision_name> percent: 0 # No traffic is routed to the second revision tag: v2 # A named route ... After you redeploy this service by reapplying the YAML resource, the second revision of the app is now staged. No traffic is routed to the second revision at the main URL, and Knative creates a new service named v2 for testing the newly deployed revision. Get the URL of the new service for the second revision, by running the following command: USD oc get ksvc <service_name> --output jsonpath="{.status.traffic[*].url}" You can use this URL to validate that the new version of the app is behaving as expected before you route any traffic to it. Update your existing service again, so that 50% of traffic is sent to the first revision, and 50% is sent to the second revision: Example of updated service spec splitting traffic 50/50 between revisions ... spec: traffic: - revisionName: <first_revision_name> percent: 50 - revisionName: <second_revision_name> percent: 50 tag: v2 ... When you are ready to route all traffic to the new version of the app, update the service again to send 100% of traffic to the second revision: Example of updated service spec sending all traffic to the second revision ... spec: traffic: - revisionName: <first_revision_name> percent: 0 - revisionName: <second_revision_name> percent: 100 tag: v2 ... 
Tip You can remove the first revision instead of setting it to 0% of traffic if you do not plan to roll back the revision. Non-routeable revision objects are then garbage-collected. Visit the URL of the first revision to verify that no more traffic is being sent to the old version of the app. 4.5. External and Ingress routing 4.5.1. Routing overview Knative leverages OpenShift Container Platform TLS termination to provide routing for Knative services. When a Knative service is created, an OpenShift Container Platform route is automatically created for the service. This route is managed by the OpenShift Serverless Operator. The OpenShift Container Platform route exposes the Knative service through the same domain as the OpenShift Container Platform cluster. You can disable Operator control of OpenShift Container Platform routing so that you can configure a Knative route to directly use your TLS certificates instead. Knative routes can also be used alongside the OpenShift Container Platform route to provide additional fine-grained routing capabilities, such as traffic splitting. 4.5.1.1. Additional resources Route-specific annotations 4.5.2. Customizing labels and annotations OpenShift Container Platform routes support the use of custom labels and annotations, which you can configure by modifying the metadata spec of a Knative service. Custom labels and annotations are propagated from the service to the Knative route, then to the Knative ingress, and finally to the OpenShift Container Platform route. 4.5.2.1. Customizing labels and annotations for OpenShift Container Platform routes Prerequisites You must have the OpenShift Serverless Operator and Knative Serving installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Procedure Create a Knative service that contains the label or annotation that you want to propagate to the OpenShift Container Platform route: To create a service by using YAML: Example service created by using YAML apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> labels: <label_name>: <label_value> annotations: <annotation_name>: <annotation_value> ... To create a service by using the Knative ( kn ) CLI, enter: Example service created by using a kn command USD kn service create <service_name> \ --image=<image> \ --annotation <annotation_name>=<annotation_value> \ --label <label_value>=<label_value> Verify that the OpenShift Container Platform route has been created with the annotation or label that you added by inspecting the output from the following command: Example command for verification USD oc get routes.route.openshift.io \ -l serving.knative.openshift.io/ingressName=<service_name> \ 1 -l serving.knative.openshift.io/ingressNamespace=<service_namespace> \ 2 -n knative-serving-ingress -o yaml \ | grep -e "<label_name>: \"<label_value>\"" -e "<annotation_name>: <annotation_value>" 3 1 Use the name of your service. 2 Use the namespace where your service was created. 3 Use your values for the label and annotation names and values. 4.5.3. Configuring routes for Knative services If you want to configure a Knative service to use your TLS certificate on OpenShift Container Platform, you must disable the automatic creation of a route for the service by the OpenShift Serverless Operator and instead manually create a route for the service. Note When you complete the following procedure, the default OpenShift Container Platform route in the knative-serving-ingress namespace is not created. 
However, the Knative route for the application is still created in this namespace. 4.5.3.1. Configuring OpenShift Container Platform routes for Knative services Prerequisites The OpenShift Serverless Operator and Knative Serving component must be installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Procedure Create a Knative service that includes the serving.knative.openshift.io/disableRoute=true annotation: Important The serving.knative.openshift.io/disableRoute=true annotation instructs OpenShift Serverless to not automatically create a route for you. However, the service still shows a URL and reaches a status of Ready . This URL does not work externally until you create your own route with the same hostname as the hostname in the URL. Create a Knative Service resource: Example resource apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> annotations: serving.knative.openshift.io/disableRoute: "true" spec: template: spec: containers: - image: <image> ... Apply the Service resource: USD oc apply -f <filename> Optional. Create a Knative service by using the kn service create command: Example kn command USD kn service create <service_name> \ --image=gcr.io/knative-samples/helloworld-go \ --annotation serving.knative.openshift.io/disableRoute=true Verify that no OpenShift Container Platform route has been created for the service: Example command USD USD oc get routes.route.openshift.io \ -l serving.knative.openshift.io/ingressName=USDKSERVICE_NAME \ -l serving.knative.openshift.io/ingressNamespace=USDKSERVICE_NAMESPACE \ -n knative-serving-ingress You will see the following output: No resources found in knative-serving-ingress namespace. Create a Route resource in the knative-serving-ingress namespace: apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 600s 1 name: <route_name> 2 namespace: knative-serving-ingress 3 spec: host: <service_host> 4 port: targetPort: http2 to: kind: Service name: kourier weight: 100 tls: insecureEdgeTerminationPolicy: Allow termination: edge 5 key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE---- wildcardPolicy: None 1 The timeout value for the OpenShift Container Platform route. You must set the same value as the max-revision-timeout-seconds setting ( 600s by default). 2 The name of the OpenShift Container Platform route. 3 The namespace for the OpenShift Container Platform route. This must be knative-serving-ingress . 4 The hostname for external access. You can set this to <service_name>-<service_namespace>.<domain> . 5 The certificates you want to use. Currently, only edge termination is supported. Apply the Route resource: USD oc apply -f <filename> 4.5.4. Global HTTPS redirection HTTPS redirection provides redirection for incoming HTTP requests. These redirected HTTP requests are encrypted. You can enable HTTPS redirection for all services on the cluster by configuring the httpProtocol spec for the KnativeServing custom resource (CR). 4.5.4.1. HTTPS redirection global settings Example KnativeServing CR that enables HTTPS redirection apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: network: httpProtocol: "redirected" ... 4.5.5. 
URL scheme for external routes The URL scheme of external routes defaults to HTTPS for enhanced security. This scheme is determined by the default-external-scheme key in the KnativeServing custom resource (CR) spec. 4.5.5.1. Setting the URL scheme for external routes Default spec ... spec: config: network: default-external-scheme: "https" ... You can override the default spec to use HTTP by modifying the default-external-scheme key: HTTP override spec ... spec: config: network: default-external-scheme: "http" ... 4.5.6. HTTPS redirection per service You can enable or disable HTTPS redirection for a service by configuring the networking.knative.dev/http-option annotation. 4.5.6.1. Redirecting HTTPS for a service The following example shows how you can use this annotation in a Knative Service YAML object: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example namespace: default annotations: networking.knative.dev/http-option: "redirected" spec: ... 4.5.7. Cluster local availability By default, Knative services are published to a public IP address. Being published to a public IP address means that Knative services are public applications, and have a publicly accessible URL. Publicly accessible URLs are accessible from outside of the cluster. However, developers may need to build back-end services that are only be accessible from inside the cluster, known as private services . Developers can label individual services in the cluster with the networking.knative.dev/visibility=cluster-local label to make them private. Important For OpenShift Serverless 1.15.0 and newer versions, the serving.knative.dev/visibility label is no longer available. You must update existing services to use the networking.knative.dev/visibility label instead. 4.5.7.1. Setting cluster availability to cluster local Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have created a Knative service. Procedure Set the visibility for your service by adding the networking.knative.dev/visibility=cluster-local label: USD oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local Verification Check that the URL for your service is now in the format http://<service_name>.<namespace>.svc.cluster.local , by entering the following command and reviewing the output: USD oc get ksvc Example output NAME URL LATESTCREATED LATESTREADY READY REASON hello http://hello.default.svc.cluster.local hello-tx2g7 hello-tx2g7 True 4.5.7.2. Enabling TLS authentication for cluster local services For cluster local services, the Kourier local gateway kourier-internal is used. If you want to use TLS traffic against the Kourier local gateway, you must configure your own server certificates in the local gateway. Prerequisites You have installed the OpenShift Serverless Operator and Knative Serving. You have administrator permissions. You have installed the OpenShift ( oc ) CLI. Procedure Deploy server certificates in the knative-serving-ingress namespace: USD export san="knative" Note Subject Alternative Name (SAN) validation is required so that these certificates can serve the request to <app_name>.<namespace>.svc.cluster.local . 
Generate a root key and certificate: USD openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \ -subj '/O=Example/CN=Example' \ -keyout ca.key \ -out ca.crt Generate a server key that uses SAN validation: USD openssl req -out tls.csr -newkey rsa:2048 -nodes -keyout tls.key \ -subj "/CN=Example/O=Example" \ -addext "subjectAltName = DNS:USDsan" Create server certificates: USD openssl x509 -req -extfile <(printf "subjectAltName=DNS:USDsan") \ -days 365 -in tls.csr \ -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt Configure a secret for the Kourier local gateway: Deploy a secret in knative-serving-ingress namespace from the certificates created by the steps: USD oc create -n knative-serving-ingress secret tls server-certs \ --key=tls.key \ --cert=tls.crt --dry-run=client -o yaml | oc apply -f - Update the KnativeServing custom resource (CR) spec to use the secret that was created by the Kourier gateway: Example KnativeServing CR ... spec: config: kourier: cluster-cert-secret: server-certs ... The Kourier controller sets the certificate without restarting the service, so that you do not need to restart the pod. You can access the Kourier internal service with TLS through port 443 by mounting and using the ca.crt from the client. 4.5.8. Kourier Gateway service type The Kourier Gateway is exposed by default as the ClusterIP service type. This service type is determined by the service-type ingress spec in the KnativeServing custom resource (CR). Default spec ... spec: ingress: kourier: service-type: ClusterIP ... 4.5.8.1. Setting the Kourier Gateway service type You can override the default service type to use a load balancer service type instead by modifying the service-type spec: LoadBalancer override spec ... spec: ingress: kourier: service-type: LoadBalancer ... 4.5.9. Using HTTP2 and gRPC OpenShift Serverless supports only insecure or edge-terminated routes. Insecure or edge-terminated routes do not support HTTP2 on OpenShift Container Platform. These routes also do not support gRPC because gRPC is transported by HTTP2. If you use these protocols in your application, you must call the application using the ingress gateway directly. To do this you must find the ingress gateway's public address and the application's specific host. 4.5.9.1. Interacting with a serverless application using HTTP2 and gRPC Important This method applies to OpenShift Container Platform 4.10 and later. For older versions, see the following section. Prerequisites Install OpenShift Serverless Operator and Knative Serving on your cluster. Install the OpenShift CLI ( oc ). Create a Knative service. Upgrade OpenShift Container Platform 4.10 or later. Enable HTTP/2 on OpenShift Ingress controller. Procedure Add the serverless.openshift.io/default-enable-http2=true annotation to the KnativeServing Custom Resource: USD oc annotate knativeserving <your_knative_CR> -n knative-serving serverless.openshift.io/default-enable-http2=true After the annotation is added, you can verify that the appProtocol value of the Kourier service is h2c : USD oc get svc -n knative-serving-ingress kourier -o jsonpath="{.spec.ports[0].appProtocol}" Example output h2c Now you can use the gRPC framework over the HTTP/2 protocol for external traffic, for example: import "google.golang.org/grpc" grpc.Dial( YOUR_URL, 1 grpc.WithTransportCredentials(insecure.NewCredentials())), 2 ) 1 Your ksvc URL. 2 Your certificate. Additional resources Enabling HTTP/2 Ingress connectivity 4.5.9.2. 
Interacting with a serverless application using HTTP2 and gRPC in OpenShift Container Platform 4.9 and older Important This method needs to expose Kourier Gateway using the LoadBalancer service type. You can configure this by adding the following YAML to your KnativeServing custom resource definition (CRD): ... spec: ingress: kourier: service-type: LoadBalancer ... Prerequisites Install OpenShift Serverless Operator and Knative Serving on your cluster. Install the OpenShift CLI ( oc ). Create a Knative service. Procedure Find the application host. See the instructions in Verifying your serverless application deployment . Find the ingress gateway's public address: USD oc -n knative-serving-ingress get svc kourier Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kourier LoadBalancer 172.30.51.103 a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com 80:31380/TCP,443:31390/TCP 67m The public address is surfaced in the EXTERNAL-IP field, and in this case is a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com . Manually set the host header of your HTTP request to the application's host, but direct the request itself against the public address of the ingress gateway. USD curl -H "Host: hello-default.example.com" a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com Example output Hello Serverless! You can also make a direct gRPC request against the ingress gateway: import "google.golang.org/grpc" grpc.Dial( "a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com:80", grpc.WithAuthority("hello-default.example.com:80"), grpc.WithInsecure(), ) Note Ensure that you append the respective port, 80 by default, to both hosts as shown in the example. 4.6. Configuring access to Knative services 4.6.1. Configuring JSON Web Token authentication for Knative services OpenShift Serverless does not currently have user-defined authorization features. To add user-defined authorization to your deployment, you must integrate OpenShift Serverless with Red Hat OpenShift Service Mesh, and then configure JSON Web Token (JWT) authentication and sidecar injection for Knative services. 4.6.2. Using JSON Web Token authentication with Service Mesh 2.x You can use JSON Web Token (JWT) authentication with Knative services by using Service Mesh 2.x and OpenShift Serverless. To do this, you must create authentication requests and policies in the application namespace that is a member of the ServiceMeshMemberRoll object. You must also enable sidecar injection for the service. 4.6.2.1. Configuring JSON Web Token authentication for Service Mesh 2.x and OpenShift Serverless Important Adding sidecar injection to pods in system namespaces, such as knative-serving and knative-serving-ingress , is not supported when Kourier is enabled. If you require sidecar injection for pods in these namespaces, see the OpenShift Serverless documentation on Integrating Service Mesh with OpenShift Serverless natively . Prerequisites You have installed the OpenShift Serverless Operator, Knative Serving, and Red Hat OpenShift Service Mesh on your cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. 
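Because the resources in this procedure must be created in a namespace that is a member of the ServiceMeshMemberRoll object, you might first want to confirm that your application namespace is listed as a member. The following check assumes the default member roll in the istio-system namespace; adjust the names if your Service Mesh installation differs:
USD oc get servicemeshmemberroll default -n istio-system -o yaml
Confirm that your application namespace appears in the members list of the output before continuing.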
Procedure Add the sidecar.istio.io/inject="true" annotation to your service: Example service apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 1 sidecar.istio.io/rewriteAppHTTPProbers: "true" 2 ... 1 Add the sidecar.istio.io/inject="true" annotation. 2 You must set the annotation sidecar.istio.io/rewriteAppHTTPProbers: "true" in your Knative service, because OpenShift Serverless versions 1.14.0 and higher use an HTTP probe as the readiness probe for Knative services by default. Apply the Service resource: USD oc apply -f <filename> Create a RequestAuthentication resource in each serverless application namespace that is a member in the ServiceMeshMemberRoll object: apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: <namespace> spec: jwtRules: - issuer: [email protected] jwksUri: https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json Apply the RequestAuthentication resource: USD oc apply -f <filename> Allow access to the RequestAuthenticaton resource from system pods for each serverless application namespace that is a member in the ServiceMeshMemberRoll object, by creating the following AuthorizationPolicy resource: apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allowlist-by-paths namespace: <namespace> spec: action: ALLOW rules: - to: - operation: paths: - /metrics 1 - /healthz 2 1 The path on your application to collect metrics by system pod. 2 The path on your application to probe by system pod. Apply the AuthorizationPolicy resource: USD oc apply -f <filename> For each serverless application namespace that is a member in the ServiceMeshMemberRoll object, create the following AuthorizationPolicy resource: apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: require-jwt namespace: <namespace> spec: action: ALLOW rules: - from: - source: requestPrincipals: ["[email protected]/[email protected]"] Apply the AuthorizationPolicy resource: USD oc apply -f <filename> Verification If you try to use a curl request to get the Knative service URL, it is denied: Example command USD curl http://hello-example-1-default.apps.mycluster.example.com/ Example output RBAC: access denied Verify the request with a valid JWT. Get the valid JWT token: USD TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/demo.jwt -s) && echo "USDTOKEN" | cut -d '.' -f2 - | base64 --decode - Access the service by using the valid token in the curl request header: USD curl -H "Authorization: Bearer USDTOKEN" http://hello-example-1-default.apps.example.com The request is now allowed: Example output Hello OpenShift! 4.6.3. Using JSON Web Token authentication with Service Mesh 1.x You can use JSON Web Token (JWT) authentication with Knative services by using Service Mesh 1.x and OpenShift Serverless. To do this, you must create a policy in the application namespace that is a member of the ServiceMeshMemberRoll object. You must also enable sidecar injection for the service. 4.6.3.1. Configuring JSON Web Token authentication for Service Mesh 1.x and OpenShift Serverless Important Adding sidecar injection to pods in system namespaces, such as knative-serving and knative-serving-ingress , is not supported when Kourier is enabled. 
If you require sidecar injection for pods in these namespaces, see the OpenShift Serverless documentation on Integrating Service Mesh with OpenShift Serverless natively . Prerequisites You have installed the OpenShift Serverless Operator, Knative Serving, and Red Hat OpenShift Service Mesh on your cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Add the sidecar.istio.io/inject="true" annotation to your service: Example service apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 1 sidecar.istio.io/rewriteAppHTTPProbers: "true" 2 ... 1 Add the sidecar.istio.io/inject="true" annotation. 2 You must set the annotation sidecar.istio.io/rewriteAppHTTPProbers: "true" in your Knative service, because OpenShift Serverless versions 1.14.0 and higher use an HTTP probe as the readiness probe for Knative services by default. Apply the Service resource: USD oc apply -f <filename> Create a policy in a serverless application namespace which is a member in the ServiceMeshMemberRoll object, that only allows requests with valid JSON Web Tokens (JWT): Important The paths /metrics and /healthz must be included in excludedPaths because they are accessed from system pods in the knative-serving namespace. apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: default namespace: <namespace> spec: origins: - jwt: issuer: [email protected] jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/jwks.json" triggerRules: - excludedPaths: - prefix: /metrics 1 - prefix: /healthz 2 principalBinding: USE_ORIGIN 1 The path on your application to collect metrics by system pod. 2 The path on your application to probe by system pod. Apply the Policy resource: USD oc apply -f <filename> Verification If you try to use a curl request to get the Knative service URL, it is denied: USD curl http://hello-example-default.apps.mycluster.example.com/ Example output Origin authentication failed. Verify the request with a valid JWT. Get the valid JWT token: USD TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/demo.jwt -s) && echo "USDTOKEN" | cut -d '.' -f2 - | base64 --decode - Access the service by using the valid token in the curl request header: USD curl http://hello-example-default.apps.mycluster.example.com/ -H "Authorization: Bearer USDTOKEN" The request is now allowed: Example output Hello OpenShift! 4.7. Configuring custom domains for Knative services 4.7.1. Configuring a custom domain for a Knative service Knative services are automatically assigned a default domain name based on your cluster configuration. For example, <service_name>-<namespace>.example.com . You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can do this by creating a DomainMapping resource for the service. You can also create multiple DomainMapping resources to map multiple domains and subdomains to a single service. 4.7.2. Custom domain mapping You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. 
To map a custom domain name to a custom resource (CR), you must create a DomainMapping CR that maps to an Addressable target CR, such as a Knative service or a Knative route. 4.7.2.1. Creating a custom domain mapping You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. To map a custom domain name to a custom resource (CR), you must create a DomainMapping CR that maps to an Addressable target CR, such as a Knative service or a Knative route. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a Knative service and control a custom domain that you want to map to that service. Note Your custom domain must point to the IP address of the OpenShift Container Platform cluster. Procedure Create a YAML file containing the DomainMapping CR in the same namespace as the target CR you want to map to: apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1 1 The custom domain name that you want to map to the target CR. 2 The namespace of both the DomainMapping CR and the target CR. 3 The name of the target CR to map to the custom domain. 4 The type of CR being mapped to the custom domain. Example service domain mapping apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: example-domain namespace: default spec: ref: name: example-service kind: Service apiVersion: serving.knative.dev/v1 Example route domain mapping apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: example-domain namespace: default spec: ref: name: example-route kind: Route apiVersion: serving.knative.dev/v1 Apply the DomainMapping CR as a YAML file: USD oc apply -f <filename> 4.7.3. Custom domains for Knative services using the Knative CLI You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can use the Knative ( kn ) CLI to create a DomainMapping custom resource (CR) that maps to an Addressable target CR, such as a Knative service or a Knative route. 4.7.3.1. Creating a custom domain mapping by using the Knative CLI Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have created a Knative service or route, and control a custom domain that you want to map to that CR. Note Your custom domain must point to the DNS of the OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Map a domain to a CR in the current namespace: USD kn domain create <domain_mapping_name> --ref <target_name> Example command USD kn domain create example-domain-map --ref example-service The --ref flag specifies an Addressable target CR for domain mapping. If a prefix is not provided when using the --ref flag, it is assumed that the target is a Knative service in the current namespace. 
Map a domain to a Knative service in a specified namespace: USD kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace> Example command USD kn domain create example-domain-map --ref ksvc:example-service:example-namespace Map a domain to a Knative route: USD kn domain create <domain_mapping_name> --ref <kroute:route_name> Example command USD kn domain create example-domain-map --ref kroute:example-route 4.7.4. Domain mapping using the Developer perspective You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can use the Developer perspective of the OpenShift Container Platform web console to map a DomainMapping custom resource (CR) to a Knative service. 4.7.4.1. Mapping a custom domain to a service by using the Developer perspective Prerequisites You have logged in to the web console. You are in the Developer perspective. The OpenShift Serverless Operator and Knative Serving are installed on your cluster. This must be completed by a cluster administrator. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a Knative service and control a custom domain that you want to map to that service. Note Your custom domain must point to the IP address of the OpenShift Container Platform cluster. Procedure Navigate to the Topology page. Right-click on the service that you want to map to a domain, and select the Edit option that contains the service name. For example, if the service is named example-service , select the Edit example-service option. In the Advanced options section, click Show advanced Routing options . If the domain mapping CR that you want to map to the service already exists, you can select it in the Domain mapping list. If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com , the Create option is Create "example.com" . Click Save to save the changes to your service. Verification Navigate to the Topology page. Click on the service that you have created. In the Resources tab of the service information window, you can see the domain you have mapped to the service listed under Domain mappings . 4.7.5. Domain mapping using the Administrator perspective If you do not want to switch to the Developer perspective in the OpenShift Container Platform web console or use the Knative ( kn ) CLI or YAML files, you can use the Administator perspective of the OpenShift Container Platform web console. 4.7.5.1. Mapping a custom domain to a service by using the Administrator perspective Knative services are automatically assigned a default domain name based on your cluster configuration. For example, <service_name>-<namespace>.example.com . You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can do this by creating a DomainMapping resource for the service. You can also create multiple DomainMapping resources to map multiple domains and subdomains to a single service. If you have cluster administrator permissions, you can create a DomainMapping custom resource (CR) by using the Administrator perspective in the OpenShift Container Platform web console. Prerequisites You have logged in to the web console. You are in the Administrator perspective. You have installed the OpenShift Serverless Operator. 
You have installed Knative Serving. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a Knative service and control a custom domain that you want to map to that service. Note Your custom domain must point to the IP address of the OpenShift Container Platform cluster. Procedure Navigate to CustomResourceDefinitions and use the search box to find the DomainMapping custom resource definition (CRD). Click the DomainMapping CRD, then navigate to the Instances tab. Click Create DomainMapping . Modify the YAML for the DomainMapping CR so that it includes the following information for your instance: apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1 1 The custom domain name that you want to map to the target CR. 2 The namespace of both the DomainMapping CR and the target CR. 3 The name of the target CR to map to the custom domain. 4 The type of CR being mapped to the custom domain. Example domain mapping to a Knative service apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: custom-ksvc-domain.example.com namespace: default spec: ref: name: example-service kind: Service apiVersion: serving.knative.dev/v1 Verification Access the custom domain by using a curl request. For example: Example command USD curl custom-ksvc-domain.example.com Example output Hello OpenShift! 4.7.6. Securing a mapped service using a TLS certificate 4.7.6.1. Securing a service with a custom domain by using a TLS certificate After you have configured a custom domain for a Knative service, you can use a TLS certificate to secure the mapped service. To do this, you must create a Kubernetes TLS secret, and then update the DomainMapping CR to use the TLS secret that you have created. Note If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true , Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless. To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true . Prerequisites You configured a custom domain for a Knative service and have a working DomainMapping CR. You have a TLS certificate from your Certificate Authority provider or a self-signed certificate. You have obtained the cert and key files from your Certificate Authority provider, or a self-signed certificate. Install the OpenShift CLI ( oc ). Procedure Create a Kubernetes TLS secret: USD oc create secret tls <tls_secret_name> --cert=<path_to_certificate_file> --key=<path_to_key_file> Add the networking.internal.knative.dev/certificate-uid: <id>` label to the Kubernetes TLS secret: USD oc label secret <tls_secret_name> networking.internal.knative.dev/certificate-uid="<id>" If you are using a third-party secret provider such as cert-manager, you can configure your secret manager to label the Kubernetes TLS secret automatically. Cert-manager users can use the secret template offered to automatically generate secrets with the correct label. In this case, secret filtering is done based on the key only, but this value can carry useful information such as the certificate ID that the secret contains. Note The {cert-manager-operator} is a Technology Preview feature. 
For more information, see the Installing the {cert-manager-operator} documentation. Update the DomainMapping CR to use the TLS secret that you have created: apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> namespace: <namespace> spec: ref: name: <service_name> kind: Service apiVersion: serving.knative.dev/v1 # TLS block specifies the secret to be used tls: secretName: <tls_secret_name> Verification Verify that the DomainMapping CR status is True , and that the URL column of the output shows the mapped domain with the scheme https : USD oc get domainmapping <domain_name> Example output NAME URL READY REASON example.com https://example.com True Optional: If the service is exposed publicly, verify that it is available by running the following command: USD curl https://<domain_name> If the certificate is self-signed, skip verification by adding the -k flag to the curl command. 4.7.6.2. Improving net-kourier memory usage by using secret filtering By default, the informers implementation for the Kubernetes client-go library fetches all resources of a particular type. This can lead to a substantial overhead when many resources are available, which can cause the Knative net-kourier ingress controller to fail on large clusters due to memory leaking. However, a filtering mechanism is available for the Knative net-kourier ingress controller, which enables the controller to only fetch Knative related secrets. You can enable this mechanism by setting an environment variable to the KnativeServing custom resource (CR). Important If you enable secret filtering, all of your secrets need to be labeled with networking.internal.knative.dev/certificate-uid: "<id>" . Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. A project that you created or that you have roles and permissions for to create applications and other workloads in OpenShift Container Platform. Install the OpenShift Serverless Operator and Knative Serving. Install the OpenShift CLI ( oc ). Procedure Set the ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID variable to true for net-kourier-controller in the KnativeServing CR: Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: deployments: - env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'true' name: net-kourier-controller 4.8. Configuring high availability for Knative services 4.8.1. High availability for Knative services High availability (HA) is a standard feature of Kubernetes APIs that helps to ensure that APIs stay operational if a disruption occurs. In an HA deployment, if an active controller crashes or is deleted, another controller is readily available. This controller takes over processing of the APIs that were being serviced by the controller that is now unavailable. HA in OpenShift Serverless is available through leader election, which is enabled by default after the Knative Serving or Eventing control plane is installed. When using a leader election HA pattern, instances of controllers are already scheduled and running inside the cluster before they are required. These controller instances compete to use a shared resource, known as the leader election lock. 
The instance of the controller that has access to the leader election lock resource at any given time is called the leader. 4.8.2. High availability for Knative services High availability (HA) is available by default for the Knative Serving activator , autoscaler , autoscaler-hpa , controller , webhook , kourier-control , and kourier-gateway components, which are configured to have two replicas each by default. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeServing custom resource (CR). 4.8.2.1. Configuring high availability replicas for Knative Serving To specify three minimum replicas for the eligible deployment resources, set the value of the field spec.high-availability.replicas in the custom resource to 3 . Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. The OpenShift Serverless Operator and Knative Serving are installed on your cluster. Procedure In the OpenShift Container Platform web console Administrator perspective, navigate to OperatorHub Installed Operators . Select the knative-serving namespace. Click Knative Serving in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Serving tab. Click knative-serving , then go to the YAML tab in the knative-serving page. Modify the number of replicas in the KnativeServing CR: Example YAML apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: high-availability: replicas: 3
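After the KnativeServing CR is saved, you can verify that the change propagated by listing the replica counts of the Knative Serving deployments. This is a sketch of one possible check; the exact set of deployments depends on your OpenShift Serverless version, and the Kourier gateway components run in the knative-serving-ingress namespace.

# Each HA-eligible deployment should report the replica count set in
# spec.high-availability.replicas (3 in the example above).
oc -n knative-serving get deployments \
  -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas

# Repeat for the ingress namespace to cover the Kourier gateway:
oc -n knative-serving-ingress get deployments \
  -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas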
[ "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: \"Hello Serverless!\"", "kn service create <service-name> --image <image> --tag <tag-value>", "kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest", "Creating service 'event-display' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration \"event-display\" is waiting for a Revision to become ready. 3.857s 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL: http://event-display-default.apps-crc.testing", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-delivery namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest env: - name: RESPONSE value: \"Hello Serverless!\"", "oc apply -f <filename>", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: \"Hello Serverless!\"", "kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --target ./ --namespace test", "Service 'event-display' created in namespace 'test'.", "tree ./", "./ └── test └── ksvc └── event-display.yaml 2 directories, 1 file", "cat test/ksvc/event-display.yaml", "apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: event-display namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest name: \"\" resources: {} status: {}", "kn service describe event-display --target ./ --namespace test", "Name: event-display Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON", "kn service create -f test/ksvc/event-display.yaml", "Creating service 'event-display' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s 0.168s Configuration \"event-display\" is waiting for a Revision to become ready. 23.377s 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. 
Service 'event-display' created to latest revision 'event-display-00001' is available at URL: http://event-display-test.apps.example.com", "oc get ksvc <service_name>", "NAME URL LATESTCREATED LATESTREADY READY REASON event-delivery http://event-delivery-default.example.com event-delivery-4wsd2 event-delivery-4wsd2 True", "curl http://event-delivery-default.example.com", "curl https://event-delivery-default.example.com", "Hello Serverless!", "curl https://event-delivery-default.example.com --insecure", "Hello Serverless!", "curl https://event-delivery-default.example.com --cacert <file>", "Hello Serverless!", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/min-scale: \"0\"", "kn service create <service_name> --image <image_uri> --scale-min <integer>", "kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-min 2", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/max-scale: \"10\"", "kn service create <service_name> --image <image_uri> --scale-max <integer>", "kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-max 10", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target: \"200\"", "kn service create <service_name> --image <image_uri> --concurrency-target <integer>", "kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-target 50", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: spec: containerConcurrency: 50", "kn service create <service_name> --image <image_uri> --concurrency-limit <integer>", "kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-limit 50", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target-utilization-percentage: \"70\"", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: enable-scale-to-zero: \"false\" 1", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: scale-to-zero-grace-period: \"30s\" 1", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: ks namespace: knative-serving spec: high-availability: replicas: 2 deployments: - name: net-kourier-controller readinessProbes: 1 - container: controller timeoutSeconds: 10 - name: webhook resources: - container: webhook requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi replicas: 3 labels: example-label: label annotations: example-annotation: annotation nodeSelector: disktype: hdd", "apiVersion: serving.knative.dev/v1 kind: Service spec: template: spec: containers: - name: first-container 1 image: gcr.io/knative-samples/helloworld-go ports: - containerPort: 8080 2 - name: second-container 3 image: gcr.io/knative-samples/helloworld-java", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: 
config: features: kubernetes.podspec-volumes-emptydir: enabled", "spec: config: features: \"kubernetes.podspec-persistent-volume-claim\": enabled \"kubernetes.podspec-persistent-volume-write\": enabled", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: example-pv-claim namespace: my-ns spec: accessModes: - ReadWriteMany storageClassName: ocs-storagecluster-cephfs resources: requests: storage: 1Gi", "apiVersion: serving.knative.dev/v1 kind: Service metadata: namespace: my-ns spec: template: spec: containers: volumeMounts: 1 - mountPath: /data name: mydata readOnly: false volumes: - name: mydata persistentVolumeClaim: 2 claimName: example-pv-claim readOnly: false 3", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-init-containers: enabled", "oc -n knative-serving create secret generic custom-secret --from-file=<secret_name>.crt=<path_to_certificate>", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: controller-custom-certs: name: custom-secret type: Secret", "spec: config: network: internal-encryption: \"true\"", "oc delete pod -n knative-serving --selector app=activator", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: example-namespace spec: podSelector: ingress: []", "oc label namespace knative-serving knative.openshift.io/system-namespace=true", "oc label namespace knative-serving-ingress knative.openshift.io/system-namespace=true", "oc label namespace knative-eventing knative.openshift.io/system-namespace=true", "oc label namespace knative-kafka knative.openshift.io/system-namespace=true", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: <network_policy_name> 1 namespace: <namespace> 2 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/system-namespace: \"true\" podSelector: {} policyTypes: - Ingress", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - latestRevision: true percent: 100 status: traffic: - percent: 100 revisionName: example-service", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service percent: 100 - tag: latest latestRevision: true percent: 0", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service-1 percent: 50 - tag: candidate revisionName: example-service-2 percent: 50 - tag: latest latestRevision: true percent: 0", "kn service update <service_name> --traffic <revision>=<percentage>", "kn service update example-service --traffic @latest=20,stable=80", "kn service update example-service --traffic @latest=10,stable=60", "kn service update <service_name> --tag @latest=example-tag", "kn service update <service_name> --untag example-tag", "oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'", "oc get ksvc example-service -o=jsonpath='{.status.latestCreatedRevisionName}'", "example-service-00001", "spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic goes to this revision", "oc get ksvc <service_name>", "oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'", "spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic is still being 
routed to the first revision - revisionName: <second_revision_name> percent: 0 # No traffic is routed to the second revision tag: v2 # A named route", "oc get ksvc <service_name> --output jsonpath=\"{.status.traffic[*].url}\"", "spec: traffic: - revisionName: <first_revision_name> percent: 50 - revisionName: <second_revision_name> percent: 50 tag: v2", "spec: traffic: - revisionName: <first_revision_name> percent: 0 - revisionName: <second_revision_name> percent: 100 tag: v2", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> labels: <label_name>: <label_value> annotations: <annotation_name>: <annotation_value>", "kn service create <service_name> --image=<image> --annotation <annotation_name>=<annotation_value> --label <label_value>=<label_value>", "oc get routes.route.openshift.io -l serving.knative.openshift.io/ingressName=<service_name> \\ 1 -l serving.knative.openshift.io/ingressNamespace=<service_namespace> \\ 2 -n knative-serving-ingress -o yaml | grep -e \"<label_name>: \\\"<label_value>\\\"\" -e \"<annotation_name>: <annotation_value>\" 3", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> annotations: serving.knative.openshift.io/disableRoute: \"true\" spec: template: spec: containers: - image: <image>", "oc apply -f <filename>", "kn service create <service_name> --image=gcr.io/knative-samples/helloworld-go --annotation serving.knative.openshift.io/disableRoute=true", "USD oc get routes.route.openshift.io -l serving.knative.openshift.io/ingressName=USDKSERVICE_NAME -l serving.knative.openshift.io/ingressNamespace=USDKSERVICE_NAMESPACE -n knative-serving-ingress", "No resources found in knative-serving-ingress namespace.", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 600s 1 name: <route_name> 2 namespace: knative-serving-ingress 3 spec: host: <service_host> 4 port: targetPort: http2 to: kind: Service name: kourier weight: 100 tls: insecureEdgeTerminationPolicy: Allow termination: edge 5 key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] 
-----END CERTIFICATE---- wildcardPolicy: None", "oc apply -f <filename>", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: network: httpProtocol: \"redirected\"", "spec: config: network: default-external-scheme: \"https\"", "spec: config: network: default-external-scheme: \"http\"", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example namespace: default annotations: networking.knative.dev/http-option: \"redirected\" spec:", "oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local", "oc get ksvc", "NAME URL LATESTCREATED LATESTREADY READY REASON hello http://hello.default.svc.cluster.local hello-tx2g7 hello-tx2g7 True", "export san=\"knative\"", "openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=Example/CN=Example' -keyout ca.key -out ca.crt", "openssl req -out tls.csr -newkey rsa:2048 -nodes -keyout tls.key -subj \"/CN=Example/O=Example\" -addext \"subjectAltName = DNS:USDsan\"", "openssl x509 -req -extfile <(printf \"subjectAltName=DNS:USDsan\") -days 365 -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt", "oc create -n knative-serving-ingress secret tls server-certs --key=tls.key --cert=tls.crt --dry-run=client -o yaml | oc apply -f -", "spec: config: kourier: cluster-cert-secret: server-certs", "spec: ingress: kourier: service-type: ClusterIP", "spec: ingress: kourier: service-type: LoadBalancer", "oc annotate knativeserving <your_knative_CR> -n knative-serving serverless.openshift.io/default-enable-http2=true", "oc get svc -n knative-serving-ingress kourier -o jsonpath=\"{.spec.ports[0].appProtocol}\"", "h2c", "import \"google.golang.org/grpc\" grpc.Dial( YOUR_URL, 1 grpc.WithTransportCredentials(insecure.NewCredentials())), 2 )", "spec: ingress: kourier: service-type: LoadBalancer", "oc -n knative-serving-ingress get svc kourier", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kourier LoadBalancer 172.30.51.103 a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com 80:31380/TCP,443:31390/TCP 67m", "curl -H \"Host: hello-default.example.com\" a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com", "Hello Serverless!", "import \"google.golang.org/grpc\" grpc.Dial( \"a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com:80\", grpc.WithAuthority(\"hello-default.example.com:80\"), grpc.WithInsecure(), )", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 1 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" 2", "oc apply -f <filename>", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: <namespace> spec: jwtRules: - issuer: [email protected] jwksUri: https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json", "oc apply -f <filename>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allowlist-by-paths namespace: <namespace> spec: action: ALLOW rules: - to: - operation: paths: - /metrics 1 - /healthz 2", "oc apply -f <filename>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: require-jwt namespace: <namespace> spec: action: ALLOW rules: - from: - source: requestPrincipals: [\"[email protected]/[email protected]\"]", "oc apply -f <filename>", "curl http://hello-example-1-default.apps.mycluster.example.com/", "RBAC: access denied", "TOKEN=USD(curl 
https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/demo.jwt -s) && echo \"USDTOKEN\" | cut -d '.' -f2 - | base64 --decode -", "curl -H \"Authorization: Bearer USDTOKEN\" http://hello-example-1-default.apps.example.com", "Hello OpenShift!", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 1 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" 2", "oc apply -f <filename>", "apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: default namespace: <namespace> spec: origins: - jwt: issuer: [email protected] jwksUri: \"https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/jwks.json\" triggerRules: - excludedPaths: - prefix: /metrics 1 - prefix: /healthz 2 principalBinding: USE_ORIGIN", "oc apply -f <filename>", "curl http://hello-example-default.apps.mycluster.example.com/", "Origin authentication failed.", "TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/demo.jwt -s) && echo \"USDTOKEN\" | cut -d '.' -f2 - | base64 --decode -", "curl http://hello-example-default.apps.mycluster.example.com/ -H \"Authorization: Bearer USDTOKEN\"", "Hello OpenShift!", "apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1", "apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: example-domain namespace: default spec: ref: name: example-service kind: Service apiVersion: serving.knative.dev/v1", "apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: example-domain namespace: default spec: ref: name: example-route kind: Route apiVersion: serving.knative.dev/v1", "oc apply -f <filename>", "kn domain create <domain_mapping_name> --ref <target_name>", "kn domain create example-domain-map --ref example-service", "kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>", "kn domain create example-domain-map --ref ksvc:example-service:example-namespace", "kn domain create <domain_mapping_name> --ref <kroute:route_name>", "kn domain create example-domain-map --ref kroute:example-route", "apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1", "apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: custom-ksvc-domain.example.com namespace: default spec: ref: name: example-service kind: Service apiVersion: serving.knative.dev/v1", "curl custom-ksvc-domain.example.com", "Hello OpenShift!", "oc create secret tls <tls_secret_name> --cert=<path_to_certificate_file> --key=<path_to_key_file>", "oc label secret <tls_secret_name> networking.internal.knative.dev/certificate-uid=\"<id>\"", "apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> namespace: <namespace> spec: ref: name: <service_name> kind: Service apiVersion: serving.knative.dev/v1 TLS block specifies the secret to be used tls: secretName: <tls_secret_name>", "oc get domainmapping <domain_name>", "NAME URL READY REASON example.com https://example.com True", "curl https://<domain_name>", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving 
spec: deployments: - env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'true' name: net-kourier-controller", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: high-availability: replicas: 3" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/serverless/serving
Chapter 7. Known issues
Chapter 7. Known issues This section describes the known issues in Red Hat OpenShift Data Foundation 4.15. 7.1. Disaster recovery Creating an application namespace for the managed clusters Application namespace needs to exist on RHACM managed clusters for disaster recovery (DR) related pre-deployment actions and hence is pre-created when an application is deployed at the RHACM hub cluster. However, if an application is deleted at the hub cluster and its corresponding namespace is deleted on the managed clusters, they reappear on the managed cluster. Workaround: openshift-dr maintains a namespace manifestwork resource in the managed cluster namespace at the RHACM hub. These resources need to be deleted after the application deletion. For example, as a cluster administrator, execute the following command on the hub cluster: ( BZ#2059669 ) ceph df reports an invalid MAX AVAIL value when the cluster is in stretch mode When a crush rule for a Red Hat Ceph Storage cluster has multiple "take" steps, the ceph df report shows the wrong maximum available size for the map. The issue will be fixed in an upcoming release. ( BZ#2100920 ) Both the DRPCs protect all the persistent volume claims created on the same namespace The namespaces that host multiple disaster recovery (DR) protected workloads, protect all the persistent volume claims (PVCs) within the namespace for each DRPlacementControl resource in the same namespace on the hub cluster that does not specify and isolate PVCs based on the workload using its spec.pvcSelector field. This results in PVCs, that match the DRPlacementControl spec.pvcSelector across multiple workloads. Or, if the selector is missing across all workloads, replication management to potentially manage each PVC multiple times and cause data corruption or invalid operations based on individual DRPlacementControl actions. Workaround: Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line. Result: PVCs are no longer managed by multiple DRPlacementControl resources and do not cause any operation and data inconsistencies. ( BZ#2128860 ) MongoDB pod is in CrashLoopBackoff because of permission errors reading data in cephrbd volume The OpenShift projects across different managed clusters have different security context constraints (SCC), which specifically differ in the specified UID range and/or FSGroups . This leads to certain workload pods and containers failing to start post failover or relocate operations within these projects, due to filesystem access errors in their logs. Workaround: Ensure workload projects are created on all managed clusters with the same project-level SCC labels, allowing them to use the same filesystem context when failed over or relocated. Pods will no longer fail post-DR actions on filesystem-related access errors. ( BZ#2081855 ) Disaster recovery workloads remain stuck when deleted When deleting a workload from a cluster, the corresponding pods might not terminate with events such as FailedKillPod . This might cause delay or failure in garbage collecting dependent DR resources such as the PVC , VolumeReplication , and VolumeReplicationGroup . 
It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected. Workaround: Reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination and subsequently related DR API resources are also garbage collected. ( BZ#2159791 ) When DRPolicy is applied to multiple applications under same namespace, volume replication group is not created When a DRPlacementControl (DRPC) is created for applications that are co-located with other applications in the namespace, the DRPC has no label selector set for the applications. If any subsequent changes are made to the label selector, the validating admission webhook in the OpenShift Data Foundation Hub controller rejects the changes. Workaround: Until the admission webhook is changed to allow such changes, the DRPC validatingwebhookconfigurations can be patched to remove the webhook: ( BZ#2210762 ) Application failover hangs in FailingOver state when the managed clusters are on different versions of OpenShift Container Platform and OpenShift Data Foundation Disaster Recovery solution with OpenShift Data Foundation protects and restores persistent volume claim (PVC) data in addition to the persistent volume (PV) data. If the primary cluster is on an older OpenShift Data Foundation version and the target cluster is updated to 4.15 then the failover will be stuck as the S3 store will not have the PVC data. Workaround: When upgrading the Disaster Recovery clusters, the primary cluster must be upgraded first and then the post-upgrade steps must be run. ( BZ#2215462 ) Failover of apps from c1 to c2 cluster hang in FailingOver The failover action is not disabled by Ramen when data is not uploaded to the s3 store due to s3 store misconfiguration. This means the cluster data is not available on the failover cluster during the failover. Therefore, failover cannot be completed. Workaround: Inspect the Ramen logs after initial deployment to ensure there are no s3 configuration errors reported. ( BZ#2248723 ) Potential risk of data loss after hub recovery A potential data loss risk exists following hub recovery due to an eviction routine designed to clean up orphaned resources. This routine identifies and marks AppliedManifestWorks instances lacking corresponding ManifestWorks for collection. A hardcoded grace period of one hour is provided. After this period elapses, any resources associated with the AppliedManifestWork become subject to garbage collection. If the hub cluster fails to regenerate corresponding ManifestWorks within the initial one hour window, data loss could occur. This highlights the importance of promptly addressing any issues that might prevent the recreation of ManifestWorks post-hub recovery to minimize the risk of data loss. Regional DR Cephfs based application failover show warning about subscription After the application is failed over or relocated, the hub subscriptions show up errors stating, "Some resources failed to deploy. Use View status YAML link to view the details." This is because the application persistent volume claims (PVCs) that use CephFS as the backing storage provisioner, deployed using Red Hat Advanced Cluster Management for Kubernetes (RHACM) subscriptions, and are DR protected are owned by the respective DR controllers. Workaround: There are no workarounds to rectify the errors in the subscription status. 
However, the subscription resources that failed to deploy can be checked to make sure they are PVCs. This ensures that the other resources do not have problems. If the only resources in the subscription that fail to deploy are the ones that are DR protected, the error can be ignored. ( BZ-2264445 ) Disabled PeerReady flag prevents changing the action to Failover The DR controller executes full reconciliation as and when needed. When a cluster becomes inaccessible, the DR controller performs a sanity check. If the workload is already relocated, this sanity check causes the PeerReady flag associated with the workload to be disabled, and the sanity check does not complete due to the cluster being offline. As a result, the disabled PeerReady flag prevents you from changing the action to Failover. Workaround: Use the command-line interface to change the DR action to Failover despite the disabled PeerReady flag. ( BZ-2264765 ) Ceph becomes inaccessible and IO is paused when connection is lost between the two data centers in stretch cluster When two data centers lose connection with each other but are still connected to the Arbiter node, there is a flaw in the election logic that causes an infinite election between the monitors. As a result, the monitors are unable to elect a leader and the Ceph cluster becomes unavailable. Also, IO is paused during the connection loss. Workaround: Shut down the monitors in one of the data centers where monitors are out of quorum (you can find this by running ceph -s command) and reset the connection scores of the remaining monitors. As a result, monitors can form a quorum and Ceph becomes available again and IOs resume. ( Partner BZ#2265992 ) Cleanup and data synchronization for ApplicationSet workloads remain stuck after older primary managed cluster is recovered post the failover ApplicationSet based workload deployments to the managed clusters are not garbage collected in cases when the hub cluster fails. It is recovered to a standby hub cluster while the workload has been failed over to a surviving managed cluster. The cluster that the workload failed over from, rejoins the new recovered standby hub. ApplicationSets that are disaster recovery (DR) protected and with a regional DRPolicy starts firing the VolumeSynchronizationDelay alert. Further such DR protected workloads cannot be failed over to the peer cluster or relocated to the peer cluster as data is out of sync between the two clusters. For a workaround, see the Troubleshooting section for Regional-DR in Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads. ( BZ#2268594 ) 7.2. Multicloud Object Gateway Multicloud Object Gateway instance fails to finish initialization Due to a race in timing between the pod code run and OpenShift loading the Certificate Authority (CA) bundle into the pod, the pod is unable to communicate with the cloud storage service. As a result, default backing store cannot be created. Workaround: Restart the Multicloud Object Gateway (MCG) operator pod: With the workaround the backing store is reconciled and works. ( BZ#2269379 ) and ( BZ#2268429 ) 7.3. Ceph Poor performance of the stretch clusters on CephFS Workloads with many small metadata operations might exhibit poor performance because of the arbitrary placement of metadata server (MDS) on multi-site Data Foundation clusters. 
( BZ#1982116 ) SELinux relabelling issue with a very high number of files When attaching volumes to pods in Red Hat OpenShift Container Platform, the pods sometimes do not start or take an excessive amount of time to start. This behavior is generic and it is tied to how SELinux relabelling is handled by the Kubelet. This issue is observed with any filesystem based volumes having very high file counts. In OpenShift Data Foundation, the issue is seen when using CephFS based volumes with a very high number of files. There are different ways to workaround this issue. Depending on your business needs you can choose one of the workarounds from the knowledgebase solution https://access.redhat.com/solutions/6221251 . ( Jira#3327 ) Ceph reports no active mgr after workload deployment After workload deployment, Ceph manager loses connectivity to MONs or is unable to respond to its liveness probe. This causes the OpenShift Data Foundation cluster status to report that there is "no active mgr". This causes multiple operations that use the Ceph manager for request processing to fail. For example, volume provisioning, creating CephFS snapshots, and others. To check the status of the OpenShift Data Foundation cluster, use the command oc get cephcluster -n openshift-storage . In the status output, the status.ceph.details.MGR_DOWN field will have the message "no active mgr" if your cluster has this issue. Workaround: Restart the Ceph manager pods using the following commands: After running these commands, the OpenShift Data Foundation cluster status reports a healthy cluster, with no warnings or errors regarding MGR_DOWN . ( BZ#2244873 ) CephBlockPool creation fails when custom deviceClass is used in StorageCluster Due to a known issue, CephBlockPool creation fails when custom deviceClass is used in StorageCluster. ( BZ#2248487 ) 7.4. CSI Driver Automatic flattening of snapshots is not working When there is a single common parent RBD PVC, if volume snapshot, restore, and delete snapshot are performed in a sequence more than 450 times, it is further not possible to take volume snapshot or clone of the common parent RBD PVC. To workaround this issue, instead of performing volume snapshot, restore, and delete snapshot in a sequence, you can use PVC to PVC clone to completely avoid this issue. If you hit this issue, contact customer support to perform manual flattening of the final restore PVCs to continue to take volume snapshot or clone of the common parent PVC again. ( BZ#2232163 ) 7.5. OpenShift Data Foundation console Missing NodeStageVolume RPC call blocks new pods from going into Running state NodeStageVolume RPC call is not being issued blocking some pods from going into Running state. The new pods are stuck in Pending forever. To workaround this issue, scale down all the affected pods at once or do a node reboot. After applying the workaround, all pods should go into Running state. ( BZ#2244353 ) 7.6. OCS operator Incorrect unit for the ceph_mds_mem_rss metric in the graph When you search for the ceph_mds_mem_rss metrics in the OpenShift user interface (UI), the graphs show the y-axis in Megabytes (MB), as Ceph returns ceph_mds_mem_rss metric in Kilobytes (KB). This can cause confusion while comparing the results for the MDSCacheUsageHigh alert. Workaround: Use ceph_mds_mem_rss * 1000 while searching this metric in the OpenShift UI to see the y-axis of the graph in GB. This makes it easier to compare the results shown in the MDSCacheUsageHigh alert. 
( BZ#2261881 ) Increasing MDS memory is erasing CPU values when pods are in CLBO state When the metadata server (MDS) memory is increased while the MDS pods are in a crash loop back off (CLBO) state, CPU request or limit for the MDS pods is removed. As a result, the CPU request or the limit that is set for the MDS changes. Workaround: Run the oc patch command to adjust the CPU limits. For example: ( BZ#2265563 )
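For the MDS CPU workaround above, you can confirm that the patched values were accepted by reading them back from the StorageCluster spec. This is an illustrative check only; it assumes the default StorageCluster name ocs-storagecluster in the openshift-storage namespace, as used in the patch command.

# Print the MDS resource requests and limits recorded in the StorageCluster;
# the output should show the CPU values supplied in the patch.
oc -n openshift-storage get storagecluster ocs-storagecluster \
  -o jsonpath='{.spec.resources.mds}'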
[ "oc delete manifestwork -n <managedCluster namespace> <drPlacementControl name>-<namespace>-ns-mw", "oc patch validatingwebhookconfigurations vdrplacementcontrol.kb.io-lq2kz --type=json --patch='[{\"op\": \"remove\", \"path\": \"/webhooks\"}]'", "oc get drpc -o yaml", "oc delete pod noobaa-operator-<ID>", "oc scale deployment -n openshift-storage rook-ceph-mgr-a --replicas=0", "oc scale deployment -n openshift-storage rook-ceph-mgr-a --replicas=1", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"3\"}, \"requests\": {\"cpu\": \"3\"}}}}}'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/4.15_release_notes/known-issues
10.3. Querying Cluster Property Settings
10.3. Querying Cluster Property Settings In most cases, when you use the pcs command to display values of the various cluster components, you can use pcs list or pcs show interchangeably. In the following examples, pcs list is the format used to display an entire list of all settings for more than one property, while pcs show is the format used to display the values of a specific property. To display the values of the property settings that have been set for the cluster, use the following pcs command. To display all of the values of the property settings for the cluster, including the default values of the property settings that have not been explicitly set, use the following command. To display the current value of a specific cluster property, use the following command. For example, to display the current value of the cluster-infrastructure property, execute the following command: For informational purposes, you can display a list of all of the default values for the properties, whether they have been set to a value other than the default or not, by using the following command.
[ "pcs property list", "pcs property list --all", "pcs property show property", "pcs property show cluster-infrastructure Cluster Properties: cluster-infrastructure: cman", "pcs property [list|show] --defaults" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-queryingclusterprops-HAAR
7.2. Order Constraints
7.2. Order Constraints Order constraints determine the order in which the resources run. Use the following command to configure an order constraint. Table 7.3, "Properties of an Order Constraint", summarizes the properties and options for configuring order constraints. Table 7.3. Properties of an Order Constraint Field Description resource_id The name of a resource on which an action is performed. action The action to perform on a resource. Possible values of the action property are as follows: * start - Start the resource. * stop - Stop the resource. * promote - Promote the resource from a slave resource to a master resource. * demote - Demote the resource from a master resource to a slave resource. If no action is specified, the default action is start . For information on master and slave resources, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" . kind option How to enforce the constraint. The possible values of the kind option are as follows: * Optional - Only applies if both resources are executing the specified action. For information on optional ordering, see Section 7.2.2, "Advisory Ordering" . * Mandatory - Always (default value). If the first resource you specified is stopping or cannot be started, the second resource you specified must be stopped. For information on mandatory ordering, see Section 7.2.1, "Mandatory Ordering" . * Serialize - Ensure that no two stop/start actions occur concurrently for a set of resources. symmetrical option If true, which is the default, stop the resources in the reverse order. Default value: true 7.2.1. Mandatory Ordering A mandatory constraint indicates that the second resource you specify cannot run without the first resource you specify being active. This is the default value of the kind option. Leaving the default value ensures that the second resource you specify will react when the first resource you specify changes state. If the first resource you specified was running and is stopped, the second resource you specified will also be stopped (if it is running). If the first resource you specified was not running and cannot be started, the second resource you specified will be stopped (if it is running). If the first resource you specified is (re)started while the second resource you specified is running, the second resource you specified will be stopped and restarted. Note, however, that the cluster reacts to each state change. If the first resource is restarted and is in a started state again before the second resource initiates a stop operation, the second resource will not need to be restarted. 7.2.2. Advisory Ordering When the kind=Optional option is specified for an order constraint, the constraint is considered optional and only applies if both resources are executing the specified actions. Any change in state by the first resource you specify will have no effect on the second resource you specify. The following command configures an advisory ordering constraint for the resources named VirtualIP and dummy_resource . 7.2.3. Ordered Resource Sets A common situation is for an administrator to create a chain of ordered resources, where, for example, resource A starts before resource B, which starts before resource C. If your configuration requires that you create a set of resources that is colocated and started in order, you can configure a resource group that contains those resources, as described in Section 6.5, "Resource Groups" . 
There are some situations, however, where configuring the resources that need to start in a specified order as a resource group is not appropriate: You may need to configure resources to start in order and the resources are not necessarily colocated. You may have a resource C that must start after either resource A or B has started, but there is no relationship between A and B. You may have resources C and D that must start after both resources A and B have started, but there is no relationship between A and B or between C and D. In these situations, you can create an order constraint on a set or sets of resources with the pcs constraint order set command. You can set the following options for a set of resources with the pcs constraint order set command. sequential , which can be set to true or false to indicate whether the set of resources must be ordered relative to each other. Setting sequential to false allows a set to be ordered relative to other sets in the ordering constraint, without its members being ordered relative to each other. Therefore, this option makes sense only if multiple sets are listed in the constraint; otherwise, the constraint has no effect. require-all , which can be set to true or false to indicate whether all of the resources in the set must be active before continuing. Setting require-all to false means that only one resource in the set needs to be started before continuing on to the next set. Setting require-all to false has no effect unless used in conjunction with unordered sets, which are sets for which sequential is set to false . action , which can be set to start , promote , demote , or stop , as described in Table 7.3, "Properties of an Order Constraint" . You can set the following constraint options for a set of resources following the setoptions parameter of the pcs constraint order set command. id , to provide a name for the constraint you are defining. score , to indicate the degree of preference for this constraint. For information on this option, see Table 7.4, "Properties of a Colocation Constraint" . If you have three resources named D1 , D2 , and D3 , the following command configures them as an ordered resource set. 7.2.4. Removing Resources From Ordering Constraints Use the following command to remove resources from any ordering constraint.
[ "pcs constraint order [ action ] resource_id then [ action ] resource_id [ options ]", "pcs constraint order VirtualIP then dummy_resource kind=Optional", "pcs constraint order set resource1 resource2 [ resourceN ]... [ options ] [set resourceX resourceY ... [ options ]] [setoptions [ constraint_options ]]", "pcs constraint order set D1 D2 D3", "pcs constraint order remove resource1 [ resourceN ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-orderconstraints-HAAR
Chapter 1. Administering JBoss EAP
Chapter 1. Administering JBoss EAP 1.1. Downloading and installing JBoss EAP The compressed file option is a quick, platform-independent way to download and install JBoss EAP. 1.1.1. Downloading JBoss EAP You must download the JBoss EAP compressed file before you can install JBoss EAP. Prerequisites Confirm that your system meets the JBoss EAP Supported Configurations . Install the latest updates and errata patches. Set read and write access for the installation directory. Install your desired Java Development Kit (JDK). Optional: For Windows Server, set the JAVA_HOME and PATH environment variables. Procedure Log in to the Red Hat Customer Portal. Click Downloads . In the Product Downloads list, click Red Hat JBoss Enterprise Application Platform . In the Version drop-down menu, select 7.4 . Find Red Hat JBoss Enterprise Application Platform 7.4.0 in the list and click the Download link. The compressed file is downloaded to your system. Additional resources For access to Red Hat product downloads, visit the Red Hat Customer Portal . 1.1.2. Installing JBoss EAP You can install the JBoss EAP compressed file by extracting the package contents to your desired file location. Prerequisites Download JBoss EAP. Confirm that your system meets the JBoss EAP Supported Configurations . Install the latest updates and errata patches. Set read and write access for the installation directory. Install your desired Java Development Kit (JDK). For Windows Server, set the JAVA_HOME and PATH environment variables. Procedure Move the compressed file to the server and location where you want JBoss EAP to be installed. Extract the compressed file. On Linux, use the following command: On Windows Server, right-click the compressed file and select Extract All . The directory created by extracting the compressed file is the top-level directory for the JBoss EAP installation. This directory is referred to as EAP_HOME . Additional resources For more information about installing JBoss EAP using the graphical installer or RPM package installation methods, see the Installation Guide . 1.2. Starting and stopping JBoss EAP The method for starting JBoss EAP depends on whether you are running JBoss EAP as a standalone server or on servers in a managed domain. The method for stopping JBoss EAP depends on whether you are running an interactive or background instance of JBoss EAP. 1.2.1. Starting JBoss EAP as a standalone server You can run JBoss EAP as a standalone server to manage a single instance of JBoss EAP. JBoss EAP is supported on the following platforms: Red Hat Enterprise Linux Windows Server Oracle Solaris The server starts in a suspended state and does not accept requests until all required services start. After required services start, the server transitions into a normal running state and can start accepting requests. This startup script uses the EAP_HOME /bin/standalone.conf file, or standalone.conf.bat for Windows Server, to set default preferences, such as JVM options. You can customize the settings in this file. Note To see a list of startup script arguments in your terminal, use the --help argument. JBoss EAP uses the standalone.xml configuration file by default, but you can start it using a different one. Prerequisites Install JBoss EAP. Procedure Open a terminal. Start JBoss EAP as a standalone server by using the following script: For Windows Server, use the EAP_HOME \bin\standalone.bat script. 
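For example, to list the startup script arguments mentioned in the note above, you could run the following command (shown for Linux; the .bat script accepts the same argument on Windows Server):
EAP_HOME /bin/standalone.sh --help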
Additional resources For more information about available standalone configuration files and how to use them, see the Standalone Server Configuration Files section. For a complete listing of all available startup script arguments and their purposes, see the Server Runtime Arguments section. 1.2.2. Starting JBoss EAP for servers in a managed domain You can run JBoss EAP in a managed domain operating mode to manage several JBoss EAP instances using a single domain controller. JBoss EAP is supported on the following platforms: Red Hat Enterprise Linux Windows Server Oracle Solaris Servers start in a suspended state and do not accept requests until all required services start. After required services start, the servers transition into a normal running state and can start accepting requests. You must start the domain controller before the servers in any of the server groups in the domain. Prerequisites Install JBoss EAP. Procedure Open a terminal. Start the domain controller first and then start each associated host controller by using the following script: For Windows Server, use the EAP_HOME \bin\domain.bat script. This startup script uses the EAP_HOME /bin/domain.conf file, or domain.conf.bat for Windows Server, to set default preferences, such as JVM options. You can customize the settings in this file. JBoss EAP uses the host.xml host configuration file by default, but you can start it using a different configuration file. When setting up a managed domain, you must pass additional arguments into the startup script. Additional resources For more information about managed domain configuration files, see the Managed Domain Configuration Files section. For a complete listing of all available startup script arguments and their purposes, use the --help argument or see the Server Runtime Arguments section. 1.2.3. Stopping an interactive instance of JBoss EAP You can stop an interactive instance of a standalone server or a domain controller from the terminal where you started it. Prerequisites You started an instance of JBoss EAP. Procedure Press Ctrl+C in the terminal where you started JBoss EAP. 1.2.4. Stopping a background instance of JBoss EAP You can connect to the management CLI to shut down a running instance of a standalone server or servers in a managed domain. Prerequisites You have an instance of JBoss EAP running in a terminal. Procedure Start the management CLI by using the following script: Issue the shutdown command: When running an instance of JBoss EAP on servers in a managed domain, you must specify the host name to shut down by using the --host argument with the shutdown command. 1.3. JBoss EAP Management JBoss EAP uses a simplified configuration, with one configuration file per standalone server or managed domain. The default configuration for a standalone server is stored in the EAP_HOME /standalone/configuration/standalone.xml file, and the default configuration for a managed domain is stored in the EAP_HOME /domain/configuration/domain.xml file. Additionally, the default configuration for a host controller is stored in the EAP_HOME /domain/configuration/host.xml file. JBoss EAP can be configured using the command-line management CLI, web-based management console, Java API, or HTTP API. Changes made using these management interfaces persist automatically, and the XML configuration files are overwritten by the Management API. The management CLI and management console are the preferred methods, and it is not recommended to edit the XML configuration files manually. 
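To illustrate how changes made through the management CLI persist automatically, a minimal hedged sketch that adds a system property to a running standalone server from a connected CLI session (the property name and value are arbitrary examples):
/system-property=example.property:add(value=example-value)
The server writes the change to standalone.xml itself; you do not edit the file manually.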
JBoss EAP supports the modification of XML configuration for standalone servers using YAML files. For more information, see Update standalone server configuration using YAML files . Note YAML configuration is not supported for servers in a managed domain. 1.3.1. Management Users The default JBoss EAP configuration provides local authentication so that a user can access the management CLI on the local host without requiring authentication. However, you must add a management user if you want to access the management CLI remotely or use the management console, which is considered remote access even if the traffic originates on the local host. If you attempt to access the management console before adding a management user, you will receive an error message. If JBoss EAP is installed using the graphical installer, then a management user is created during the installation process. This guide covers simple user management for JBoss EAP using the add-user script, which is a utility for adding new users to the properties files for out-of-the-box authentication. For more advanced authentication and authorization options, such as LDAP or Role-Based Access Control (RBAC), see the Core Management Authentication section of the JBoss EAP Security Architecture . 1.3.1.1. Adding a Management User Run the add-user utility script and follow the prompts. Note For Windows Server, use the EAP_HOME \bin\add-user.bat script. Press ENTER to select the default option a to add a management user. This user will be added to the ManagementRealm and will be authorized to perform management operations using the management console or management CLI. The other choice, b , adds a user to the ApplicationRealm , which is used for applications and provides no particular permissions. Enter the desired username and password. You will be prompted to confirm the password. Note User names can only contain the following characters, in any number and in any order: Alphanumeric characters (a-z, A-Z, 0-9) Dashes (-), periods (.), commas (,), at sign (@) Backslash (\) Equals (=) By default, JBoss EAP allows weak passwords but will issue a warning. See the Setting Add-User Utility Password Restrictions section of the JBoss EAP Configuration Guide for details on changing this default behavior. Enter a comma-separated list of groups to which the user belongs. If you do not want the user to belong to any groups, press ENTER to leave it blank. Review the information and enter yes to confirm. Determine whether this user represents a remote JBoss EAP server instance. For a basic management user, enter no . One type of user that may need to be added to the ManagementRealm is a user representing another instance of JBoss EAP, which must be able to authenticate to join as a member of a cluster. If this is the case, then answer yes to this prompt and you will be given a hashed secret value representing the user's password, which will need to be added to a different configuration file. Users can also be created non-interactively by passing parameters to the add-user script. This approach is not recommended on shared systems, because the passwords will be visible in log and history files. For more information, see Running the Add-User Utility Non-Interactively . 1.3.1.2. Running the Add-User Utility Non-Interactively You can run the add-user script non-interactively by passing in arguments on the command line. At a minimum, the username and password must be provided. 
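For example, a minimal hedged invocation that supplies only the required username and password (both values are placeholders):
EAP_HOME /bin/add-user.sh -u 'exampleadmin' -p 'examplePassword1!'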
Warning This approach is not recommended on shared systems, because the passwords will be visible in log and history files. Create a User Belonging to Multiple Groups The following command adds a management user, mgmtuser1 , with the guest and mgmtgroup groups. Specify an Alternative Properties File By default, user and group information created using the add-user script are stored in properties files located in the server configuration directory. User information is stored in the following properties files: EAP_HOME /standalone/configuration/mgmt-users.properties EAP_HOME /domain/configuration/mgmt-users.properties Group information is stored in the following properties files: EAP_HOME /standalone/configuration/mgmt-groups.properties EAP_HOME /domain/configuration/mgmt-groups.properties These default directories and properties file names can be overridden. The following command adds a new user, specifying a different name and location for the user properties files. The new user was added to the user properties files located at /path/to /standaloneconfig/newname.properties and /path/to /domainconfig/newname.properties . Note that these files must already exist or you will see an error. For a complete listing of all available add-user arguments and their purposes, use the --help argument or see the Add-user arguments section. 1.3.2. Management Interfaces 1.3.2.1. Management CLI The management command-line interface (CLI) is a command-line administration tool for JBoss EAP. Use the management CLI to start and stop servers, deploy and undeploy applications, configure system settings, and perform other administrative tasks. Operations can be performed in batch mode, allowing multiple tasks to be run as a group. Many common terminal commands are available, such as ls , cd , and pwd . The management CLI also supports tab completion. For detailed information on using the management CLI, including commands and operations, syntax, and running in batch mode, see the JBoss EAP Management CLI Guide . Launch the Management CLI Note For Windows Server, use the EAP_HOME \bin\jboss-cli.bat script. Connect to a Running Server Or you can launch the management CLI and connect in one step by using the EAP_HOME /bin/jboss-cli.sh --connect command. Display Help Use the following command for general help. Use the --help flag on a command to receive instructions on using that specific command. For instance, to receive information on using deploy , the following command is executed. Quit the Management CLI View System Settings The following command uses the read-attribute operation to display whether the example datasource is enabled. When running in a managed domain, you must specify which profile to update by preceding the command with /profile= PROFILE_NAME . Update System Settings The following command uses the write-attribute operation to disable the example datasource. Start Servers The management CLI can also be used to start and stop servers when running in a managed domain. 1.3.2.2. Management Console The management console is a web-based administration tool for JBoss EAP. Use the management console to start and stop servers, deploy and undeploy applications, tune system settings, and make persistent modifications to the server configuration. The management console also has the ability to perform administrative tasks, with live notifications when any changes performed by the current user require the server instance to be restarted or reloaded. 
In a managed domain, server instances and server groups in the same domain can be centrally managed from the management console of the domain controller. For a JBoss EAP instance running on the local host using the default management port, the management console can be accessed through a web browser at http://localhost:9990/console/index.html . You will need to authenticate with a user that has permissions to access the management console. The management console provides the following tabs for navigating and managing your JBoss EAP standalone server or managed domain. Home Learn how to accomplish several common configuration and management tasks. Take a tour to become familiar with the JBoss EAP management console. Deployments Add, remove, and enable deployments. In a managed domain, assign deployments to server groups. Configuration Configure available subsystems, which provide capabilities such as web services, messaging, or high availability. In a managed domain, manage the profiles that contain different subsystem configurations. Runtime View runtime information, such as server status, JVM usage, and server logs. In a managed domain, manage your hosts, server groups, and servers. Patching Apply patches to your JBoss EAP instances. Access Control Assign roles to users and groups when using Role-Based Access Control. 1.3.3. Configuration Files 1.3.3.1. Standalone Server Configuration Files The standalone configuration files are located in the EAP_HOME /standalone/configuration/ directory. A separate file exists for each of the five predefined profiles ( default , ha , full , full-ha , load-balancer ). Table 1.1. Standalone Configuration Files Configuration File Purpose standalone.xml This standalone configuration file is the default configuration that is used when you start your standalone server. It contains all information about the server, including subsystems, networking, deployments, socket bindings, and other configurable details. It does not provide the subsystems necessary for messaging or high availability. standalone-ha.xml This standalone configuration file includes all of the default subsystems and adds the modcluster and jgroups subsystems for high availability. It does not provide the subsystems necessary for messaging. standalone-full.xml This standalone configuration file includes all of the default subsystems and adds the messaging-activemq and iiop-openjdk subsystems. It does not provide the subsystems necessary for high availability. standalone-full-ha.xml This standalone configuration file includes support for every possible subsystem, including those for messaging and high availability. standalone-load-balancer.xml This standalone configuration file includes the minimum subsystems necessary to use the built-in mod_cluster front-end load balancer to load balance other JBoss EAP instances. By default, starting JBoss EAP as a standalone server uses the standalone.xml file. To start JBoss EAP with a different configuration, use the --server-config argument. For example, 1.3.3.1.1. Update standalone server configuration using YAML files Using YAML files to configure your standalone server externalizes the customization process and improves the rate of server upgrades. When using this feature, the server starts in read-only mode. This means that changes to the configuration do not persist after the server is restarted. Note YAML configuration is not supported for servers in a managed domain. Users can modify various resources in the YAML files. 
The following resources are supported in YAML files: core-service interface socket-binding-group subsystem system-property The following resources are not supported in YAML files: extension : Adds an extension to the server. This element is not supported because it might require modules that are missing. deployment : Adds deployments to the server. This element is not supported because it requires more extensive changes in addition to configuration. deployment-overlay : Adds deployment-overlays to the server. This element is not supported because it requires more extensive changes in addition to configuration. path : Already defined when the YAML files are parsed. The YAML root node is wildfly-configuration . You can follow the model tree to modify resources. If a resource already exists (created by the XML configuration file or a YAML file), you can update it using the model tree. If the resource does not exist, you can create it using the model tree. Example YAML configuration file defining a new PostGresql datasource The above example defines a jdbc-driver called postgresql and a data-source called PostgreSQLDS . Note You cannot use the YAML configuration file to manage modules. Instead, you need to create or provision the org.postgresql.jdbc module manually or using the management CLI. 1.3.3.1.2. YAML file operations using tags You can perform several operations on YAML configuration files using tags. !undefine : undefine an attribute Undefine CONSOLE logger level YAML configuration file example !remove : remove the resource Remove embedded Artemis broker and connect to a remote broker YAML configuration file example !list-add : Add an element to a list (with an optional index) Add a RemoteTransactionPermission to a permissions list YAML configuration file example Note If an index attribute is not defined, the entry is appended to the end of the list. 1.3.3.1.3. Starting a standalone server using YAML files You can start a standalone server using YAML configuration files. Procedure Open your terminal. Use the following command to start a standalone server with YAML files: The --yaml or -y argument allows you to pass a list of YAML files. You must separate each YAML file path using a semicolon (;) for Windows Server or a colon (:) for Mac and Unix-based operating systems. You can use an absolute path, a path relative to the current execution directory, or a path relative to the standalone configuration directory. The operations are applied in the order that the files are defined and after the initial operations are defined by the XML configuration. 1.3.3.2. Managed Domain Configuration Files The managed domain configuration files are located in the EAP_HOME /domain/configuration/ directory. Table 1.2. Managed Domain Configuration Files Configuration File Purpose domain.xml This is the main configuration file for a managed domain. Only the domain master reads this file. This file contains the configurations for all of the profiles ( default , ha , full , full-ha , load-balancer ). host.xml This file includes configuration details specific to a physical host in a managed domain, such as network interfaces, socket bindings, the name of the host, and other host-specific details. The host.xml file includes all of the features of both host-master.xml and host-slave.xml , which are described below. host-master.xml This file includes only the configuration details necessary to run a server as the master domain controller. 
host-slave.xml This file includes only the configuration details necessary to run a server as a managed domain host controller. By default, starting JBoss EAP in a managed domain uses the host.xml file. To start JBoss EAP with a different configuration, use the --host-config argument. For example, 1.3.3.3. Backing Up Configuration Data In order to later restore the JBoss EAP server configuration, items in the following locations should be backed up: EAP_HOME /standalone/configuration/ Back up the entire directory to save user data, server configuration, and logging settings for standalone servers. EAP_HOME /domain/configuration/ Back up the entire directory to save user and profile data, domain and host configuration, and logging settings for managed domains. EAP_HOME /modules/ Back up any custom modules. EAP_HOME /welcome-content/ Back up any custom welcome content. EAP_HOME /bin/ Back up any custom scripts or startup configuration files. 1.3.3.4. Configuration File Snapshots To assist in the maintenance and management of the server, JBoss EAP creates a timestamped version of the original configuration file at the time of startup. Any additional configuration changes made by management operations will result in the original file being automatically backed up, and a working copy of the instance being preserved for reference and rollback. Additionally, configuration snapshots can be taken, which are point-in-time copies of the current server configuration. These snapshots can be saved and loaded by an administrator. The following examples use the standalone.xml file, but the same process applies to the domain.xml and host.xml files. Take a Snapshot Use the management CLI to take a snapshot of the current configurations. List Snapshots Use the management CLI to list all snapshots that have been taken. Delete a Snapshot Use the management CLI to delete a snapshot. Start the Server with a Snapshot The server can be started using a snapshot or an automatically-saved version of the configuration. Navigate to the EAP_HOME /standalone/configuration/standalone_xml_history directory and identify the snapshot or saved configuration file to be loaded. Start the server and point to the selected configuration file. Pass in the file path relative to the configuration directory, EAP_HOME /standalone/configuration/ . Note When running in a managed domain, use the --host-config argument instead to specify the configuration file. 1.3.3.5. Property Replacement JBoss EAP allows you to use expressions to define replaceable properties in place of literal values in the configuration. Expressions use the format USD{ PARAMETER : DEFAULT_VALUE } . If the specified parameter is set, then the parameter's value will be used. Otherwise, the default value provided will be used. The supported sources for resolving expressions are system properties, environment variables, and the vault. For deployments only, the source can be properties listed in a META-INF/jboss.properties file in the deployment archive. For deployment types that support subdeployments, the resolution is scoped to all subdeployments if the properties file is in the outer deployment, for example the EAR. If the properties file is in the subdeployment, then the resolution is scoped just to that subdeployment. The example below from the standalone.xml configuration file sets the inet-address for the public interface to 127.0.0.1 unless the jboss.bind.address parameter is set. 
<interface name="public"> <inet-address value="USD{jboss.bind.address:127.0.0.1}"/> </interface> The jboss.bind.address parameter can be set when starting EAP as a standalone server with the following command: Nested Expressions Expressions can be nested, which allows for more advanced use of expressions in place of fixed values. The format of a nested expression is like that of a normal expression, but one expression is embedded in the other, for example: Nested expressions are evaluated recursively, so the inner expression is first evaluated, then the outer expression is evaluated. Expressions may also be recursive, where an expression resolves to another expression, which is then resolved. Nested expressions are permitted anywhere that expressions are permitted, with the exception of management CLI commands. An example of where a nested expression might be used is if the password used in a datasource definition is masked. The configuration for the datasource might have the following line: <password>USD{VAULT::ds_ExampleDS::password::1}</password> The value of ds_ExampleDS could be replaced with a system property ( datasource_name ) using a nested expression. The configuration for the datasource could instead have the following line: <password>USD{VAULT::USD{datasource_name}::password::1}</password> JBoss EAP would first evaluate the expression USD{datasource_name} , then input this to the larger expression and evaluate the resulting expression. The advantage of this configuration is that the name of the datasource is abstracted from the fixed configuration. Descriptor-Based Property Replacement Application configuration, such as datasource connection parameters, typically varies between development, testing, and production environments. This variance is sometimes accommodated by build system scripts, as the Jakarta EE specification does not contain a method to externalize these configurations. With JBoss EAP, you can use descriptor-based property replacement to manage configuration externally. Descriptor-based property replacement substitutes properties based on descriptors, allowing you to remove assumptions about the environment from the application and the build chain. Environment-specific configurations can be specified in deployment descriptors rather than annotations or build system scripts. You can provide configuration in files or as parameters at the command line. There are several flags in the ee subsystem that control whether property replacement is applied. JBoss-specific descriptor replacement is controlled by the jboss-descriptor-property-replacement flag and is enabled by default. When enabled, properties can be replaced in the following deployment descriptors: jboss-ejb3.xml jboss-app.xml jboss-web.xml jboss-permissions.xml *-jms.xml *-ds.xml The following management CLI command can be used to enable or disable property replacement in JBoss-specific descriptors: Jakarta EE descriptor replacement controlled by the spec-descriptor-property-replacement flag and is disabled by default. When enabled, properties can be replaced in the following deployment descriptors: ejb-jar.xml permissions.xml persistence.xml application.xml web.xml The following management CLI command can be used to enable or disable property replacement in Jakarta EE descriptors: 1.4. Network and port configuration JBoss EAP JBoss EAP comes with interfaces, socket bindings, and IPv6 addresses to help make the configuration easier. 
Use the following detailed information about each of these network and port configurations to run JBoss EAP successfully. 1.4.1. Interfaces JBoss EAP references named interfaces throughout the configuration. You can configure JBoss EAP to reference individual interface declarations with logical names rather than requiring the full details of the interface at each use. You can also experience easier configuration in a managed domain where network interface details can vary across multiple machines. Each server instance can correspond to a logical name group. The standalone.xml , domain.xml , and host.xml files all include interface declarations. There are several preconfigured interface names, depending on which default configuration is used. The management interface can be used for all components and services that require the management layer, including the HTTP management endpoint. The public interface can be used for all application-related network communications. The unsecure interface is used for IIOP sockets in the standard configuration. The private interface is used for JGroups sockets in the standard configuration. 1.4.1.1. Default interface configurations JBoss EAP includes the following four default interfaces: <interfaces> <interface name="management"> <inet-address value="USD{jboss.bind.address.management:127.0.0.1}"/> </interface> <interface name="public"> <inet-address value="USD{jboss.bind.address:127.0.0.1}"/> </interface> <interface name="private"> <inet-address value="USD{jboss.bind.address.private:127.0.0.1}"/> </interface> <interface name="unsecure"> <inet-address value="USD{jboss.bind.address.unsecure:127.0.0.1}"/> </interface> </interfaces> By default, JBoss EAP binds these interfaces to 127.0.0.1 , but these values can be overridden at runtime by setting the appropriate property. For example, the inet-address of the public interface can be set when starting JBoss EAP as a standalone server with the following command. Alternatively, you can use the -b switch on the server start command line. Important If you modify the default network interfaces or ports that JBoss EAP uses, you must also remember to change any scripts that use the modified interfaces or ports. These include JBoss EAP service scripts, as well as remembering to specify the correct interface and port when accessing the management console or management CLI. Additional resources For more information about server start options, see Server Runtime Arguments . 1.4.1.2. Optional interface configurations Network interfaces are declared by specifying a logical name and selection criteria for the physical interface. The selection criteria can reference a wildcard address or specify a set of one or more characteristics that an interface or address must have in order to be a valid match. Interfaces can be configured using the management console or the management CLI. Below are several examples of adding and updating interfaces. The management CLI command is shown first, followed by the corresponding configuration XML. Additional resources For a listing of all available interface selection criteria, see the Interface Attributes section. 1.4.1.2.1. Interface with a NIC value You can use the following example to add a new interface with a NIC value of eth0 . <interface name="external"> <nic name="eth0"/> </interface> 1.4.1.2.2. 
Interface with several conditional values You can use the following example to add a new interface that matches any interface or address on the correct subnet if it is running, supports multicast, and is not point-to-point. <interface name="default"> <subnet-match value="192.168.0.0/16"/> <up/> <multicast/> <not> <point-to-point/> </not> </interface> 1.4.1.2.3. Updates to an interface attribute In this example, you can update the public interface's default inet-address value, keeping the jboss.bind.address property so that you can set this value at runtime. <interface name="public"> <inet-address value="USD{jboss.bind.address:192.168.0.0}"/> </interface> 1.4.1.2.4. Additional interfaces to a server in a managed domain You can add more interfaces to a server in a managed domain using the following code. <servers> <server name=" SERVER_NAME " group="main-server-group"> <interfaces> <interface name=" INTERFACE_NAME "> <inet-address value="127.0.0.1"/> </interface> </interfaces> </server> </servers> 1.4.2. Socket bindings Socket bindings and socket binding groups allow you to define network ports and their relationship to the networking interfaces required for your JBoss EAP configuration. A socket binding is a named configuration for a socket. A socket binding group is a collection of socket binding declarations that are grouped under a logical name. This allows other sections of the configuration to reference socket bindings by their logical name, rather than requiring the full details of the socket configuration at each use. The declarations for these named configurations can be found in the standalone.xml and domain.xml configuration files. A standalone server contains only one socket binding group, while a managed domain can contain multiple groups. You can create a socket binding group for each server group in the managed domain, or share a socket binding group between multiple server groups. The ports JBoss EAP uses by default depend on which socket binding groups are used and the requirements of your individual deployments. There are three types of socket bindings that can be defined in a socket binding group in the JBoss EAP configuration: Inbound Socket Bindings The socket-binding element is used to configure inbound socket bindings for the JBoss EAP server. The default JBoss EAP configurations provide several preconfigured socket-binding elements, for example, for HTTP and HTTPS traffic. Another example can be found in the Broadcast Groups section of Configuring Messaging for JBoss EAP. Remote Outbound Socket Bindings The remote-destination-outbound-socket-binding element is used to configure outbound socket bindings for destinations that are remote to the JBoss EAP server. The default JBoss EAP configurations provide an example remote destination socket binding that can be used for a mail server. Local Outbound Socket Bindings The local-destination-outbound-socket-binding element is used to configure outbound socket bindings for destinations that are local to the JBoss EAP server. This type of socket binding is not expected to be commonly used. Attributes for this element can be found in the Local Outbound Socket Binding Attributes table. Additional resources To view attributes for inbound socket bindings, refer to the Inbound Socket Binding Attributes table. To view attributes for remote outbound socket bindings, refer to the Remote Outbound Socket Binding Attributes table. 
For additional examples of remote outbound socket bindings, refer to the Using the Integrated Artemis Resource Adapter for Remote Connections section of Configuring Messaging for JBoss EAP. To view attributes for local outbound socket bindings, refer to the Local Outbound Socket Binding Attributes table. 1.4.2.1. Management ports Management ports were consolidated in JBoss EAP 7. By default, JBoss EAP 7 uses port 9990 for both native management, used by the management CLI, and HTTP management, used by the web-based management console. Port 9999 , which was used as the native management port in JBoss EAP 6, is no longer used but can still be enabled if desired. If HTTPS is enabled for the management console, then port 9993 is used by default. 1.4.2.2. Default socket bindings JBoss EAP ships with a socket binding group for each of the five predefined profiles ( default , ha , full , full-ha , load-balancer ). Important If you modify the default network interfaces or ports that JBoss EAP uses, you must also remember to change any scripts that use the modified interfaces or ports. These include JBoss EAP service scripts, as well as remembering to specify the correct interface and port when accessing the management console or management CLI. Additional resources For detailed information about the default socket bindings, such as default ports and descriptions, see the Default Socket Bindings section. 1.4.2.2.1. Standalone server When running as a standalone server, only one socket binding group is defined per configuration file. Each standalone configuration file ( standalone.xml , standalone-ha.xml , standalone-full.xml , standalone-full-ha.xml , standalone-load-balancer.xml ) defines socket bindings for the technologies used by its corresponding profile. For example, the default standalone configuration file ( standalone.xml ) specifies the below socket bindings. <socket-binding-group name="standard-sockets" default-interface="public" port-offset="USD{jboss.socket.binding.port-offset:0}"> <socket-binding name="ajp" port="USD{jboss.ajp.port:8009}"/> <socket-binding name="http" port="USD{jboss.http.port:8080}"/> <socket-binding name="https" port="USD{jboss.https.port:8443}"/> <socket-binding name="management-http" interface="management" port="USD{jboss.management.http.port:9990}"/> <socket-binding name="management-https" interface="management" port="USD{jboss.management.https.port:9993}"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="USD{jboss.mail.server.host:localhost}" port="USD{jboss.mail.server.port:25}"/> </outbound-socket-binding> </socket-binding-group> 1.4.2.2.2. Managed domain When running in a managed domain, all socket binding groups are defined in the domain.xml file. There are five predefined socket binding groups: standard-sockets ha-sockets full-sockets full-ha-sockets load-balancer-sockets Each socket binding group specifies socket bindings for the technologies used by its corresponding profile. For example, the full-ha-sockets socket binding group defines several jgroups socket bindings, which are used by the full-ha profile for high availability. 
<socket-binding-groups> <socket-binding-group name="standard-sockets" default-interface="public"> <!-- Needed for server groups using the 'default' profile --> <socket-binding name="ajp" port="USD{jboss.ajp.port:8009}"/> <socket-binding name="http" port="USD{jboss.http.port:8080}"/> <socket-binding name="https" port="USD{jboss.https.port:8443}"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> </socket-binding-group> <socket-binding-group name="ha-sockets" default-interface="public"> <!-- Needed for server groups using the 'ha' profile --> ... </socket-binding-group> <socket-binding-group name="full-sockets" default-interface="public"> <!-- Needed for server groups using the 'full' profile --> ... </socket-binding-group> <socket-binding-group name="full-ha-sockets" default-interface="public"> <!-- Needed for server groups using the 'full-ha' profile --> <socket-binding name="ajp" port="USD{jboss.ajp.port:8009}"/> <socket-binding name="http" port="USD{jboss.http.port:8080}"/> <socket-binding name="https" port="USD{jboss.https.port:8443}"/> <socket-binding name="iiop" interface="unsecure" port="3528"/> <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/> <socket-binding name="jgroups-mping" interface="private" port="0" multicast-address="USD{jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/> <socket-binding name="jgroups-tcp" interface="private" port="7600"/> <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="USD{jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/> <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> </socket-binding-group> <socket-binding-group name="load-balancer-sockets" default-interface="public"> <!-- Needed for server groups using the 'load-balancer' profile --> ... </socket-binding-group> </socket-binding-groups> Note The socket configuration for the management interfaces is defined in the domain controller's host.xml file. 1.4.2.3. Configuring socket bindings When defining a socket binding, you can configure the port and interface attributes, as well as multicast settings such as multicast-address and multicast-port . For details on all available socket bindings attributes, see the Socket Binding Attributes section. Procedure Socket bindings can be configured using the management console or the management CLI. The following steps go through adding a socket binding group, adding a socket binding, and configuring socket binding settings using the management CLI. Add a new socket binding group. Note This step cannot be performed when running as a standalone server. Add a socket binding. Change the socket binding to use an interface other than the default, which is set by the socket binding group. The following example shows how the XML configuration may look after the above steps have been completed. <socket-binding-groups> ... 
<socket-binding-group name="new-sockets" default-interface="public"> <socket-binding name="new-socket-binding" interface="unsecure" port="1234"/> </socket-binding-group> </socket-binding-groups> 1.4.2.4. Port offsets A port offset is a numeric offset value added to all port values specified in the socket binding group for that server. This allows the server to inherit the port values defined in its socket binding group, with an offset to ensure that it does not conflict with any other servers on the same host. For instance, if the HTTP port of the socket binding group is 8080 , and a server uses a port offset of 100 , then its HTTP port is 8180 . Below is an example of setting a port offset of 250 for a server in a managed domain using the management CLI. Port offsets can be used for servers in a managed domain and for running multiple standalone servers on the same host. You can pass in a port offset when starting a standalone server using the jboss.socket.binding.port-offset property. 1.4.3. IPv6 addresses By default, JBoss EAP is configured to run using IPv4 addresses. The following procedures describe how to configure JBoss EAP to run using IPv6 addresses. 1.4.3.1. Configuring the JVM Stack for IPv6 Addresses You can configure your JBoss EAP to run using IPv6. Procedure To update your start-up configuration to run on IPv6 addresses, complete the following steps. Open the startup configuration file. When running as a standalone server, edit the EAP_HOME /bin/standalone.conf file (or standalone.conf.bat for Windows Server). When running in a managed domain, edit the EAP_HOME /bin/domain.conf file (or domain.conf.bat for Windows Server). Set the java.net.preferIPv4Stack property to false . Append the java.net.preferIPv6Addresses property and set it to true . The following example shows how the JVM options in the startup configuration file may look after making the above changes. # Specify options to pass to the Java VM. # if [ "xUSDJAVA_OPTS" = "x" ]; then JAVA_OPTS="-Xms1303m -Xmx1303m -Djava.net.preferIPv4Stack=false" JAVA_OPTS="USDJAVA_OPTS -Djboss.modules.system.pkgs=USDJBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true" JAVA_OPTS="USDJAVA_OPTS -Djava.net.preferIPv6Addresses=true" else 1.4.3.2. Default interface values updated to IPv6 addresses The default interface values in the configuration can be changed to IPv6 addresses. For example, the following management CLI command sets the management interface to the IPv6 loopback address ( ::1 ). After running the command, the following example shows how the XML configuration might look. <interfaces> <interface name="management"> <inet-address value="USD{jboss.bind.address.management:[::1]}"/> </interface> .... </interfaces> 1.5. Optimization of the JBoss EAP server configuration Once you have installed the JBoss EAP server , and you have created a management user , Red Hat recommends that you optimize your server configuration. Make sure you review information in the Performance Tuning Guide for information about how to optimize the server configuration to avoid common problems when deploying applications in a production environment. Common optimizations include setting ulimits , enabling garbage collection , creating Java heap dumps , and adjusting the thread pool size . It is also a good idea to apply any existing patches for your release of the product. Each patch for EAP contains numerous bug fixes. For more information, see Patching JBoss EAP in the Patching and Upgrading Guide for JBoss EAP.
[ "unzip jboss-eap-7.4.0.zip", "EAP_HOME /bin/standalone.sh", "EAP_HOME /bin/domain.sh", "EAP_HOME /bin/jboss-cli.sh --connect", "shutdown", "EAP_HOME /bin/add-user.sh", "EAP_HOME /bin/add-user.sh -u 'mgmtuser1' -p 'password1!' -g 'guest,mgmtgroup'", "EAP_HOME /bin/add-user.sh -u 'mgmtuser2' -p 'password1!' -sc ' /path/to /standaloneconfig/' -dc ' /path/to /domainconfig/' -up 'newname.properties'", "EAP_HOME /bin/jboss-cli.sh", "connect", "help", "deploy --help", "quit", "/subsystem=datasources/data-source=ExampleDS:read-attribute(name=enabled) { \"outcome\" => \"success\", \"result\" => true }", "/profile=default/subsystem=datasources/data-source=ExampleDS:read-attribute(name=enabled)", "/subsystem=datasources/data-source=ExampleDS:write-attribute(name=enabled,value=false)", "/host= HOST_NAME /server-config=server-one:start", "EAP_HOME /bin/standalone.sh --server-config=standalone-full.xml", "wildfly-configuration: subsystem: datasources: jdbc-driver: postgresql: driver-name: postgresql driver-xa-datasource-class-name: org.postgresql.xa.PGXADataSource driver-module-name: org.postgresql.jdbc data-source: PostgreSQLDS: enabled: true exception-sorter-class-name: org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter jndi-name: java:jboss/datasources/PostgreSQLDS jta: true max-pool-size: 20 min-pool-size: 0 connection-url: \"jdbc:postgresql://localhost:5432}/demo\" driver-name: postgresql user-name: postgres password: postgres validate-on-match: true background-validation: false background-validation-millis: 10000 flush-strategy: FailingConnectionOnly statistics-enable: false stale-connection-checker-class-name: org.jboss.jca.adapters.jdbc.extensions.novendor.NullStaleConnectionChecker valid-connection-checker-class-name: org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker transaction-isolation: TRANSACTION_READ_COMMITTED", "wildfly-configuration: subsystem: logging: console-handler: CONSOLE: level: !undefine", "wildfly-configuration: socket-binding-group: standard-sockets: remote-destination-outbound-socket-binding: remote-artemis: host: localhost port: 61616 subsystem: messaging-activemq: server: default: !remove remote-connector: artemis: socket-binding: remote-artemis pooled-connection-factory: RemoteConnectionFactory: connectors: - artemis entries: - \"java:jboss/RemoteConnectionFactory\" - \"java:jboss/exported/jms/RemoteConnectionFactory\" enable-amq1-prefix: false user: admin password: admin ejb3: default-resource-adapter-name: RemoteConnectionFactory ee: service: default-bindings: jms-connection-factory: \"java:jboss/RemoteConnectionFactory\"", "wildfly-configuration: subsystem: elytron: permission-set: default-permissions: permissions: !list-add - class-name: org.wildfly.transaction.client.RemoteTransactionPermission module: org.wildfly.transaction.client target-name: \"*\" index: 0", "./standalone.sh -y=/home/ehsavoie/dev/wildfly/config2.yml:config.yml -c standalone-full.xml", "EAP_HOME /bin/domain.sh --host-config=host-master.xml", ":take-snapshot { \"outcome\" => \"success\", \"result\" => \" EAP_HOME /standalone/configuration/standalone_xml_history/snapshot/20151022-133109702standalone.xml\" }", ":list-snapshots { \"outcome\" => \"success\", \"result\" => { \"directory\" => \" EAP_HOME /standalone/configuration/standalone_xml_history/snapshot\", \"names\" => [ \"20151022-133109702standalone.xml\", \"20151022-132715958standalone.xml\" ] } }", ":delete-snapshot(name=20151022-133109702standalone.xml)", "EAP_HOME /bin/standalone.sh 
--server-config=standalone_xml_history/snapshot/20151022-133109702standalone.xml", "<interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/> </interface>", "EAP_HOME /bin/standalone.sh -Djboss.bind.address= IP_ADDRESS", "USD{ SYSTEM_VALUE_1 USD{ SYSTEM_VALUE_2 }}", "<password>USD{VAULT::ds_ExampleDS::password::1}</password>", "<password>USD{VAULT::USD{datasource_name}::password::1}</password>", "/subsystem=ee:write-attribute(name=\"jboss-descriptor-property-replacement\",value= VALUE )", "/subsystem=ee:write-attribute(name=\"spec-descriptor-property-replacement\",value= VALUE )", "<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:127.0.0.1}\"/> </interface> <interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/> </interface> <interface name=\"private\"> <inet-address value=\"USD{jboss.bind.address.private:127.0.0.1}\"/> </interface> <interface name=\"unsecure\"> <inet-address value=\"USD{jboss.bind.address.unsecure:127.0.0.1}\"/> </interface> </interfaces>", "EAP_HOME /bin/standalone.sh -Djboss.bind.address= IP_ADDRESS", "/interface=external:add(nic=eth0)", "<interface name=\"external\"> <nic name=\"eth0\"/> </interface>", "/interface=default:add(subnet-match=192.168.0.0/16,up=true,multicast=true,not={point-to-point=true})", "<interface name=\"default\"> <subnet-match value=\"192.168.0.0/16\"/> <up/> <multicast/> <not> <point-to-point/> </not> </interface>", "/interface=public:write-attribute(name=inet-address,value=\"USD{jboss.bind.address:192.168.0.0}\")", "<interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:192.168.0.0}\"/> </interface>", "/host= HOST_NAME /server-config= SERVER_NAME /interface= INTERFACE_NAME :add(inet-address=127.0.0.1)", "<servers> <server name=\" SERVER_NAME \" group=\"main-server-group\"> <interfaces> <interface name=\" INTERFACE_NAME \"> <inet-address value=\"127.0.0.1\"/> </interface> </interfaces> </server> </servers>", "<socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"ajp\" port=\"USD{jboss.ajp.port:8009}\"/> <socket-binding name=\"http\" port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> <socket-binding name=\"management-http\" interface=\"management\" port=\"USD{jboss.management.http.port:9990}\"/> <socket-binding name=\"management-https\" interface=\"management\" port=\"USD{jboss.management.https.port:9993}\"/> <socket-binding name=\"txn-recovery-environment\" port=\"4712\"/> <socket-binding name=\"txn-status-manager\" port=\"4713\"/> <outbound-socket-binding name=\"mail-smtp\"> <remote-destination host=\"USD{jboss.mail.server.host:localhost}\" port=\"USD{jboss.mail.server.port:25}\"/> </outbound-socket-binding> </socket-binding-group>", "<socket-binding-groups> <socket-binding-group name=\"standard-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'default' profile --> <socket-binding name=\"ajp\" port=\"USD{jboss.ajp.port:8009}\"/> <socket-binding name=\"http\" port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> <socket-binding name=\"txn-recovery-environment\" port=\"4712\"/> <socket-binding name=\"txn-status-manager\" port=\"4713\"/> <outbound-socket-binding name=\"mail-smtp\"> <remote-destination host=\"localhost\" port=\"25\"/> </outbound-socket-binding> </socket-binding-group> 
<socket-binding-group name=\"ha-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'ha' profile --> </socket-binding-group> <socket-binding-group name=\"full-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'full' profile --> </socket-binding-group> <socket-binding-group name=\"full-ha-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'full-ha' profile --> <socket-binding name=\"ajp\" port=\"USD{jboss.ajp.port:8009}\"/> <socket-binding name=\"http\" port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> <socket-binding name=\"iiop\" interface=\"unsecure\" port=\"3528\"/> <socket-binding name=\"iiop-ssl\" interface=\"unsecure\" port=\"3529\"/> <socket-binding name=\"jgroups-mping\" interface=\"private\" port=\"0\" multicast-address=\"USD{jboss.default.multicast.address:230.0.0.4}\" multicast-port=\"45700\"/> <socket-binding name=\"jgroups-tcp\" interface=\"private\" port=\"7600\"/> <socket-binding name=\"jgroups-udp\" interface=\"private\" port=\"55200\" multicast-address=\"USD{jboss.default.multicast.address:230.0.0.4}\" multicast-port=\"45688\"/> <socket-binding name=\"modcluster\" port=\"0\" multicast-address=\"224.0.1.105\" multicast-port=\"23364\"/> <socket-binding name=\"txn-recovery-environment\" port=\"4712\"/> <socket-binding name=\"txn-status-manager\" port=\"4713\"/> <outbound-socket-binding name=\"mail-smtp\"> <remote-destination host=\"localhost\" port=\"25\"/> </outbound-socket-binding> </socket-binding-group> <socket-binding-group name=\"load-balancer-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'load-balancer' profile --> </socket-binding-group> </socket-binding-groups>", "/socket-binding-group=new-sockets:add(default-interface=public)", "/socket-binding-group=new-sockets/socket-binding=new-socket-binding:add(port=1234)", "/socket-binding-group=new-sockets/socket-binding=new-socket-binding:write-attribute(name=interface,value=unsecure)", "<socket-binding-groups> <socket-binding-group name=\"new-sockets\" default-interface=\"public\"> <socket-binding name=\"new-socket-binding\" interface=\"unsecure\" port=\"1234\"/> </socket-binding-group> </socket-binding-groups>", "/host=master/server-config=server-two/:write-attribute(name=socket-binding-port-offset,value=250)", "EAP_HOME /bin/standalone.sh -Djboss.socket.binding.port-offset=100", "-Djava.net.preferIPv4Stack=false", "-Djava.net.preferIPv6Addresses=true", "Specify options to pass to the Java VM. # if [ \"xUSDJAVA_OPTS\" = \"x\" ]; then JAVA_OPTS=\"-Xms1303m -Xmx1303m -Djava.net.preferIPv4Stack=false\" JAVA_OPTS=\"USDJAVA_OPTS -Djboss.modules.system.pkgs=USDJBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true\" JAVA_OPTS=\"USDJAVA_OPTS -Djava.net.preferIPv6Addresses=true\" else", "/interface=management:write-attribute(name=inet-address,value=\"USD{jboss.bind.address.management:[::1]}\")", "<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:[::1]}\"/> </interface> . </interfaces>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_guide/administering_jboss_eap
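The management CLI operations listed above can be grouped into a single scripted run with the jboss-cli.sh client in EAP_HOME/bin. The following is a hedged sketch rather than part of the original procedure: it reuses the illustrative names from the examples above (new-sockets, new-socket-binding, port 1234), assumes a server that is already running and reachable on the default management port, and adding a new socket binding group in this way generally applies to a managed domain configuration.

# Sketch: apply the socket-binding examples above as one management CLI batch.
EAP_HOME=/path/to/jboss-eap          # assumption: adjust to your installation
"$EAP_HOME"/bin/jboss-cli.sh --connect <<'EOF'
batch
/socket-binding-group=new-sockets:add(default-interface=public)
/socket-binding-group=new-sockets/socket-binding=new-socket-binding:add(port=1234)
/socket-binding-group=new-sockets/socket-binding=new-socket-binding:write-attribute(name=interface,value=unsecure)
run-batch
EOF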
Chapter 3. OpenShift CLI Manager
Chapter 3. OpenShift CLI Manager 3.1. CLI Manager Operator overview Important Using the CLI Manager Operator to install and manage plugins for the OpenShift CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1.1. About the CLI Manager Operator The CLI Manager Operator makes it easier to install and update CLI plugins. It runs in both connected and disconnected environments, and it is particularly useful in disconnected environments. Cluster administrators can add CLI plugins and plugin updates to the CLI Manager Operator, and users can then install and update CLI plugins when needed regardless of whether or not the environment is disconnected. 3.2. CLI Manager Operator release notes With the CLI Manager Operator, you can install CLI plugins in both connected and disconnected environments. Important Using the CLI Manager Operator to install and manage plugins for the OpenShift CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . These release notes track the development of the CLI Manager Operator for OpenShift Container Platform. For more information about the CLI Manager Operator, see About the CLI Manager Operator . 3.2.1. CLI Manager Operator 0.1.0 (Technology Preview) Issued: 19 November 2024 The following advisory is available for the CLI Manager Operator 0.1.0: RHEA-2024:8303 3.2.1.1. New features and enhancements This version is the initial Technology Preview release of the CLI Manager Operator. For installation information, see Installing the CLI Manager Operator . 3.3. Installing the CLI Manager Operator Note Krew always works with OpenShift CLI ( oc ) without the CLI Manager Operator installed. You can use the same commands outlined in this documentation to use Krew with oc . For more information, see Krew documentation . You can run the CLI Manager Operator in both connected and disconnected environments. In particular, it eases the installation and management of CLI plugins in disconnected environments. The CLI Manager Operator makes Krew compatible with the oc CLI. Cluster administrators can use the CLI Manager Operator to add CLI plugin custom resources that can then be accessed in both connected and disconnected environments. Cluster administrators install and configure the CLI Manager Operator, and users then add the custom index to Krew and add CLI plugins to the CLI Manager Operator. Important Using the CLI Manager Operator to install and manage plugins for the OpenShift CLI is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.3.1. Installing the CLI Manager Operator Install the CLI Manager Operator to facilitate adding CLI plugins in both connected and disconnected environments. Prerequisites Krew is installed . You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the CLI Manager Operator: Navigate to Administration Namespaces and click Create Namespace . In the Name field, enter openshift-cli-manager-operator and click Create . Install the CLI Manager Operator: Navigate to Operators OperatorHub . In the filter box, enter CLI Manager Operator . Select the CLI Manager Operator and click Install . On the Install Operator page, complete the following steps: Ensure that the Update channel is set to tech preview , which installs the latest Technology Preview release of the CLI Manager Operator. From the drop-down menu, select A specific namespace on the cluster and select openshift-cli-manager-operator . Click Install . Create the CliManager resource by completing the following steps: Navigate to Installed Operators . Select CLI Manager Operator . Select the CLI Manager tab. Click Create CliManager . Use the default Name . Click Create . The new CliManager resource is listed in the CLI Manager tab. Verification Navigate to Operators Installed Operators . Verify that CLI Manager Operator is listed with a Status of Succeeded . 3.3.2. Adding the CLI Manager Operator custom index to Krew You can use the terminal to add the CLI manager custom index to Krew. This procedure is required for the CLI Manager Operator to function correctly and needs to be done only once. The custom index connects Krew to the CLI Manager Operator binaries and enables the CLI Manager Operator to work in disconnected environments. Note If you use self-signed certificates, mark the certificate as trusted on your local operating system to use Krew. Prerequisites Krew is installed . The CLI Manager Operator is installed. Procedure To establish the ROUTE variable, enter the following command: USD ROUTE=USD(oc get route/openshift-cli-manager -n openshift-cli-manager-operator -o=jsonpath='{.spec.host}') To add the custom index to Krew, enter the following command: USD oc krew index add <custom_index_name> https://USDROUTE/cli-manager To update Krew, enter the following command and check for any errors: USD oc krew update Example output Updated the local copy of plugin index. Updated the local copy of plugin index <custom_index_name>. New plugins available: * ocp/<plugin_name> 3.3.3. Adding a plugin to the CLI Manager Operator You can add a CLI plugin to the CLI Manager Operator by using the YAML View. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. The CLI Manager Operator is installed. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . 
From the list, select CLI Manager Operator . Select the CLI Plugin tab. Click Create Plugin . In the text box, enter the information for the plugin you are installing. See the following example YAML file. Example YAML file to add a plugin apiVersion: config.openshift.io/v1alpha1 kind: Plugin metadata: name: <plugin_name> 1 spec: description: <description_of_plugin> homepage: <plugin_homepage> platforms: - bin: 2 files: - from: <plugin_file_path> to: . image: <plugin_image> imagePullSecret: 3 platform: <platform> 4 shortDescription: <short_description_of_plugin> version: <version> 5 1 The name of the plugin you plan to use in commands. 2 Bin specifies the path to the plugin executable. 3 Optional: If the registry is not public, add a pull secret to access your plugin image. 4 Add the architecture for your system; for example, linux/amd64 , darwin/arm64 , windows/amd64 , or another architecture. 5 Version must be in v0.0.0 format. Click Save . Verification Enter the following command to see if the plugin is listed and has been added successfully: USD oc get plugin/<plugin_name> -o yaml Example output <plugin_name> ready to be served. 3.4. Using the CLI Manager Operator After the cluster administrator sets up and configures the CLI Manager Operator, users can use it to install, update, and uninstall CLI plugins. Important Using the CLI Manager Operator to install and manage plugins for the OpenShift CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.4.1. Installing CLI plugins with the CLI Manager Operator You can install CLI plugins using the CLI Manager Operator. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. The CLI Manager is installed. The CLI Manager custom index has been added to Krew. You are using OpenShift Container Platform 4.17 or later. Procedure To list all available plugins, run the following command: USD oc krew search To get information about a plugin, run the following command: USD oc krew info <plugin_name> To install a plugin, run the following command: USD oc krew install <plugin_name> To list all plugins that were installed by Krew, run the following command: USD oc krew list 3.4.2. Upgrading a plugin with the CLI Manager Operator You can upgrade a CLI plugin to a newer version with the CLI Manager Operator. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. The CLI Manager Operator is installed. The plugin you are upgrading is installed. Procedure Using the CLI, enter the following command: oc edit plugin/<plugin_name> Edit the YAML file to include the new specifications for your plugin. Example YAML file to upgrade a plugin apiVersion: config.openshift.io/v1alpha1 kind: Plugin metadata: name: <plugin_name> 1 spec: description: <description_of_plugin> homepage: <plugin_homepage> platforms: - bin: 2 files: - from: <plugin_file_path> to: . 
image: <plugin_image> imagePullSecret: 3 platform: <platform> 4 shortDescription: <short_description_of_plugin> version: <version> 5 1 The name of the plugin you plan to use in commands. 2 Bin specifies the path to the plugin executable. 3 Optional: If the registry is not public, add a pull secret to access your plugin image. 4 Add the architecture for your system platform; for example, linux/amd64 , darwin/arm64 , windows/amd64 , or another architecture. 5 Version of the plugin, in v0.0.0 format. Save the file. 3.4.3. Updating CLI plugins with the CLI Manager Operator You can update a plugin that was installed for the OpenShift CLI ( oc ) with the CLI Manager Operator. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. The CLI Manager Operator is installed. The custom index has been added to Krew by the cluster administrator. The plugin updates have been added to the CLI Manager Operator by the cluster administrator. The plugin you are updating is already installed. Procedure To update a single plugin, run the following command: USD oc krew upgrade <plugin_name> To update all plugins that were installed by Krew, run the following command: USD oc krew upgrade 3.4.4. Uninstalling a CLI plugin with the CLI Manager Operator You can uninstall a plugin that was installed for the OpenShift CLI ( oc ) with the CLI Manager Operator. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. You have installed a plugin for the OpenShift CLI with the CLI Manager Operator. Procedure To uninstall a plugin, run the following command: USD oc krew uninstall <plugin_name> 3.5. Uninstalling the CLI Manager Operator You can remove the CLI Manager Operator from OpenShift Container Platform by uninstalling the CLI Manager Operator and removing its related resources. Important Using the CLI Manager Operator to install and manage plugins for the OpenShift CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.5.1. Uninstalling the CLI Manager Operator You can uninstall the CLI Manager Operator by using the web console. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. The CLI Manager Operator is installed. Procedure Log in to the OpenShift Container Platform web console. Uninstall the CLI Manager Operator by completing the following steps: Navigate to Operators Installed Operators . Click the Options menu to the CLI Manager Operator entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 3.5.2. Removing CLI Manager Operator resources Optionally, after you uninstall the CLI Manager Operator, you can remove its related resources from your cluster. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. 
Remove the openshift-cli-manager-operator namespace: Navigate to Administration Namespaces . Click the Options menu to the openshift-cli-manager-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-cli-manager-operator in the field and click Delete .
[ "ROUTE=USD(oc get route/openshift-cli-manager -n openshift-cli-manager-operator -o=jsonpath='{.spec.host}')", "oc krew index add <custom_index_name> https://USDROUTE/cli-manager", "oc krew update", "Updated the local copy of plugin index. Updated the local copy of plugin index <custom_index_name>. New plugins available: * ocp/<plugin_name>", "apiVersion: config.openshift.io/v1alpha1 kind: Plugin metadata: name: <plugin_name> 1 spec: description: <description_of_plugin> homepage: <plugin_homepage> platforms: - bin: 2 files: - from: <plugin_file_path> to: . image: <plugin_image> imagePullSecret: 3 platform: <platform> 4 shortDescription: <short_description_of_plugin> version: <version> 5", "oc get plugin/<plugin_name> -o yaml", "<plugin_name> ready to be served.", "oc krew search", "oc krew info <plugin_name>", "oc krew install <plugin_name>", "oc krew list", "edit plugin/<plugin_name>", "apiVersion: config.openshift.io/v1alpha1 kind: Plugin metadata: name: <plugin_name> 1 spec: description: <description_of_plugin> homepage: <plugin_homepage> platforms: - bin: 2 files: - from: <plugin_file_path> to: . image: <plugin_image> imagePullSecret: 3 platform: <platform> 4 shortDescription: <short_description_of_plugin> version: <version> 5", "oc krew upgrade <plugin_name>", "oc krew upgrade", "oc krew uninstall <plugin_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/openshift-cli-manager
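The route lookup, index registration, and installation commands above can be run as one short shell session. This is a sketch only: <custom_index_name> and <plugin_name> are placeholders, and it assumes the CLI Manager Operator is installed and its route exists in the openshift-cli-manager-operator namespace as described in the procedures above.

# Sketch: register the CLI Manager custom index with Krew and install a plugin.
ROUTE=$(oc get route/openshift-cli-manager -n openshift-cli-manager-operator -o=jsonpath='{.spec.host}')
oc krew index add <custom_index_name> https://$ROUTE/cli-manager   # one-time index setup
oc krew update                                                     # refresh the local index copy
oc krew search                                                     # list the available plugins
oc krew install <plugin_name>                                      # install from the custom index
oc krew list                                                       # confirm the plugin is installed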
Appendix G. Object Storage Daemon (OSD) configuration options
Appendix G. Object Storage Daemon (OSD) configuration options The following are Ceph Object Storage Daemon (OSD) configuration options that can be set during deployment. You can set these configuration options with the ceph config set osd CONFIGURATION_OPTION VALUE command. osd_uuid Description The universally unique identifier (UUID) for the Ceph OSD. Type UUID Default The UUID. Note The osd uuid applies to a single Ceph OSD. The fsid applies to the entire cluster. osd_data Description The path to the OSD's data. You must create the directory when deploying Ceph. Mount a drive for OSD data at this mount point. Type String Default /var/lib/ceph/osd/USDcluster-USDid osd_max_write_size Description The maximum size of a write in megabytes. Type 32-bit Integer Default 90 osd_client_message_size_cap Description The largest client data message allowed in memory. Type 64-bit Integer Unsigned Default 500MB default. 500*1024L*1024L osd_class_dir Description The class path for RADOS class plug-ins. Type String Default USDlibdir/rados-classes osd_max_scrubs Description The maximum number of simultaneous scrub operations for a Ceph OSD. Type 32-bit Int Default 1 osd_scrub_thread_timeout Description The maximum time in seconds before timing out a scrub thread. Type 32-bit Integer Default 60 osd_scrub_finalize_thread_timeout Description The maximum time in seconds before timing out a scrub finalize thread. Type 32-bit Integer Default 60*10 osd_scrub_begin_hour Description This restricts scrubbing to this hour of the day or later. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing the entire day. Along with osd_scrub_end_hour , they define a time window, in which the scrubs can happen. But a scrub is performed no matter whether the time window allows or not, as long as the placement group's scrub interval exceeds osd_scrub_max_interval . Type Integer Default 0 Allowed range [0,23] osd_scrub_end_hour Description This restricts scrubbing to the hour earlier than this. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing for the entire day. Along with osd_scrub_begin_hour , they define a time window, in which the scrubs can happen. But a scrub is performed no matter whether the time window allows or not, as long as the placement group's scrub interval exceeds osd_scrub_max_interval . Type Integer Default 0 Allowed range [0,23] osd_scrub_load_threshold Description The maximum load. Ceph will not scrub when the system load (as defined by the getloadavg() function) is higher than this number. Default is 0.5 . Type Float Default 0.5 osd_scrub_min_interval Description The minimum interval in seconds for scrubbing the Ceph OSD when the Red Hat Ceph Storage cluster load is low. Type Float Default Once per day. 60*60*24 osd_scrub_max_interval Description The maximum interval in seconds for scrubbing the Ceph OSD irrespective of cluster load. Type Float Default Once per week. 7*60*60*24 osd_scrub_interval_randomize_ratio Description Takes the ratio and randomizes the scheduled scrub between osd scrub min interval and osd scrub max interval . Type Float Default 0.5 . mon_warn_not_scrubbed Description Number of seconds after osd_scrub_interval to warn about any PGs that were not scrubbed. Type Integer Default 0 (no warning). osd_scrub_chunk_min Description The object store is partitioned into chunks which end on hash boundaries. For chunky scrubs, Ceph scrubs objects one chunk at a time with writes blocked for that chunk. 
The osd scrub chunk min setting represents the minimum number of chunks to scrub. Type 32-bit Integer Default 5 osd_scrub_chunk_max Description The maximum number of chunks to scrub. Type 32-bit Integer Default 25 osd_scrub_sleep Description The time to sleep between deep scrub operations. Type Float Default 0 (or off). osd_scrub_during_recovery Description Allows scrubbing during recovery. Type Bool Default false osd_scrub_invalid_stats Description Forces extra scrub to fix stats marked as invalid. Type Bool Default true osd_scrub_priority Description Controls queue priority of scrub operations versus client I/O. Type Unsigned 32-bit Integer Default 5 osd_requested_scrub_priority Description The priority set for user requested scrub on the work queue. If this value were to be smaller than osd_client_op_priority , it can be boosted to the value of osd_client_op_priority when scrub is blocking client operations. Type Unsigned 32-bit Integer Default 120 osd_scrub_cost Description Cost of scrub operations in megabytes for queue scheduling purposes. Type Unsigned 32-bit Integer Default 52428800 osd_deep_scrub_interval Description The interval for deep scrubbing, that is fully reading all data. The osd scrub load threshold parameter does not affect this setting. Type Float Default Once per week. 60*60*24*7 osd_deep_scrub_stride Description Read size when doing a deep scrub. Type 32-bit Integer Default 512 KB. 524288 mon_warn_not_deep_scrubbed Description Number of seconds after osd_deep_scrub_interval to warn about any PGs that were not scrubbed. Type Integer Default 0 (no warning) osd_deep_scrub_randomize_ratio Description The rate at which scrubs will randomly become deep scrubs (even before osd_deep_scrub_interval has passed). Type Float Default 0.15 or 15% osd_deep_scrub_update_digest_min_age Description How many seconds old objects must be before scrub updates the whole-object digest. Type Integer Default 7200 (120 hours) osd_deep_scrub_large_omap_object_key_threshold Description Warning when you encounter an object with more OMAP keys than this. Type Integer Default 200000 osd_deep_scrub_large_omap_object_value_sum_threshold Description Warning when you encounter an object with more OMAP key bytes than this. Type Integer Default 1 G osd_delete_sleep Description Time in seconds to sleep before the removal transaction. This throttles the placement group deletion process. Type Float Default 0.0 osd_delete_sleep_hdd Description Time in seconds to sleep before the removal transaction for HDDs. Type Float Default 5.0 osd_delete_sleep_ssd Description Time in seconds to sleep before the removal transaction for SSDs. Type Float Default 1.0 osd_delete_sleep_hybrid Description Time in seconds to sleep before the removal transaction when Ceph OSD data is on HDD and OSD journal or WAL and DB is on SSD. Type Float Default 1.0 osd_op_num_shards Description The number of shards for client operations. Type 32-bit Integer Default 0 osd_op_num_threads_per_shard Description The number of threads per shard for client operations. Type 32-bit Integer Default 0 osd_op_num_shards_hdd Description The number of shards for HDD operations. Type 32-bit Integer Default 5 osd_op_num_threads_per_shard_hdd Description The number of threads per shard for HDD operations. Type 32-bit Integer Default 1 osd_op_num_shards_ssd Description The number of shards for SSD operations. Type 32-bit Integer Default 8 osd_op_num_threads_per_shard_ssd Description The number of threads per shard for SSD operations. 
Type 32-bit Integer Default 2 osd_op_queue Description Sets the type of queue to be used for operation prioritizing within Ceph OSDs. Requires a restart of the OSD daemons. Type String Default wpq Valid choices wpq , mclock_scheduler , debug_random Important The mClock OSD scheduler is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details. osd_op_queue_cut_off Description Selects which priority operations are sent to the strict queue and which are sent to the normal queue. Requires a restart of the OSD daemons. The low setting sends all replication and higher operations to the strict queue, while the high option sends only replication acknowledgment operations and higher to the strict queue. The high setting helps when some Ceph OSDs in the cluster are very busy, especially when combined with the wpq option in the osd_op_queue setting. Ceph OSDs that are very busy handling replication traffic can deplete primary client traffic on these OSDs without these settings. Type String Default high Valid choices low , high , debug_random osd_client_op_priority Description The priority set for client operations. It is relative to osd recovery op priority . Type 32-bit Integer Default 63 Valid Range 1-63 osd_recovery_op_priority Description The priority set for recovery operations. It is relative to osd client op priority . Type 32-bit Integer Default 3 Valid Range 1-63 osd_op_thread_timeout Description The Ceph OSD operation thread timeout in seconds. Type 32-bit Integer Default 15 osd_op_complaint_time Description An operation becomes complaint worthy after the specified number of seconds have elapsed. Type Float Default 30 osd_disk_threads Description The number of disk threads, which are used to perform background disk intensive OSD operations such as scrubbing and snap trimming. Type 32-bit Integer Default 1 osd_op_history_size Description The maximum number of completed operations to track. Type 32-bit Unsigned Integer Default 20 osd_op_history_duration Description The oldest completed operation to track. Type 32-bit Unsigned Integer Default 600 osd_op_log_threshold Description How many operations logs to display at once. Type 32-bit Integer Default 5 osd_op_timeout Description The time in seconds after which running OSD operations time out. Type Integer Default 0 Important Do not set the osd op timeout option unless your clients can handle the consequences. For example, setting this parameter on clients running in virtual machines can lead to data corruption because the virtual machines interpret this timeout as a hardware failure. osd_max_backfills Description The maximum number of backfill operations allowed to or from a single OSD. Type 64-bit Unsigned Integer Default 1 osd_backfill_scan_min Description The minimum number of objects per backfill scan. Type 32-bit Integer Default 64 osd_backfill_scan_max Description The maximum number of objects per backfill scan. Type 32-bit Integer Default 512 osd_backfill_full_ratio Description Refuse to accept backfill requests when the Ceph OSD's full ratio is above this value. 
Type Float Default 0.85 osd_backfill_retry_interval Description The number of seconds to wait before retrying backfill requests. Type Double Default 30.000000 osd_map_dedup Description Enable removing duplicates in the OSD map. Type Boolean Default true osd_map_cache_size Description The size of the OSD map cache in megabytes. Type 32-bit Integer Default 50 osd_map_cache_bl_size Description The size of the in-memory OSD map cache in OSD daemons. Type 32-bit Integer Default 50 osd_map_cache_bl_inc_size Description The size of the in-memory OSD map cache incrementals in OSD daemons. Type 32-bit Integer Default 100 osd_map_message_max Description The maximum map entries allowed per MOSDMap message. Type 32-bit Integer Default 40 osd_snap_trim_thread_timeout Description The maximum time in seconds before timing out a snap trim thread. Type 32-bit Integer Default 60*60*1 osd_pg_max_concurrent_snap_trims Description The max number of parallel snap trims/PG. This controls how many objects per PG to trim at once. Type 32-bit Integer Default 2 osd_snap_trim_sleep Description Insert a sleep between every trim operation a PG issues. Type 32-bit Integer Default 0 osd_snap_trim_sleep_hdd Description Time in seconds to sleep before the snapshot trimming for HDDs. Type Float Default 5.0 osd_snap_trim_sleep_ssd Description Time in seconds to sleep before the snapshot trimming operation for SSD OSDs, including NVMe. Type Float Default 0.0 osd_snap_trim_sleep_hybrid Description Time in seconds to sleep before the snapshot trimming operation when OSD data is on an HDD and the OSD journal or WAL and DB is on an SSD. Type Float Default 2.0 osd_max_trimming_pgs Description The max number of trimming PGs Type 32-bit Integer Default 2 osd_backlog_thread_timeout Description The maximum time in seconds before timing out a backlog thread. Type 32-bit Integer Default 60*60*1 osd_default_notify_timeout Description The OSD default notification timeout (in seconds). Type 32-bit Integer Unsigned Default 30 osd_check_for_log_corruption Description Check log files for corruption. Can be computationally expensive. Type Boolean Default false osd_remove_thread_timeout Description The maximum time in seconds before timing out a remove OSD thread. Type 32-bit Integer Default 60*60 osd_command_thread_timeout Description The maximum time in seconds before timing out a command thread. Type 32-bit Integer Default 10*60 osd_command_max_records Description Limits the number of lost objects to return. Type 32-bit Integer Default 256 osd_auto_upgrade_tmap Description Uses tmap for omap on old objects. Type Boolean Default true osd_tmapput_sets_users_tmap Description Uses tmap for debugging only. Type Boolean Default false osd_preserve_trimmed_log Description Preserves trimmed log files, but uses more disk space. Type Boolean Default false osd_recovery_delay_start Description After peering completes, Ceph delays for the specified number of seconds before starting to recover objects. Type Float Default 0 osd_recovery_max_active Description The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but the requests place an increased load on the cluster. Type 32-bit Integer Default 0 osd_recovery_max_active_hdd Description The number of active recovery requests per Ceph OSD at one time, if the primary device is HDD. Type Integer Default 3 osd_recovery_max_active_ssd Description The number of active recovery requests per Ceph OSD at one time, if the primary device is SSD. 
Type Integer Default 10 osd_recovery_sleep Description Time in seconds to sleep before the recovery or backfill operation. Increasing this value slows down recovery operation while client operations are less impacted. Type Float Default 0.0 osd_recovery_sleep_hdd Description Time in seconds to sleep before the recovery or backfill operation for HDDs. Type Float Default 0.1 osd_recovery_sleep_ssd Description Time in seconds to sleep before the recovery or backfill operation for SSDs. Type Float Default 0.0 osd_recovery_sleep_hybrid Description Time in seconds to sleep before the recovery or backfill operation when Ceph OSD data is on HDD and OSD journal or WAL and DB is on SSD. Type Float Default 0.025 osd_recovery_max_chunk Description The maximum size of a recovered chunk of data to push. Type 64-bit Integer Unsigned Default 8388608 osd_recovery_threads Description The number of threads for recovering data. Type 32-bit Integer Default 1 osd_recovery_thread_timeout Description The maximum time in seconds before timing out a recovery thread. Type 32-bit Integer Default 30 osd_recover_clone_overlap Description Preserves clone overlap during recovery. Should always be set to true . Type Boolean Default true rados_osd_op_timeout Description Number of seconds that RADOS waits for a response from the OSD before returning an error from a RADOS operation. A value of 0 means no limit. Type Double Default 0
[ "IMPORTANT: Red Hat does not recommend changing the default." ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/osd-object-storage-daemon-configuration-options_conf
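All of the options in this appendix are applied with the ceph config set osd command noted at the start of the appendix. The following sketch shows a few representative settings; the values are illustrative only and should be tuned for your cluster and hardware.

# Sketch: set a few OSD options from this appendix and read one back.
ceph config set osd osd_max_backfills 1             # limit concurrent backfill operations
ceph config set osd osd_recovery_max_active_hdd 3   # active recovery requests per HDD OSD
ceph config set osd osd_scrub_begin_hour 22         # restrict the scrub window start
ceph config set osd osd_scrub_end_hour 6            # restrict the scrub window end
ceph config get osd osd_max_backfills               # verify the stored value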
Release Notes for .NET 9.0 RPM packages
Release Notes for .NET 9.0 RPM packages .NET 9.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/net/9.0/html-single/release_notes_for_.net_9.0_rpm_packages/index
6.14. Documentation
6.14. Documentation release-notes component The Release Notes document included in Red Hat Enterprise Linux 6.5 and available on the Customer Portal incorrectly lists information about the FSTEK certification in all languages. Please consult the online English version of the Release Notes, which is the latest and most up-to-date version. release-notes component The Bengali (bn-IN) and Simplified Chinese (zh-CN) translations of the Release Notes included in Red Hat Enterprise Linux 6.5 and on the Customer Portal contain several untranslated strings.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/documentation-issues
9.3. External Provider Networks
9.3. External Provider Networks 9.3.1. Importing Networks From External Providers To use networks from an external network provider (OpenStack Networking or any third-party provider that implements the OpenStack Neutron REST API), register the provider with the Manager. See Adding an OpenStack Network Service Neutron for Network Provisioning or Adding an External Network Provider for more information. Then, use the following procedure to import the networks provided by that provider into the Manager so the networks can be used by virtual machines. Importing a Network From an External Provider Click Network Networks . Click Import . From the Network Provider drop-down list, select an external provider. The networks offered by that provider are automatically discovered and listed in the Provider Networks list. Using the check boxes, select the networks to import in the Provider Networks list and click the down arrow to move those networks into the Networks to Import list. You can customize the name of the network that you are importing. To customize the name, click the network's name in the Name column, and change the text. From the Data Center drop-down list, select the data center into which the networks will be imported. Optional: Clear the Allow All check box to prevent that network from being available to all users. Click Import . The selected networks are imported into the target data center and can be attached to virtual machines. See Adding a New Network Interface in the Virtual Machine Management Guide for more information. 9.3.2. Limitations to Using External Provider Networks The following limitations apply to using logical networks imported from an external provider in a Red Hat Virtualization environment. Logical networks offered by external providers must be used as virtual machine networks, and cannot be used as display networks. The same logical network can be imported more than once, but only to different data centers. You cannot edit logical networks offered by external providers in the Manager. To edit the details of a logical network offered by an external provider, you must edit the logical network directly from the external provider that provides that logical network. Port mirroring is not available for virtual network interface cards connected to logical networks offered by external providers. If a virtual machine uses a logical network offered by an external provider, that provider cannot be deleted from the Manager while the logical network is still in use by the virtual machine. Networks offered by external providers are non-required. As such, scheduling for clusters in which such logical networks have been imported will not take those logical networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the logical network on hosts in clusters in which such logical networks have been imported. 9.3.3. Configuring Subnets on External Provider Logical Networks A logical network provided by an external provider can only assign IP addresses to virtual machines if one or more subnets have been defined on that logical network. If no subnets are defined, virtual machines will not be assigned IP addresses. If there is one subnet, virtual machines will be assigned an IP address from that subnet, and if there are multiple subnets, virtual machines will be assigned an IP address from any of the available subnets. 
The DHCP service provided by the external network provider on which the logical network is hosted is responsible for assigning these IP addresses. While the Red Hat Virtualization Manager automatically discovers predefined subnets on imported logical networks, you can also add or remove subnets to or from logical networks from within the Manager. If you add Open Virtual Network (OVN) (ovirt-provider-ovn) as an external network provider, multiple subnets can be connected to each other by routers. To manage these routers, you can use the OpenStack Networking API v2.0 . Please note, however, that ovirt-provider-ovn has a limitation: Source NAT (enable_snat in the OpenStack API) is not implemented. 9.3.4. Adding Subnets to External Provider Logical Networks Create a subnet on a logical network provided by an external provider. Adding Subnets to External Provider Logical Networks Click Network Networks . Click the logical network's name to open the details view. Click the Subnets tab. Click New . Enter a Name and CIDR for the new subnet. From the IP Version drop-down list, select either IPv4 or IPv6 . Click OK . Note For IPv6, Red Hat Virtualization supports only static addressing. 9.3.5. Removing Subnets from External Provider Logical Networks Remove a subnet from a logical network provided by an external provider. Removing Subnets from External Provider Logical Networks Click Network Networks . Click the logical network's name to open the details view. Click the Subnets tab. Select a subnet and click Remove . Click OK . 9.3.6. Assigning Security Groups to Logical Networks and Ports Note This feature is only available when Open Virtual Network (OVN) is added as an external network provider (as ovirt-provider-ovn). Security groups cannot be created through the Red Hat Virtualization Manager. You must create security groups through OpenStack Networking API v2.0 or Ansible. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network. You can also use security groups to filter traffic at the port level. In Red Hat Virtualization 4.2.7, security groups are disabled by default. Assigning Security Groups to Logical Networks Click Compute Clusters . Click the cluster name to open the details view. Click the Logical Networks tab. Click Add Network and define the properties, ensuring that you select ovirt-provider-ovn from the External Providers drop-down list. For more information, see Section 9.1.2, "Creating a New Logical Network in a Data Center or Cluster" . Select Enabled from the Security Group drop-down list. For more details see Section 9.1.7, "Logical Network General Settings Explained" . Click OK . Create security groups using either OpenStack Networking API v2.0 or Ansible . Create security group rules using either OpenStack Networking API v2.0 or Ansible . Update the ports with the security groups that you defined using either OpenStack Networking API v2.0 or Ansible . Optional. Define whether the security feature is enabled at the port level. Currently, this is only possible using the OpenStack Networking API . If the port_security_enabled attribute is not set, it will default to the value specified in the network to which it belongs.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-External_Provider_Networks
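Because security groups cannot be created through the Manager, the procedure above points to the OpenStack Networking API v2.0 or Ansible. The curl sketch below only illustrates the general shape of those API calls: the endpoint URL and port, the authentication token, and the group and rule values are assumptions that must be replaced with the details of your ovirt-provider-ovn deployment.

# Sketch: create a security group and an ingress rule through the
# OpenStack Networking (Neutron) v2.0 API exposed by the external provider.
PROVIDER=https://ovn-provider.example.com:9696    # assumption: provider API endpoint
TOKEN=<authentication_token>                      # assumption: token issued by the provider
curl -k -X POST "$PROVIDER/v2.0/security-groups" \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"security_group": {"name": "web-sg", "description": "Allow inbound HTTP"}}'
curl -k -X POST "$PROVIDER/v2.0/security-group-rules" \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"security_group_rule": {"security_group_id": "<security_group_id>", "direction": "ingress", "protocol": "tcp", "port_range_min": 80, "port_range_max": 80}}'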
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using local storage devices, you can create internal cluster resources. This approach internally provisions base services, and all applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that your resource requirements are met. See requirements for installing OpenShift Data Foundation using local storage devices. If the Token authentication method is selected for encryption on the external key management system (KMS), refer to Enabling cluster-wide encryption with the Token authentication using KMS. Ensure that you are using signed certificates on your Vault servers. After you have addressed the above, follow these steps in the order given: Install the Red Hat OpenShift Data Foundation Operator. Install the Local Storage Operator. Find the available storage devices. Create the OpenShift Data Foundation cluster service on IBM Z. 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses one or more of the available raw block devices. The devices you use must be empty; the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide. 1.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions. Carefully select a unique path name as the backend path that follows the naming convention, because you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy that restricts users to performing write or delete operations on the secret: Create a token that matches the above policy:
[ "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_z/preparing_to_deploy_openshift_data_foundation
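After the backend path, policy, and token shown above are created, a short smoke test can confirm that the token is able to write and read under the odf path. This is an optional sketch: the secret name and key are placeholders, and it assumes the vault CLI on your workstation is already pointed at the Vault server.

# Sketch: verify the token created above can write and read under the odf path.
vault login <token_created_above>       # authenticate with the token bound to the odf policy
vault kv put odf/smoke-test check=ok    # write a throwaway secret
vault kv get odf/smoke-test             # read it back
vault kv delete odf/smoke-test          # clean up the test secret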
8.2. OData Version 4.0 Support
8.2. OData Version 4.0 Support Red Hat JBoss Data Virtualization strives to be compliant with the OData specification. For example, if you have deployed a VDB named northwind that has a customers table in a NW model, then you can access that table with an HTTP GET via this URL: http://localhost:8080/odata4/northwind/NW/customers. This is akin to making a JDBC/ODBC connection and issuing this SQL: Note Use correct case (upper or lower) in the resource path. Unlike SQL, the names used in the URI as case-sensitive. Note The returned results from the OData query are output in either Atom/AtomPub XML or JSON format. JSON results are returned by default. You can submit predicates with your query to filter the results: http://localhost:8080/odata4/northwind/NW/customers?USDfilter=name eq 'bob' Note The spaces around 'eq' are for readability of the example only; in real URLs they must be percent-encoded as %20. OData mandates percent encoding for all spaces in URLs. http://docs.oasis-open.org/odata/odata/v4.0/odata-v4.0-part2-url-conventions.html This is similar to making a JDBC/ODBC connection and issuing the SQL To request the result to be formatted in a specific format, add the query option USDformat like this: http://localhost:8080/odata4/northwind/NW/customers?USDformat=JSON Query options can be combined as needed. For example here is how you format with a filter: OData allows for querying navigations from one entity to another. A navigation is similar to the foreign key relationships in relational databases. For example, if the customers table has an exported key to the orders table on the customers primary key called the customer_fk, then an OData GET could be issued like this: http://localhost:8080/odata4/northwind/NW/customers(1234)/customer_fk?USDfilter=orderdate gt 2012-12-31T21:23:38Z This would be akin to making a JDBC/ODBC connection and issuing this SQL: Note For detailed protocol access you can read the specification at http://odata.org . You can also read this very useful web resource for an example of accessing an OData server. Important If you are not seeing all the rows, see the configuration section below for more details. Generally batching is being utilized, which tooling should understand automatically, and additional queries with a USDskiptoken query option specified are needed: http://localhost:8080/odata4/northwind/NW/customers?USDskiptoken=xxx Important Sometimes you may encounter an "EntitySet Not Found" error. This happens when you issue the above query and you see a message like this: This message means that either you supplied an incorrect model-name/table-name combination. Check that the spelling and case are correct. It is possible that the entity is not part of the metadata, such as when a table does not have any PRIMARY KEY or UNIQUE KEY(s). It is possible to update your data. Using the OData protocol it is possible to perform CREATE/UPDATE/DELETE operations along with the READ operations shown above. These operations use different HTTP methods. INSERT/CREATE is accomplished through the "POST" HTTP method. Here is an example: An UPDATE is performed with an HTTP "PUT": The DELETE operation uses the HTTP "DELETE" method. 8.2.1. Security By default OData access is secured using HTTPBasic authentication. The user will be authenticated against Red Hat JBoss Data Virtualization's default security domain "teiid-security". Important Users are expected to have the odata role. 
Be sure to create user with this role when you are using add-user.sh script to create a new user. However, if you wish to change the security domain use a deployment-overlay to override the web.xml file in the odata4 file in the EAP_HOME/modules/org/jboss/teiid/main/deployments directory. The OData WAR can also support Kerberos, SAML and OAuth2 authentications. To learn about these, please see the Security Guide. 8.2.2. Configuration The OData WAR file can be configured by configuring the following properties in the web.xml file: Table 8.3. Configuring OData 4 Property Name Description Default Value batch-size Number of rows to send back each time, -1 returns all rows 256 skiptoken-cache-time Time interval between the results being recycled/expired between USDskiptoken requests 300000 local-transport-name Data Virtualization Local transport name for connection odata invalid-xml10-character-replacement Replacement string if an invalid XML 1.0 character appears in the data - note that this replacement will occur even if JSON is requested. No value (the default) means that an exception will be thrown with XML results if such a character is encountered. NA proxy-base-uri Defines the proxy server's URI to be used in OData responses. NA connection.XXX Sets XXX as an execution property on the local connection. Can be used for example to enable result set cache mode. NA Note If the Data Virtualization server is configured behind a proxy server or deployed in cloud environment, or using a load-balancer then the URI of the server which is handling the OData request is different from URI of proxy. To generate valid links in the OData responses, configure "proxy-base-uri" property in the web.xml. If this value is available as system property then define the property value as per the below. To modify the web.xml file, create a deployment-overlay using the CLI with the modified contents: The Red Hat JBoss Data Virtualization OData server implements cursoring logic when the result rows exceed the configured batch size. On every request, only batch-size number of rows are returned. Each such request is considered an active cursor, with a specified amount of idle time specified by skip-token-cache-time. After the cursor is timed out, the cursor will be closed and remaining results will be cleaned up, and will no longer be available for further queries. Since there is no session based tracking of these cursors, if the request for skiptoken comes after the expired time, the original query will be executed again and tries to re-position the cursor to relative absolute potion, however the results are not guaranteed to be same as the underlying sources may have been updated with new information meanwhile. Important The following feature limitations apply: Delta processing is not supported. The data-aggregation extension to the specification is not supported. 8.2.3. Client Tools for Access There are different tools you can use. Depending upon your programming model and needs there are various ways you write your access layer into OData. Here are some suggestions: Your Browser: The OData Explorer is an online tool for browsing an OData data service. Olingo: Is a Java framework that supports OData V4, has both consumer and producer framework. Microsoft has various .Net based libraries. See http://odata.github.io/ Windows Desktop: LINQPad is a wonderful tool for building OData queries interactively. See https://www.linqpad.net/ Shell Scripts: use the CURL tool 8.2.4. 
OData Metadata OData defines its schema using Conceptual Schema Definition Language (CSDL). Every VDB,that is deployed in an ACTIVE state on the Data Virtualzation server exposes its metadata in CSDL format. For example if you want retrieve metadata for your vdb northwind, you need to issue a query like this: http://localhost:8080/odata4/northwind/NW/USDmetadata Since OData schema model is not a relational schema model, Red Hat JBoss Data Virtualization uses the following semantics to map its relational schema model to OData schema model. Table 8.4. Mapping OData to Data Virtualization Relational Entity Mapped OData Entity Table/View EntityType, EntitySet Table Columns EntityType's Properties Primary Key EntityType's Key Properties Foreign Key Navigation Property on EntityType Procedure FunctionImport, Action Import Procedure's Table Return ComplexType By design, Red Hat JBoss Data Virtualization does not define any "embedded" ComplexType in the EntityType. Since OData access is more key based, it is mandatory that every table Red Hat JBoss Data Virtualization exposes through OData has a primary key or at least one unique key. A table which does not have either of these will be dropped out of the USDmetadata.
[ "SELECT * FROM NW.customers", "SELECT * FROM NW.customers where name = 'bob'", "http://localhost:8080/odata4/northwind/NW/customers?USDfilter=name eq 'bob'&USDformat=xml", "SELECT o.* FROM NW.orders o join NW.customers c on o.customer_id = c.id where c.id=1234 and o.orderdate > {ts '2012-12-31 21:23:38'}", "{\"error\":{\"code\":null,\"message\":\"Cannot find EntitySet, Singleton, ActionImport or FunctionImport with name 'xxx'.\"}}", "POST /service.svc/Customers HTTP/1.1 Host: host Content-Type: application/json Accept: application/json { \"CustomerID\": \"AS123X\", \"CompanyName\": \"Contoso Widgets\", \"Address\" : { \"Street\": \"58 Contoso St\", \"City\": \"Seattle\" } }", "PUT /service.svc/Customers('ALFKI') HTTP/1.1 Host: host Content-Type: application/josn Accept: application/json { \"CustomerID\": \"AS123X\", \"CompanyName\": \"Updated Company Name\", \"Address\" : { \"Street\": \"Updated Street\" } }", "DELETE /service.svc/Customers('ALFKI') HTTP/1.1 Host: host Content-Type: application/json Accept: application/json", "<init-param> <param-name>proxy-base-uri</param-name> <param-value>USD{system-property-name}</param-value> </init-param>", "deployment-overlay add --name=myOverlay --content=/WEB-INF/web.xml=/modified/web.xml --deployments=teiid-odata-odata4.war --redeploy-affected" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/ch08s02
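The HTTP examples in this section can be exercised from a shell with curl. This is a sketch under a few assumptions: the VDB is named northwind with a NW.customers table as in the examples above, the credentials belong to a user with the odata role, and the JSON body of the insert must be adjusted to match the actual columns of the customers table.

# Sketch: query and insert through the OData endpoint with curl (HTTP Basic authentication).
# Spaces in the $filter expression are percent-encoded as %20, as OData requires.
curl -u odataUser:password \
  "http://localhost:8080/odata4/northwind/NW/customers?\$filter=name%20eq%20'bob'&\$format=JSON"
# Insert a new row with an HTTP POST; the field names here are illustrative placeholders.
curl -u odataUser:password -X POST \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -d '{"CustomerID": "AS123X", "CompanyName": "Contoso Widgets"}' \
  "http://localhost:8080/odata4/northwind/NW/customers"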
Chapter 5. Developing Operators
Chapter 5. Developing Operators 5.1. About the Operator SDK The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators , in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run. Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication. The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. Why use the Operator SDK? The Operator SDK simplifies this process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring. The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features: High-level APIs and abstractions to write the operational logic more intuitively Tools for scaffolding and code generation to quickly bootstrap a new project Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster Extensions to cover common Operator use cases Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. Note OpenShift Container Platform 4.13 supports Operator SDK 1.28.0. 5.1.1. What are Operators? For an overview about basic Operator concepts and terminology, see Understanding Operators . 5.1.2. Development workflow The Operator SDK provides the following workflow to develop a new Operator: Create an Operator project by using the Operator SDK command-line interface (CLI). Define new resource APIs by adding custom resource definitions (CRDs). Specify resources to watch by using the Operator SDK API. Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources. Use the Operator SDK CLI to build and generate the Operator deployment manifests. Figure 5.1. Operator SDK workflow At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application. 5.1.3. Additional resources Certified Operator Build Guide 5.2. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. 
Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. Note OpenShift Container Platform 4.13 supports Operator SDK 1.28.0. 5.2.1. Installing the Operator SDK CLI on Linux You can install the Operator SDK CLI tool on Linux. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the latest 4.13 directory, download the latest version of the tarball for Linux. Unpack the archive: USD tar xvf operator-sdk-v1.28.0-ocp-linux-x86_64.tar.gz Make the file executable: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH . Tip To check your PATH : USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: USD operator-sdk version Example output operator-sdk version: "v1.28.0-ocp", ... 5.2.2. Installing the Operator SDK CLI on macOS You can install the Operator SDK CLI tool on macOS. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site for the amd64 architecture or the OpenShift mirror site for the arm64 architecture, depending on your system. From the latest 4.13 directory, download the latest version of the tarball for macOS. Unpack the Operator SDK archive for the amd64 architecture by running the following command: USD tar xvf operator-sdk-v1.28.0-ocp-darwin-x86_64.tar.gz Unpack the Operator SDK archive for the arm64 architecture by running the following command: USD tar xvf operator-sdk-v1.28.0-ocp-darwin-aarch64.tar.gz Make the file executable by running the following command: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command: Tip Check your PATH by running the following command: USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available by running the following command: USD operator-sdk version Example output operator-sdk version: "v1.28.0-ocp", ... 5.3. Go-based Operators 5.3.1. Getting started with Operator SDK for Go-based Operators To demonstrate the basics of setting up and running a Go-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Go-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster. 5.3.1.1. Prerequisites Operator SDK CLI installed OpenShift CLI ( oc ) v4.13+ installed Go v1.19+ Logged in to an OpenShift Container Platform 4.13 cluster with oc , using an account that has cluster-admin permissions To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret Additional resources Installing the Operator SDK CLI Getting started with the OpenShift CLI 5.3.1.2. Creating and deploying Go-based Operators You can build and deploy a simple Go-based Operator for Memcached by using the Operator SDK. Procedure Create a project.
Create your project directory: USD mkdir memcached-operator Change into the project directory: USD cd memcached-operator Run the operator-sdk init command to initialize the project: USD operator-sdk init \ --domain=example.com \ --repo=github.com/example-inc/memcached-operator The command uses the Go plugin by default. Create an API. Create a simple Memcached API: USD operator-sdk create api \ --resource=true \ --controller=true \ --group cache \ --version v1 \ --kind Memcached Build and push the Operator image. Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to: USD make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag> Run the Operator. Install the CRD: USD make install Deploy the project to the cluster. Set IMG to the image that you pushed: USD make deploy IMG=<registry>/<user>/<image_name>:<tag> Create a sample custom resource (CR). Create a sample CR: USD oc apply -f config/samples/cache_v1_memcached.yaml \ -n memcached-operator-system Watch the Operator logs to verify that the CR is reconciled: USD oc logs deployment.apps/memcached-operator-controller-manager \ -c manager \ -n memcached-operator-system Delete a CR. Delete a CR by running the following command: USD oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system Clean up. Run the following command to clean up the resources that have been created as part of this procedure: USD make undeploy 5.3.1.3. Next steps See Operator SDK tutorial for Go-based Operators for a more in-depth walkthrough on building a Go-based Operator. 5.3.2. Operator SDK tutorial for Go-based Operators Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. This process is accomplished using two centerpieces of the Operator Framework: Operator SDK The operator-sdk CLI tool and controller-runtime library API Operator Lifecycle Manager (OLM) Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster Note This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators . 5.3.2.1. Prerequisites Operator SDK CLI installed OpenShift CLI ( oc ) v4.13+ installed Go v1.19+ Logged in to an OpenShift Container Platform 4.13 cluster with oc , using an account that has cluster-admin permissions To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret Additional resources Installing the Operator SDK CLI Getting started with the OpenShift CLI 5.3.2.2. Creating a project Use the Operator SDK CLI to create a project called memcached-operator . Procedure Create a directory for the project: USD mkdir -p USDHOME/projects/memcached-operator Change to the directory: USD cd USDHOME/projects/memcached-operator Activate support for Go modules: USD export GO111MODULE=on Run the operator-sdk init command to initialize the project: USD operator-sdk init \ --domain=example.com \ --repo=github.com/example-inc/memcached-operator Note The operator-sdk init command uses the Go plugin by default. The operator-sdk init command generates a go.mod file to be used with Go modules . The --repo flag is required when creating a project outside of USDGOPATH/src/ , because generated files require a valid module path.
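To show why a valid module path matters, the go.mod file that this init invocation produces would begin roughly as follows. This is a sketch under assumptions: the module line is derived from the --repo flag, the Go directive matches the Go v1.19+ prerequisite, and the dependency versions are the ones referenced elsewhere in this chapter; the real file pins many more dependencies and varies between Operator SDK releases.

// Illustrative excerpt only; the generated go.mod is longer.
module github.com/example-inc/memcached-operator

go 1.19

require (
	k8s.io/apimachinery v0.26.2
	k8s.io/client-go v0.26.2
	sigs.k8s.io/controller-runtime v0.14.5
)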
5.3.2.2.1. PROJECT file Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands that are run from the project root, as well as their help output, read this file and are aware that the project type is Go. For example: domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: "3" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} 5.3.2.2.2. About the Manager The main program for the Operator is the main.go file, which initializes and runs the Manager . The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks. The Manager can restrict the namespace that all controllers watch for resources: mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace}) By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty: mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""}) You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces: var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), }) 1 List of namespaces. 2 Creates a Cmd struct to provide shared dependencies and start components. A sketch showing one way to set the namespace option from an environment variable follows the Creating an API and controller procedure below. 5.3.2.2.3. About multi-group APIs Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command: USD operator-sdk edit --multigroup=true This command updates the PROJECT file, which should look like the following example: domain: example.com layout: go.kubebuilder.io/v3 multigroup: true ... For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and the controllers are created in the controllers/<group>/ directory. The Dockerfile is then updated accordingly. Additional resource For more details on migrating to a multi-group project, see the Kubebuilder documentation . 5.3.2.3. Creating an API and controller Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller. Procedure Run the following command to create an API with group cache , version v1 , and kind Memcached : USD operator-sdk create api \ --group=cache \ --version=v1 \ --kind=Memcached When prompted, enter y for creating both the resource and controller: Create Resource [y/n] y Create Controller [y/n] y Example output Writing scaffold for you to edit... api/v1/memcached_types.go controllers/memcached_controller.go ... This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go .
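Relating the newly generated controller back to the About the Manager discussion earlier in this tutorial: when the reconciler is registered in main.go, the namespace option on the Manager determines which namespaces its cache watches. The following sketch shows one way that option could be wired from an environment variable; the WATCH_NAMESPACE name and the surrounding code are assumptions for illustration, not the exact main.go that the SDK scaffolds.

package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	// Illustrative: an empty WATCH_NAMESPACE leaves the namespace option empty,
	// so the Manager watches all namespaces; a single value restricts every
	// controller to that namespace.
	namespace := os.Getenv("WATCH_NAMESPACE")

	cfg := ctrl.GetConfigOrDie()
	mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})
	if err != nil {
		os.Exit(1)
	}

	// ... call SetupWithManager for the generated reconcilers here ...

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}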
5.3.2.3.1. Defining the API Define the API for the Memcached custom resource (CR). Procedure Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status : // MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:"size"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:"nodes"` } Update the generated code for the resource type: USD make generate Tip After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type. The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement. 5.3.2.3.2. Generating CRD manifests After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, you can generate CRD manifests. Procedure Run the following command to generate and update CRD manifests: USD make manifests This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file. 5.3.2.3.2.1. About OpenAPI validation OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated. Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix. Additional resources For more details on the usage of markers in API code, see the following Kubebuilder documentation: CRD generation Markers List of OpenAPIv3 validation markers For more details about OpenAPIv3 validation schemas in CRDs, see the Kubernetes documentation . 5.3.2.4. Implementing the controller After creating a new API and controller, you can implement the controller logic. Procedure For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation: Example 5.1. Example memcached_controller.go /*
[ "tar xvf operator-sdk-v1.28.0-ocp-linux-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.28.0-ocp\",", "tar xvf operator-sdk-v1.28.0-ocp-darwin-x86_64.tar.gz", "tar xvf operator-sdk-v1.28.0-ocp-darwin-aarch64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.28.0-ocp\",", "mkdir memcached-operator", "cd memcached-operator", "operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator", "operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system", "oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "make undeploy", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "export GO111MODULE=on", "operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator", "domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}", "mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})", "mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})", "var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })", "operator-sdk edit --multigroup=true", "domain: example.com layout: go.kubebuilder.io/v3 multigroup: true", "operator-sdk create api --group=cache --version=v1 --kind=Memcached", "Create Resource [y/n] y Create Controller [y/n] y", "Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go", "// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }", "make generate", "make manifests", "/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }", "import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }", "func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }", "import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }", "// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil", "import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil", "// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }", "import ( \"github.com/operator-framework/operator-lib/proxy\" )", "for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) 
}", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make install run", "2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project memcached-operator-system", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get memcached/memcached-sample -o yaml", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7", "oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m", "oc delete -f config/samples/cache_v1_memcached.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "... containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13 1 ...", "k8s.io/api v0.26.2 k8s.io/apiextensions-apiserver v0.26.2 k8s.io/apimachinery v0.26.2 k8s.io/cli-runtime v0.26.2 k8s.io/client-go v0.26.2 k8s.io/kubectl v0.26.2 sigs.k8s.io/controller-runtime v0.14.5 sigs.k8s.io/controller-tools v0.11.3 sigs.k8s.io/kubebuilder/v3 v3.9.1", "go mod tidy", "- build: generate fmt vet ## Build manager binary. 
+ build: manifests generate fmt vet ## Build manager binary.", "mkdir memcached-operator", "cd memcached-operator", "operator-sdk init --plugins=ansible --domain=example.com", "operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system", "I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}", "oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "make undeploy", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "operator-sdk init --plugins=ansible --domain=example.com", "domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"", "operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1", "--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211", "--- defaults file for Memcached size: 1", "apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3", "env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make install run", "{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} 
{\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project memcached-operator-system", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get memcached/memcached-sample -o yaml", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7", "oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m", "oc delete -f config/samples/cache_v1_memcached.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "FROM registry.redhat.io/openshift4/ose-ansible-operator:v4.13 1", "... 
containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13 1 ...", ".PHONY: run ANSIBLE_ROLES_PATH?=\"USD(shell pwd)/roles\" run: ansible-operator ## Run against the configured Kubernetes cluster in ~/.kube/config USD(ANSIBLE_OPERATOR) run", "- name: kubernetes.core version: \"2.3.1\"", "- name: kubernetes.core version: \"2.4.0\"", "apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"", "- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false", "- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False", "apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"", "{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }", "--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"", "sudo dnf install ansible", "pip3 install openshift", "ansible-galaxy collection install community.kubernetes", "ansible-galaxy collection install -r requirements.yml", "--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2", "--- state: present", "--- - hosts: localhost roles: - <kind>", "ansible-playbook playbook.yml", "[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "oc get configmaps", "NAME DATA AGE example-config 0 2m1s", "ansible-playbook playbook.yml --extra-vars state=absent", "[WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "oc get configmaps", "apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"", "make install", "/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created", "make run", "/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}", "apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"", "oc apply -f config/samples/<gvk>.yaml", "oc get configmaps", "NAME STATUS AGE example-config Active 3s", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent", "oc apply -f config/samples/<gvk>.yaml", "oc get configmap", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "oc logs 
deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2", "{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}", "containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"", "apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4", "status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running", "- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false", "- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data", "collections: - operator_sdk.util", "k8s_status: status: 
key1: value1", "mkdir nginx-operator", "cd nginx-operator", "operator-sdk init --plugins=helm", "operator-sdk create api --group demo --version v1 --kind Nginx", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample", "oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system", "oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system", "oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system", "make undeploy", "mkdir -p USDHOME/projects/nginx-operator", "cd USDHOME/projects/nginx-operator", "operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx", "operator-sdk init --plugins helm --help", "domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"", "Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080", "- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY", "proxy: http: \"\" https: \"\" no_proxy: \"\"", "containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make install run", "{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME 
READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project nginx-operator-system", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3", "oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample", "oc apply -f config/samples/demo_v1_nginx.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get nginx/nginx-sample -o yaml", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7", "oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m", "oc delete -f config/samples/demo_v1_nginx.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "FROM registry.redhat.io/openshift4/ose-helm-operator:v4.13 1", "... containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13 1 ...", "apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2", "{{ .Values.replicaCount }}", "oc get Tomcats --all-namespaces", "mkdir -p USDHOME/github.com/example/memcached-operator", "cd USDHOME/github.com/example/memcached-operator", "operator-sdk init --plugins=hybrid.helm.sdk.operatorframework.io --project-version=\"3\" --domain my.domain --repo=github.com/example/memcached-operator", "operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --group cache --version v1 --kind Memcached", "operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help", "Use the 'create api' subcommand to add watches to this file. - group: cache.my.domain version: v1 kind: Memcached chart: helm-charts/memcached #+kubebuilder:scaffold:watch", "// Operator's main.go // With the help of helpers provided in the library, the reconciler can be // configured here before starting the controller with this reconciler. 
reconciler := reconciler.New( reconciler.WithChart(*chart), reconciler.WithGroupVersionKind(gvk), ) if err := reconciler.SetupWithManager(mgr); err != nil { panic(fmt.Sprintf(\"unable to create reconciler: %s\", err)) }", "operator-sdk create api --group=cache --version v1 --kind MemcachedBackup --resource --controller --plugins=go/v3", "Create Resource [y/n] y Create Controller [y/n] y", "// MemcachedBackupSpec defines the desired state of MemcachedBackup type MemcachedBackupSpec struct { // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster // Important: Run \"make\" to regenerate code after modifying this file //+kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedBackupStatus defines the observed state of MemcachedBackup type MemcachedBackupStatus struct { // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster // Important: Run \"make\" to regenerate code after modifying this file // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }", "make generate", "make manifests", "for _, w := range ws { // Register controller with the factory reconcilePeriod := defaultReconcilePeriod if w.ReconcilePeriod != nil { reconcilePeriod = w.ReconcilePeriod.Duration } maxConcurrentReconciles := defaultMaxConcurrentReconciles if w.MaxConcurrentReconciles != nil { maxConcurrentReconciles = *w.MaxConcurrentReconciles } r, err := reconciler.New( reconciler.WithChart(*w.Chart), reconciler.WithGroupVersionKind(w.GroupVersionKind), reconciler.WithOverrideValues(w.OverrideValues), reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources), reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles), reconciler.WithReconcilePeriod(reconcilePeriod), reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...), reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...), reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...), )", "// Setup manager with Go API if err = (&controllers.MemcachedBackupReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), }).SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"MemcachedBackup\") os.Exit(1) } // Setup manager with Helm API for _, w := range ws { if err := r.SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"Helm\") os.Exit(1) } setupLog.Info(\"configured watch\", \"gvk\", w.GroupVersionKind, \"chartPath\", w.ChartPath, \"maxConcurrentReconciles\", maxConcurrentReconciles, \"reconcilePeriod\", reconcilePeriod) } // Start the manager if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { setupLog.Error(err, \"problem running manager\") os.Exit(1) }", "--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: manager-role rules: - apiGroups: - \"\" resources: - namespaces verbs: - get - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/finalizers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - \"\" resources: - pods - services - services/finalizers - endpoints - persistentvolumeclaims - events - configmaps - secrets - 
serviceaccounts verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/status verbs: - get - patch - update - apiGroups: - policy resources: - events - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcacheds - memcacheds/status - memcacheds/finalizers verbs: - create - delete - get - list - patch - update - watch", "make install run", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "oc project <project_name>-system", "apiVersion: cache.my.domain/v1 kind: Memcached metadata: name: memcached-sample spec: # Default values copied from <project_dir>/helm-charts/memcached/values.yaml affinity: {} autoscaling: enabled: false maxReplicas: 100 minReplicas: 1 targetCPUUtilizationPercentage: 80 fullnameOverride: \"\" image: pullPolicy: IfNotPresent repository: nginx tag: \"\" imagePullSecrets: [] ingress: annotations: {} className: \"\" enabled: false hosts: - host: chart-example.local paths: - path: / pathType: ImplementationSpecific tls: [] nameOverride: \"\" nodeSelector: {} podAnnotations: {} podSecurityContext: {} replicaCount: 3 resources: {} securityContext: {} service: port: 80 type: ClusterIP serviceAccount: annotations: {} create: true name: \"\" tolerations: []", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 18m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 18m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 18m", "apiVersion: cache.my.domain/v1 kind: MemcachedBackup metadata: name: memcachedbackup-sample spec: size: 2", "oc apply -f config/samples/cache_v1_memcachedbackup.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE memcachedbackup-sample-8649699989-4bbzg 1/1 Running 0 22m memcachedbackup-sample-8649699989-mq6mx 1/1 Running 0 22m", "oc delete -f config/samples/cache_v1_memcached.yaml", "oc delete -f config/samples/cache_v1_memcachedbackup.yaml", "make undeploy", "... containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13 1 ...", "mkdir memcached-operator", "cd memcached-operator", "operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator", "operator-sdk create api --plugins quarkus --group cache --version v1 --kind Memcached", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system", "oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "make undeploy", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator", "domain: example.com layout: - quarkus.javaoperatorsdk.io/v1-alpha projectName: memcached-operator version: \"3\"", "operator-sdk create api --plugins=quarkus \\ 1 --group=cache \\ 2 --version=v1 \\ 3 --kind=Memcached 4", "tree", ". 
├── Makefile ├── PROJECT ├── pom.xml └── src └── main ├── java │ └── com │ └── example │ ├── Memcached.java │ ├── MemcachedReconciler.java │ ├── MemcachedSpec.java │ └── MemcachedStatus.java └── resources └── application.properties 6 directories, 8 files", "public class MemcachedSpec { private Integer size; public Integer getSize() { return size; } public void setSize(Integer size) { this.size = size; } }", "import java.util.ArrayList; import java.util.List; public class MemcachedStatus { // Add Status information here // Nodes are the names of the memcached pods private List<String> nodes; public List<String> getNodes() { if (nodes == null) { nodes = new ArrayList<>(); } return nodes; } public void setNodes(List<String> nodes) { this.nodes = nodes; } }", "@Version(\"v1\") @Group(\"cache.example.com\") public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}", "mvn clean install", "cat target/kubernetes/memcacheds.cache.example.com-v1.yaml", "Generated by Fabric8 CRDGenerator, manual edits might get overwritten! apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: memcacheds.cache.example.com spec: group: cache.example.com names: kind: Memcached plural: memcacheds singular: memcached scope: Namespaced versions: - name: v1 schema: openAPIV3Schema: properties: spec: properties: size: type: integer type: object status: properties: nodes: items: type: string type: array type: object type: object served: true storage: true subresources: status: {}", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: # Add spec fields here size: 1", "<dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.2.2</version> </dependency>", "package com.example; import io.fabric8.kubernetes.client.KubernetesClient; import io.javaoperatorsdk.operator.api.reconciler.Context; import io.javaoperatorsdk.operator.api.reconciler.Reconciler; import io.javaoperatorsdk.operator.api.reconciler.UpdateControl; import io.fabric8.kubernetes.api.model.ContainerBuilder; import io.fabric8.kubernetes.api.model.ContainerPortBuilder; import io.fabric8.kubernetes.api.model.LabelSelectorBuilder; import io.fabric8.kubernetes.api.model.ObjectMetaBuilder; import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder; import io.fabric8.kubernetes.api.model.Pod; import io.fabric8.kubernetes.api.model.PodSpecBuilder; import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder; import io.fabric8.kubernetes.api.model.apps.Deployment; import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder; import org.apache.commons.collections.CollectionUtils; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.stream.Collectors; public class MemcachedReconciler implements Reconciler<Memcached> { private final KubernetesClient client; public MemcachedReconciler(KubernetesClient client) { this.client = client; } // TODO Fill in the rest of the reconciler @Override public UpdateControl<Memcached> reconcile( Memcached resource, Context context) { // TODO: fill in logic Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get(); if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); } 
int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize(); if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); } List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList()); if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); } return UpdateControl.noUpdate(); } private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; } private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; } }", "Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get();", "if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); }", "int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize();", "if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); }", "List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());", "if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); }", "private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; }", "private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() 
.withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; }", "mvn clean install", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 11.193 s [INFO] Finished at: 2021-05-26T12:16:54-04:00 [INFO] ------------------------------------------------------------------------", "oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml", "customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"", "oc apply -f rbac.yaml", "java -jar target/quarkus-app/quarkus-run.jar", "kubectl apply -f memcached-sample.yaml", "memcached.cache.example.com/memcached-sample created", "oc get all", "NAME READY STATUS RESTARTS AGE pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml", "customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f rbac.yaml", "oc get all -n default", "NAME READY UP-TO-DATE AVAILABLE AGE pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm 0/1 Running 0 18s", "oc apply -f memcached-sample.yaml", "memcached.cache.example.com/memcached-sample created", "oc get all", "NAME READY STATUS RESTARTS AGE pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt 1/1 Running 1 79s pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "... 
containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13 1 ...", "operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'", "operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'", "operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'", "operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]' operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'", "spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2", "// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{", "spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211", "- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2", "relatedImage: \"\"", "containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3", "BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2", "make bundle USE_IMAGE_DIGESTS=true", "metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'", "labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2", "labels: operatorframework.io/os.linux: supported", "labels: operatorframework.io/arch.amd64: supported", "labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2", "metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1", "metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }", "module github.com/example-inc/memcached-operator go 1.15 require ( k8s.io/apimachinery v0.19.2 k8s.io/client-go v0.19.2 sigs.k8s.io/controller-runtime v0.7.0 operator-framework/operator-lib v0.3.0 )", "import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, 
\"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5", "- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. 
No replication of data.", "required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.", "versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true", "customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster", "versions: - name: v1alpha1 served: false 1 storage: true", "versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2", "versions: - name: v1beta1 served: true storage: true", "metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>", "make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>", "make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>", "IMAGE_TAG_BASE=quay.io/example/my-operator", "make bundle-build bundle-push catalog-build catalog-push", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: 
securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m", "oc get catalogsource", "NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>", "\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1", "oc get og", "NAME AGE my-test 4h40m", "oc get csv", "NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded", "oc get pods", "NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m", "operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1", "INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"", "operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2", "INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"", "operator-sdk cleanup memcached-operator", "apiVersion: 
operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1", "com.redhat.openshift.versions: \"v4.7-v4.9\" 1", "LABEL com.redhat.openshift.versions=\"<versions>\" 1", "spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"", "install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default", "spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.", "operator-sdk scorecard <bundle_dir_or_image> [flags]", "operator-sdk scorecard -h", "./bundle └── tests └── scorecard └── config.yaml", "kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.28.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.28.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test", "make bundle", "operator-sdk scorecard <bundle_dir_or_image>", "{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.28.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }", "-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.28.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" 
name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'", "apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.28.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.28.0 labels: suite: olm test: olm-bundle-validation-test", "// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. 
func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }", "operator-sdk bundle validate <bundle_dir_or_image> <flags>", "./bundle ├── manifests │ ├── cache.my.domain_memcacheds.yaml │ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml", "INFO[0000] All validation tests have completed successfully", "ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV", "WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully", "operator-sdk bundle validate -h", "operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>", "operator-sdk bundle validate ./bundle", "operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>", "operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>", "ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description", "// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)", "operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", 
infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)", "../prometheus", "package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }", "func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: memcached-operator-system rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring", "oc apply -f config/prometheus/role.yaml", "oc apply -f config/prometheus/rolebinding.yaml", "oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"", "operator-sdk init --plugins=ansible --domain=testmetrics.com", "operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role", "--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. 
gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1", "oc create -f config/samples/metrics_v1_testmetrics.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m", "oc get ep", "NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m", "token=`oc create token prometheus-k8s -n openshift-monitoring`", "oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter", "HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2", "oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge", "HELP my_gauge_metric Create my gauge and set it to 2.", "oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe", "HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary", "import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }", "import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }", "cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }", "err := cfg.Execute(ctx)", "packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml", "bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml", "operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3", "operator-sdk run bundle <bundle_image_name>:<tag>", "INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: 
etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"", "operator-sdk <command> [<subcommand>] [<argument>] [<flags>]", "operator-sdk completion bash", "bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operators/developing-operators
3.2. Configuring an LVM Volume with an ext4 File System
3.2. Configuring an LVM Volume with an ext4 File System This use case requires that you create an LVM logical volume on storage that is shared between the nodes of the cluster. The following procedure creates an LVM logical volume and then creates an ext4 file system on that volume. In this example, the shared partition /dev/sdb1 is used to store the LVM physical volume from which the LVM logical volume will be created. Note LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only. Since the /dev/sdb1 partition is storage that is shared, you perform this procedure on one node only: Create an LVM physical volume on partition /dev/sdb1. Create the volume group my_vg that consists of the physical volume /dev/sdb1. Create a logical volume using the volume group my_vg. You can use the lvs command to display the logical volume. Create an ext4 file system on the logical volume my_lv.
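The commands below are a consolidated sketch of the procedure just described, assuming the shared partition is /dev/sdb1 and the names my_vg and my_lv from the example; the final blkid check is an optional verification step that is not part of the original listing.

```sh
# Run on one cluster node only, since /dev/sdb1 is shared storage.
pvcreate /dev/sdb1                  # create the LVM physical volume
vgcreate my_vg /dev/sdb1            # create the volume group from that physical volume
lvcreate -L450 -n my_lv my_vg       # create a 450 MiB logical volume
lvs                                 # display the new logical volume
mkfs.ext4 /dev/my_vg/my_lv          # create an ext4 file system on the logical volume
blkid /dev/my_vg/my_lv              # optional: confirm the file system type and UUID
```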
[ "pvcreate /dev/sdb1 Physical volume \"/dev/sdb1\" successfully created", "vgcreate my_vg /dev/sdb1 Volume group \"my_vg\" successfully created", "lvcreate -L450 -n my_lv my_vg Rounding up size to full physical extent 452.00 MiB Logical volume \"my_lv\" created", "lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert my_lv my_vg -wi-a---- 452.00m", "mkfs.ext4 /dev/my_vg/my_lv mke2fs 1.42.7 (21-Jan-2013) Filesystem label= OS type: Linux" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/s1-lvmsetupnfs-haaa
5.2.11. /proc/interrupts
5.2.11. /proc/interrupts This file records the number of interrupts per IRQ on the x86 architecture. A standard /proc/interrupts looks similar to the following: For a multi-processor machine, this file may look slightly different: The first column refers to the IRQ number. Each CPU in the system has its own column and its own number of interrupts per IRQ. The next column reports the type of interrupt, and the last column contains the name of the device that is located at that IRQ. Each of the types of interrupts seen in this file, which are architecture-specific, means something different. For x86 machines, the following values are common: XT-PIC - These are the old AT computer interrupts. IO-APIC-edge - The voltage signal on this interrupt transitions from low to high, creating an edge, where the interrupt occurs and is only signaled once. This kind of interrupt, as well as the IO-APIC-level interrupt, is only seen on systems with processors from the 586 family and higher. IO-APIC-level - Generates interrupts when its voltage signal goes high and continues until the signal goes low again.
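As a quick illustration of how the counters behave, the following shell sketch uses only standard utilities (cat, grep, watch); the eth0 and ide0 device names are taken from the sample output below and are placeholders for whatever devices appear on your system.

```sh
# Show the full interrupt table: one row per IRQ, one counter column per CPU
cat /proc/interrupts

# Refresh every second and show only the rows for the eth0 and ide0 devices,
# so you can watch their per-CPU interrupt counters increase as the devices are used
watch -n 1 'grep -E "eth0|ide0" /proc/interrupts'
```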
[ "CPU0 0: 80448940 XT-PIC timer 1: 174412 XT-PIC keyboard 2: 0 XT-PIC cascade 8: 1 XT-PIC rtc 10: 410964 XT-PIC eth0 12: 60330 XT-PIC PS/2 Mouse 14: 1314121 XT-PIC ide0 15: 5195422 XT-PIC ide1 NMI: 0 ERR: 0", "CPU0 CPU1 0: 1366814704 0 XT-PIC timer 1: 128 340 IO-APIC-edge keyboard 2: 0 0 XT-PIC cascade 8: 0 1 IO-APIC-edge rtc 12: 5323 5793 IO-APIC-edge PS/2 Mouse 13: 1 0 XT-PIC fpu 16: 11184294 15940594 IO-APIC-level Intel EtherExpress Pro 10/100 Ethernet 20: 8450043 11120093 IO-APIC-level megaraid 30: 10432 10722 IO-APIC-level aic7xxx 31: 23 22 IO-APIC-level aic7xxx NMI: 0 ERR: 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-interrupts
Chapter 8. Installation configuration parameters for IBM Z and IBM LinuxONE
Chapter 8. Installation configuration parameters for IBM Z and IBM LinuxONE Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 8.1. Available installation configuration parameters for IBM Z The following tables specify the required, optional, and IBM Z-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Consider the following information before you configure network parameters for your cluster: If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you deployed nodes in an OpenShift Container Platform cluster with a network that supports both IPv4 and non-link-local IPv6 addresses, configure your cluster to use a dual-stack network. For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. This ensures that in a multiple network interface controller (NIC) environment, a cluster can detect what NIC to use based on the available network interface. For more information, see "OVN-Kubernetes IPv6 and dual-stack limitations" in About the OVN-Kubernetes network plugin . To prevent network connectivity issues, do not install a single-stack IPv4 cluster on a host that supports dual-stack networking. 
If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Table 8.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . 
The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode.
If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. .
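To tie the parameters above together, the following is a minimal, illustrative install-config.yaml sketch for an IBM Z cluster, written here as a shell heredoc. The domain, CIDRs, and replica counts are the documented examples or defaults, the pull secret and SSH key are placeholders, and platform: {} is only one of the allowed values, so treat this as a starting point rather than a validated configuration.

```sh
cat <<'EOF' > install-config.yaml
apiVersion: v1                     # current install-config API version
baseDomain: example.com            # base DNS domain used for cluster routes
metadata:
  name: dev                        # cluster name; full DNS name is dev.example.com
compute:                           # worker machine pool
- name: worker
  architecture: s390x
  hyperthreading: Enabled
  replicas: 3
controlPlane:                      # control plane machine pool
  name: master
  architecture: s390x
  hyperthreading: Enabled
  replicas: 3
networking:
  networkType: OVNKubernetes       # default network plugin
  clusterNetwork:
  - cidr: 10.128.0.0/14            # default pod network
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16                  # default service network
  machineNetwork:
  - cidr: 10.0.0.0/16              # default machine network
platform: {}                       # placeholder; replace with your platform section
fips: false                        # FIPS mode disabled (the default)
pullSecret: '<pull_secret>'        # paste the pull secret JSON here
sshKey: '<ssh_public_key>'         # SSH public key for access to cluster machines
EOF
```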
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_z_and_ibm_linuxone/installation-config-parameters-ibm-z
3.8 Release Notes
3.8 Release Notes Red Hat Software Collections 3 Release Notes for Red Hat Software Collections 3.8 Lenka Spackova Red Hat Customer Content Services [email protected] Jaromir Hradilek Red Hat Customer Content Services Eliska Slobodova Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.8_release_notes/index
Authentication and authorization
Authentication and authorization OpenShift Container Platform 4.10 Configuring user authentication and access controls for users and services Red Hat OpenShift Documentation Team
[ "oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1", "oc apply -f </path/to/file.yaml>", "oc describe oauth.config.openshift.io/cluster", "Spec: Token Config: Access Token Max Age Seconds: 172800", "oc edit oauth cluster", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: spec: tokenConfig: accessTokenInactivityTimeout: 400s 1", "oc get clusteroperators authentication", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.10.0 True False False 145m", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.10.0 True False False 145m", "error: You must be logged in to the server (Unauthorized)", "oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "{ \"issuer\": \"https://<namespace_route>\", 1 \"authorization_endpoint\": \"https://<namespace_route>/oauth/authorize\", 2 \"token_endpoint\": \"https://<namespace_route>/oauth/token\", 3 \"scopes_supported\": [ 4 \"user:full\", \"user:info\", \"user:check-access\", \"user:list-scoped-projects\", \"user:list-projects\" ], \"response_types_supported\": [ 5 \"code\", \"token\" ], \"grant_types_supported\": [ 6 \"authorization_code\", \"implicit\" ], \"code_challenge_methods_supported\": [ 7 \"plain\", \"S256\" ] }", "oc get events | grep ServiceAccount", "1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "oc describe sa/proxy | grep -A5 Events", "Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io \"<name>\" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]", "Reason Message NoSAOAuthRedirectURIs [no kind \"<name>\" is registered for version \"v1\", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using 
serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]", "Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens", "oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host", "oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: demo 1 secret: \"...\" 2 redirectURIs: - \"http://www.example.com/\" 3 grantMethod: prompt 4 ')", "oc edit oauthclient <oauth_client> 1", "apiVersion: oauth.openshift.io/v1 grantMethod: auto kind: OAuthClient metadata: accessTokenInactivityTimeoutSeconds: 600 1", "oc get useroauthaccesstokens", "NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full", "oc get useroauthaccesstokens --field-selector=clientName=\"console\"", "NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full", "oc describe useroauthaccesstokens <token_name>", "Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>", "oc delete useroauthaccesstokens <token_name>", "useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted", "oc delete secrets kubeadmin -n kube-system", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "htpasswd -c -B -b </path/to/users.htpasswd> <username> <password>", "htpasswd -c -B -b users.htpasswd <username> <password>", "Adding password for user user1", "htpasswd -B -b </path/to/users.htpasswd> <user_name> <password>", "> htpasswd.exe -c -B -b <\\path\\to\\users.htpasswd> <username> <password>", "> htpasswd.exe -c -B -b users.htpasswd <username> <password>", "Adding password for user user1", "> htpasswd.exe -b <\\path\\to\\users.htpasswd> <username> <password>", "oc create secret generic htpass-secret --from-file=htpasswd=<path_to_users.htpasswd> -n openshift-config 1", "apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>", "apiVersion: config.openshift.io/v1 kind: OAuth 
metadata: name: cluster spec: identityProviders: - name: my_htpasswd_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd", "htpasswd -bB users.htpasswd <username> <password>", "Adding password for user <username>", "htpasswd -D users.htpasswd <username>", "Deleting password for user <username>", "oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -", "apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>", "oc delete user <username>", "user.user.openshift.io \"<username>\" deleted", "oc delete identity my_htpasswd_provider:<username>", "identity.user.openshift.io \"my_htpasswd_provider:<username>\" deleted", "oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: keystoneidp 1 mappingMethod: claim 2 type: Keystone keystone: domainName: default 3 url: https://keystone.example.com:5000 4 ca: 5 name: ca-config-map tlsClientCert: 6 name: client-cert-secret tlsClientKey: 7 name: client-key-secret", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "ldap://host:port/basedn?attribute?scope?filter", "(&(<filter>)(<attribute>=<username>))", "ldap://ldap.example.com/o=Acme?cn?sub?(enabled=true)", "oc create secret generic ldap-secret --from-literal=bindPassword=<secret> -n openshift-config 1", "apiVersion: v1 kind: Secret metadata: name: ldap-secret namespace: openshift-config type: Opaque data: bindPassword: <base64_encoded_bind_password>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: ldapidp 1 mappingMethod: claim 2 type: LDAP ldap: attributes: id: 3 - dn email: 4 - mail name: 5 - cn preferredUsername: 6 - uid bindDN: \"\" 7 bindPassword: 8 name: ldap-secret ca: 9 name: ca-config-map insecure: false 10 url: \"ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid\" 11", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "{\"error\":\"Error message\"}", "{\"sub\":\"userid\"} 1", "{\"sub\":\"userid\", \"name\": \"User Name\", ...}", "{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}", "{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}", "oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>", "oc create configmap ca-config-map 
--from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: basicidp 1 mappingMethod: claim 2 type: BasicAuth basicAuth: url: https://www.example.com/remote-idp 3 ca: 4 name: ca-config-map tlsClientCert: 5 name: client-cert-secret tlsClientKey: 6 name: client-key-secret", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "<VirtualHost *:443> # CGI Scripts in here DocumentRoot /var/www/cgi-bin # SSL Directives SSLEngine on SSLCipherSuite PROFILE=SYSTEM SSLProxyCipherSuite PROFILE=SYSTEM SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key # Configure HTTPD to execute scripts ScriptAlias /basic /var/www/cgi-bin # Handles a failed login attempt ErrorDocument 401 /basic/fail.cgi # Handles authentication <Location /basic/login.cgi> AuthType Basic AuthName \"Please Log In\" AuthBasicProvider file AuthUserFile /etc/httpd/conf/passwords Require valid-user </Location> </VirtualHost>", "#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"sub\":\"userid\", \"name\":\"'USDREMOTE_USER'\"}' exit 0", "#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"error\": \"Login failure\"}' exit 0", "curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp", "{\"sub\":\"userid\"}", "{\"sub\":\"userid\", \"name\": \"User Name\", ...}", "{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}", "{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: requestheaderidp 1 mappingMethod: claim 2 type: RequestHeader requestHeader: challengeURL: \"https://www.example.com/challenging-proxy/oauth/authorize?USD{query}\" 3 loginURL: \"https://www.example.com/login-proxy/oauth/authorize?USD{query}\" 4 ca: 5 name: ca-config-map clientCommonNames: 6 - my-auth-proxy headers: 7 - X-Remote-User - SSO-User emailHeaders: 8 - X-Remote-User-Email nameHeaders: 9 - X-Remote-User-Display-Name preferredUsernameHeaders: 10 - X-Remote-User-Login", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config 1", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "LoadModule request_module modules/mod_request.so LoadModule auth_gssapi_module modules/mod_auth_gssapi.so Some Apache configurations might require these modules. LoadModule auth_form_module modules/mod_auth_form.so LoadModule session_module modules/mod_session.so Nothing needs to be served over HTTP. This virtual host simply redirects to HTTPS. <VirtualHost *:80> DocumentRoot /var/www/html RewriteEngine On RewriteRule ^(.*)USD https://%{HTTP_HOST}USD1 [R,L] </VirtualHost> <VirtualHost *:443> # This needs to match the certificates you generated. 
See the CN and X509v3 # Subject Alternative Name in the output of: # openssl x509 -text -in /etc/pki/tls/certs/localhost.crt ServerName www.example.com DocumentRoot /var/www/html SSLEngine on SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key SSLCACertificateFile /etc/pki/CA/certs/ca.crt SSLProxyEngine on SSLProxyCACertificateFile /etc/pki/CA/certs/ca.crt # It is critical to enforce client certificates. Otherwise, requests can # spoof the X-Remote-User header by accessing the /oauth/authorize endpoint # directly. SSLProxyMachineCertificateFile /etc/pki/tls/certs/authproxy.pem # To use the challenging-proxy, an X-Csrf-Token must be present. RewriteCond %{REQUEST_URI} ^/challenging-proxy RewriteCond %{HTTP:X-Csrf-Token} ^USD [NC] RewriteRule ^.* - [F,L] <Location /challenging-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" # For Kerberos AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. GssapiBasicAuth On # For ldap: # AuthBasicProvider ldap # AuthLDAPURL \"ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)\" </Location> <Location /login-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s env=REMOTE_USER GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. 
GssapiBasicAuth On ErrorDocument 401 /login.html </Location> </VirtualHost> RequestHeader unset X-Remote-User", "identityProviders: - name: requestheaderidp type: RequestHeader requestHeader: challengeURL: \"https://<namespace_route>/challenging-proxy/oauth/authorize?USD{query}\" loginURL: \"https://<namespace_route>/login-proxy/oauth/authorize?USD{query}\" ca: name: ca-config-map clientCommonNames: - my-auth-proxy headers: - X-Remote-User", "curl -L -k -H \"X-Remote-User: joe\" --cert /etc/pki/tls/certs/authproxy.pem https://<namespace_route>/oauth/token/request", "curl -L -k -H \"X-Remote-User: joe\" https://<namespace_route>/oauth/token/request", "curl -k -v -H 'X-Csrf-Token: 1' https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token", "curl -k -v -H 'X-Csrf-Token: 1' <challengeURL_redirect + query>", "kdestroy -c cache_name 1", "oc login -u <username>", "oc logout", "kinit", "oc login", "https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>", "https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: githubidp 1 mappingMethod: claim 2 type: GitHub github: ca: 3 name: ca-config-map clientID: {...} 4 clientSecret: 5 name: github-secret hostname: ... 
6 organizations: 7 - myorganization1 - myorganization2 teams: 8 - myorganization1/team-a - myorganization2/team-b", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: gitlabidp 1 mappingMethod: claim 2 type: GitLab gitlab: clientID: {...} 3 clientSecret: 4 name: gitlab-secret url: https://gitlab.com 5 ca: 6 name: ca-config-map", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: googleidp 1 mappingMethod: claim 2 type: Google google: clientID: {...} 3 clientSecret: 4 name: google-secret hostedDomain: \"example.com\" 5", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp 1 mappingMethod: claim 2 type: OpenID openID: clientID: ... 
3 clientSecret: 4 name: idp-secret claims: 5 preferredUsername: - preferred_username name: - name email: - email groups: - groups issuer: https://www.idp-issuer.com 6", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp mappingMethod: claim type: OpenID openID: clientID: clientSecret: name: idp-secret ca: 1 name: ca-config-map extraScopes: 2 - email - profile extraAuthorizeParameters: 3 include_granted_scopes: \"true\" claims: preferredUsername: 4 - preferred_username - email name: 5 - nickname - given_name - name email: 6 - custom_email_claim - email groups: 7 - groups issuer: https://www.idp-issuer.com", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc login -u <identity_provider_username> --server=<api_server_url_and_port>", "oc whoami", "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io 
[] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] 
deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] 
pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: 
rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. 
Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>", "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system", "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44", "oc policy add-role-to-user view system:serviceaccount:top-secret:robot", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret", "oc policy add-role-to-user <role_name> -z <service_account_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>", "oc policy add-role-to-group view system:serviceaccounts -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts", "oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers", "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44", "oc describe secret <secret_name>", "oc describe secret 
robot-token-uzkbh -n top-secret", "Name: robot-token-uzkbh Labels: <none> Annotations: kubernetes.io/service-account.name=robot,kubernetes.io/service-account.uid=49f19e2e-16c6-11e5-afdc-3c970e4b7ffe Type: kubernetes.io/service-account-token Data token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9", "oc login --token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9", "Logged into \"https://server:8443\" as \"system:serviceaccount:top-secret:robot\" using the token provided. You don't have any projects. You can try to create a new project, by running USD oc new-project <projectname>", "oc whoami", "system:serviceaccount:top-secret:robot", "oc sa get-token <service_account_name>", "serviceaccounts.openshift.io/oauth-redirecturi.<name>", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"", "oc edit authentications cluster", "spec: serviceAccountIssuer: https://test.default.svc 1", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4", "oc create -f pod-projected-svc-token.yaml", "runAsUser: type: MustRunAs uid: <id>", "runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue>", "runAsUser: type: MustRunAsNonRoot", 
"runAsUser: type: RunAsAny", "allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: 5 - KILL - MKNOD - SETUID - SETGID runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: - '*'", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group", "requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT", "oc create -f scc_admin.yaml", "securitycontextconstraints \"scc-admin\" created", "oc get scc scc-admin", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]", "oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: role-name 1 namespace: namespace 2 rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use", "oc get scc", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret] hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret] hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] node-exporter false [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*] nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim 
projected secret] privileged true [*] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*] restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]", "oc describe scc restricted", "Name: restricted Priority: <none> Access: Users: <none> 1 Groups: system:authenticated 2 Settings: Allow Privileged: false Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SYS_CHROOT,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>", "oc delete scc <scc_name>", "oc edit scc <scc_name>", "oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username>", "oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> --as-group=<group1> --as-group=<group2>", "url: ldap://10.0.0.0:389 1 bindDN: cn=admin,dc=example,dc=com 2 bindPassword: <password> 3 insecure: false 4 ca: my-ldap-ca-bundle.crt 5", "baseDN: ou=users,dc=example,dc=com 1 scope: sub 2 derefAliases: never 3 timeout: 0 4 filter: (objectClass=person) 5 pageSize: 0 6", "groupUIDNameMapping: \"cn=group1,ou=groups,dc=example,dc=com\": firstgroup \"cn=group2,ou=groups,dc=example,dc=com\": secondgroup \"cn=group3,ou=groups,dc=example,dc=com\": thirdgroup", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 1 insecure: false 2 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 3 groupNameAttributes: [ cn ] 4 groupMembershipAttributes: [ member ] 5 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 6 userNameAttributes: [ mail ] 7 tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 activeDirectory: usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 1 groupMembershipAttributes: [ memberOf ] 2", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 1 groupNameAttributes: [ cn ] 2 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 3 groupMembershipAttributes: [ memberOf ] 4", "oc adm groups sync --sync-config=config.yaml --confirm", "oc adm groups sync --type=openshift --sync-config=config.yaml --confirm", "oc adm groups sync --whitelist=<whitelist_file> --sync-config=config.yaml --confirm", "oc adm groups sync --blacklist=<blacklist_file> 
--sync-config=config.yaml --confirm", "oc adm groups sync <group_unique_identifier> --sync-config=config.yaml --confirm", "oc adm groups sync <group_unique_identifier> --whitelist=<whitelist_file> --blacklist=<blacklist_file> --sync-config=config.yaml --confirm", "oc adm groups sync --type=openshift --whitelist=<whitelist_file> --sync-config=config.yaml --confirm", "oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc new-project ldap-sync 1", "kind: ServiceAccount apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync", "oc create -f ldap-sync-service-account.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ldap-group-syncer rules: - apiGroups: - '' - user.openshift.io resources: - groups verbs: - get - list - create - update", "oc create -f ldap-sync-cluster-role.yaml", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ldap-group-syncer subjects: - kind: ServiceAccount name: ldap-group-syncer 1 namespace: ldap-sync roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ldap-group-syncer 2", "oc create -f ldap-sync-cluster-role-binding.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync data: sync.yaml: | 1 kind: LDAPSyncConfig apiVersion: v1 url: ldaps://10.0.0.0:389 2 insecure: false bindDN: cn=admin,dc=example,dc=com 3 bindPassword: file: \"/etc/secrets/bindPassword\" ca: /etc/ldap-ca/ca.crt rfc2307: 4 groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" 5 scope: sub filter: \"(objectClass=groupOfMembers)\" derefAliases: never pageSize: 0 groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" 6 scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn userNameAttributes: [ uid ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "oc create -f ldap-sync-config-map.yaml", "kind: CronJob apiVersion: batch/v1 metadata: name: ldap-group-syncer namespace: ldap-sync spec: 1 schedule: \"*/30 * * * *\" 2 concurrencyPolicy: Forbid jobTemplate: spec: backoffLimit: 0 ttlSecondsAfterFinished: 1800 3 template: spec: containers: - name: ldap-group-sync image: \"registry.redhat.io/openshift4/ose-cli:latest\" command: - \"/bin/bash\" - \"-c\" - \"oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm\" 4 volumeMounts: - mountPath: \"/etc/config\" name: \"ldap-sync-volume\" - mountPath: \"/etc/secrets\" name: \"ldap-bind-password\" - mountPath: \"/etc/ldap-ca\" name: \"ldap-ca\" volumes: - name: \"ldap-sync-volume\" configMap: name: \"ldap-group-syncer\" - name: \"ldap-bind-password\" secret: secretName: \"ldap-secret\" 5 - name: \"ldap-ca\" configMap: name: \"ca-config-map\" 6 restartPolicy: \"Never\" terminationGracePeriodSeconds: 30 activeDeadlineSeconds: 500 dnsPolicy: \"ClusterFirst\" serviceAccountName: \"ldap-group-syncer\"", "oc create -f ldap-sync-cron-job.yaml", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person 
objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 1 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com 2 member: cn=Jim,ou=users,dc=example,dc=com", "oc adm groups sync --sync-config=rfc2307_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "kind: LDAPSyncConfig apiVersion: v1 groupUIDNameMapping: \"cn=admins,ou=groups,dc=example,dc=com\": Administrators 1 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 4 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: Administrators 1 users: - [email protected] - [email protected]", "Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with dn=\"<user-dn>\" would search outside of the base dn specified (dn=\"<base-dn>\")\".", "Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" refers to a non-existent entry\". 
Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" and filter \"<filter>\" did not return any results\".", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com member: cn=INVALID,ou=users,dc=example,dc=com 1 member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never userUIDAttribute: dn 1 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: true 2 tolerateMemberOutOfScopeErrors: true 3", "oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: admins users: 1 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: admins 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: admins", "oc adm groups sync --sync-config=active_directory_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: admins 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 2 objectClass: 
groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com", "oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2 dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 3 objectClass: group cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=otheradmins,ou=groups,dc=example,dc=com dn: cn=otheradmins,ou=groups,dc=example,dc=com 4 objectClass: group cn: otheradmins owner: cn=admin,dc=example,dc=com description: Other System Administrators memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6 member: cn=Jim,ou=users,dc=example,dc=com", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: 1 derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 4 groupMembershipAttributes: [ \"memberOf:1.2.840.113556.1.4.1941:\" ] 5", "oc adm groups sync 'cn=admins,ou=groups,dc=example,dc=com' --sync-config=augmented_active_directory_config_nested.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "oc get secret <secret_name> -n kube-system -o jsonpath --template '{ .metadata.annotations }'", "oc get secret <secret_name> -n=kube-system", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": 
\"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region>", "cat .openshift_install_state.json | jq '.\"*installconfig.ClusterID\".InfraID' -r", "mycluster-2mpcn", "azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: ovirt-credentials data: ovirt_url: <base64-encoded_url> ovirt_username: <base64-encoded_username> ovirt_password: <base64-encoded_password> ovirt_insecure: <base64-encoded_insecure> ovirt_ca_bundle: <base64-encoded_ca_bundle>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password>", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", "apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 data: aws_access_key_id: <base64-encoded-access-key-id> aws_secret_access_key: <base64-encoded-secret-access-key>", "apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator-role-name> 3 web_identity_token_file: <path-to-token> 4", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "ccoctl --help", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to 
/<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> --region=<aws_region> --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "oc adm release extract --credentials-requests --cloud=aws --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 --from=quay.io/<path_to>/ocp-release:<version>", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ll <path_to_ccoctl_output_dir>/manifests", "total 24 -rw-------. 1 <user> <user> 161 Apr 13 11:42 cluster-authentication-02-config.yaml -rw-------. 1 <user> <user> 379 Apr 13 11:59 openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml -rw-------. 1 <user> <user> 353 Apr 13 11:59 openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml -rw-------. 1 <user> <user> 355 Apr 13 11:59 openshift-image-registry-installer-cloud-credentials-credentials.yaml -rw-------. 1 <user> <user> 339 Apr 13 11:59 openshift-ingress-operator-cloud-credentials-credentials.yaml -rw-------. 
1 <user> <user> 337 Apr 13 11:59 openshift-machine-api-aws-cloud-credentials-credentials.yaml", "oc adm release extract --credentials-requests --cloud=aws --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 --from=quay.io/<path_to>/ocp-release:<version>", "0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster", "oc get secrets -n kube-system aws-creds", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode", "[default] role_arn = arn:aws:iam::123456789:role/openshift-image-registry-installer-cloud-credentials web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token", "apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3", "{ \"type\": \"service_account\", 1 \"project_id\": \"<project_id>\", \"private_key_id\": \"<private_key_id>\", \"private_key\": \"<private_key>\", 2 \"client_email\": \"<client_email_address>\", \"client_id\": \"<client_id>\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://oauth2.googleapis.com/token\", \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>\" }", "{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", 2 \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken\", 3 \"credential_source\": { \"file\": \"<path_to_token>\", 4 \"format\": { \"type\": \"text\" } } }", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info 
--image-for='cloud-credential-operator' USDRELEASE_IMAGE)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "ccoctl --help", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "oc adm release extract --credentials-requests --cloud=gcp --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 --quay.io/<path_to>/ocp-release:<version>", "0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7", "ccoctl gcp create-all --name=<name> --region=<gcp_region> --project=<gcp_project_id> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests", "ls <path_to_ccoctl_output_dir>/manifests", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster", "oc get secrets -n kube-system gcp-credentials", "Error from server (NotFound): secrets \"gcp-credentials\" not found", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r '.data.\"service_account.json\"' | base64 -d", "{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client-email-address>:generateAccessToken\", 2 \"credential_source\": { \"file\": \"/var/run/secrets/openshift/serviceaccount/token\", \"format\": { \"type\": \"text\" } } }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/authentication_and_authorization/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_in_external_mode/providing-feedback-on-red-hat-documentation_rhodf
Chapter 22. Monitoring the MortgageApprovalProcess process application
Chapter 22. Monitoring the MortgageApprovalProcess process application The following chapter shows how different bank employees, such as a system administrator or a knowledge worker, might use some of the monitoring capabilities to track an instance of the mortgage approval process. Prerequisites KIE Server is deployed and connected to Business Central. Procedure Log in to Red Hat Process Automation Manager and click Menu Manage Process Instances . In the Manage Process Instances window, you can set filters, such as State , Errors , Id , and so on. Select Completed in the State filter to view all completed MortgageApprovalProcess instances. Click on the completed process instance. Click each of the following tabs to get a feel for what type of information is available to monitor a specific process instance: Instance Details Process Variables Documents Logs Diagram Click Menu Track Process Reports . This view contains a variety of charts that can help a senior process manager to gain an overview of all processes based on Type , Start Date , Running Time , and so on to assist with task reporting. 22.1. Filtering process instances using default or advanced filters Business Central now provides you with default and advanced filters to help you filter and search through running process instances. You can also create custom filters using the Advanced Filters option. 22.1.1. Filtering process instances using default filters Filter process instances by attributes such as State , Errors , Filter By , Name , Start Date , and Last update . Procedure In Business Central, go to Menu Manage Process Instances . On the Manage Process Instances page, click the filter icon on the left of the page to expand the Filters pane. This pane lists the following process attributes which you can use to filter process instances: State : Filter process instances based on their state ( Active , Aborted , Completed , Pending , and Suspended ). Errors : Filter process instances by errors. Filter By : Filter process instances based on Id , Initiator , Correlation Key , or Description attribute. Select the required attribute. Enter the search query in the text field below. Click Apply . Name : Filter process instances by definition names. Definition Id : Filter process instances by process definition IDs. Deployment Id : Filter process instances by process deployment IDs. Parent Process Instance Id : Filter process instances by parent process instance IDs. SLA Compliance : Filter process instances by SLA compliance states. Start Date : Filter process instances by creation dates. Last update : Filter process instances by last modified dates. 22.1.2. Filtering process instances using advanced filters Use the Advanced Filters option to create custom process instance filters. The newly created custom filter is added to the Saved Filters pane, which is accessible by clicking on the star icon on the left of the Manage Process Instances page. Procedure In Business Central, go to Menu Manage Process Instances . On the Manage Process Instances page, on the left of the page click the Advanced Filters icon. In the Advanced Filters pane, enter the name and description of the filter, and click Add New . Select an attribute from the Select column drop-down list, for example, processName . The content of the drop-down changes to processName != value1 . Click the drop-down again and choose the required logical query. For the processName attribute, choose equals to . 
Change the value of the text field to the name of the process you want to filter. Note The name must match the value defined in the business process of the project. Click Save and the processes are filtered according to the filter definition. Click the star icon to open the Saved Filters pane. In the Saved Filters pane, you can view all the saved advanced filters.
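For teams that prefer to script this kind of check rather than use the Business Central views, the same completed-instance query can usually be run against the KIE Server REST API. The host, port, and credentials below are placeholders and not taken from this guide; status=2 is the jBPM state code for completed process instances, so verify the endpoint against the KIE Server REST documentation for your release before relying on it.

    curl -u 'kie-user:password' -H 'Accept: application/json' \
      'http://<kie-server-host>:8080/kie-server/services/rest/server/queries/processes/instances?status=2&page=0&pageSize=10'

The response lists the matching process instances with their IDs, state, and timestamps, which can then be opened individually in Business Central as described above.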
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/monitoring_proc
4.18. HP Moonshot iLO
4.18. HP Moonshot iLO Table 4.19, "HP Moonshot iLO (Red Hat Enterprise Linux 6.7 and later)" lists the fence device parameters used by fence_ilo_moonshot , the fence agent for HP Moonshot iLO devices. Table 4.19. HP Moonshot iLO (Red Hat Enterprise Linux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. Path to SSH Identity File identity_file The Identity file for SSH. TCP Port ipport UDP/TCP port to use for connections with the device; the default value is 22. Force Command Prompt cmd_prompt The command prompt to use. The default value is 'MP>', 'hpiLO->'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1.
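Taken together, the cluster.conf attributes in the table map onto a single fencedevice entry. The following sketch is illustrative only; the name, address, and credential values are placeholders, and only attributes listed in Table 4.19 are shown.

    <fencedevices>
      <fencedevice agent="fence_ilo_moonshot" name="hpmoonshot1"
                   ipaddr="192.0.2.10" login="admin" passwd="password"
                   secure="on" power_wait="5"/>
    </fencedevices>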
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-ilo_moonshot-CA
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/api_documentation/making-open-source-more-inclusive
Part IX. Decision engine in Red Hat Process Automation Manager
Part IX. Decision engine in Red Hat Process Automation Manager As a business rules developer, your understanding of the decision engine in Red Hat Process Automation Manager can help you design more effective business assets and a more scalable decision management architecture. The decision engine is the Red Hat Process Automation Manager component that stores, processes, and evaluates data to execute business rules and to reach the decisions that you define. This document describes basic concepts and functions of the decision engine to consider as you create your business rule system and decision services in Red Hat Process Automation Manager.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/assembly-decision-engine
Chapter 14. Failover Deployments
Chapter 14. Failover Deployments Abstract Red Hat Fuse provides failover capability using either a simple lock file system or a JDBC locking mechanism. In both cases, a container-level lock system allows bundles to be preloaded into a secondary kernel instance in order to provide faster failover performance. 14.1. Using a Simple Lock File System Overview When you first start Red Hat Fuse, a lock file is created at the root of the installation directory. You can set up a primary/secondary system whereby if the primary instance fails, the lock is passed to a secondary instance that resides on the same host machine. Configuring a lock file system To configure a lock file failover deployment, edit the etc/system.properties file on both the primary and the secondary installation to include the properties in Example 14.1, "Lock File Failover Configuration" . Example 14.1. Lock File Failover Configuration karaf.lock - specifies whether the lock file is written. karaf.lock.class - specifies the Java class implementing the lock. For a simple file lock it should always be org.apache.karaf.main.SimpleFileLock . karaf.lock.dir - specifies the directory into which the lock file is written. This must be the same for both the primary and the secondary installation. karaf.lock.delay - specifies, in milliseconds, the delay between attempts to reacquire the lock. 14.2. Using a JDBC Lock System Overview The JDBC locking mechanism is intended for failover deployments where Red Hat Fuse instances exist on separate machines. In this scenario, the primary instance holds a lock on a locking table hosted on a database. If the primary instance loses the lock, a waiting secondary process gains access to the locking table and fully starts its container. Adding the JDBC driver to the classpath In a JDBC locking system, the JDBC driver needs to be on the classpath for each instance in the primary/secondary setup. Add the JDBC driver to the classpath as follows: Copy the JDBC driver JAR file to the ESBInstallDir /lib/ext directory for each Red Hat Fuse instance. Modify the bin/karaf start script so that it includes the JDBC driver JAR in its CLASSPATH variable. For example, given the JDBC JAR file, JDBCJarFile .jar , you could modify the start script as follows (on a *NIX operating system): Note If you are adding a MySQL driver JAR or a PostgreSQL driver JAR, you must rename the driver JAR by prefixing it with the karaf- prefix. Otherwise, Apache Karaf will hang and the log will tell you that Apache Karaf was unable to find the driver. Configuring a JDBC lock system To configure a JDBC lock system, update the etc/system.properties file for each instance in the primary/secondary deployment as shown in Example 14.2. JDBC Lock File Configuration In the example, a database named sample will be created if it does not already exist. The first Red Hat Fuse instance to acquire the locking table is the primary instance. If the connection to the database is lost, the primary instance tries to shut down gracefully, allowing a secondary instance to become the primary instance when the database service is restored. The former primary instance will require a manual restart. Configuring JDBC locking on Oracle If you are using Oracle as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file must point to org.apache.karaf.main.lock.OracleJDBCLock . Otherwise, configure the system.properties file as normal for your setup, as shown: Example 14.3. 
JDBC Lock File Configuration for Oracle Note The karaf.lock.jdbc.url requires an active Oracle system ID (SID). This means you must manually create a database instance before using this particular lock. Configuring JDBC locking on Derby If you are using Derby as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file should point to org.apache.karaf.main.lock.DerbyJDBCLock . For example, you could configure the system.properties file as shown: Example 14.4. JDBC Lock File Configuration for Derby Configuring JDBC locking on MySQL If you are using MySQL as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file must point to org.apache.karaf.main.lock.MySQLJDBCLock . For example, you could configure the system.properties file as shown: Example 14.5. JDBC Lock File Configuration for MySQL Configuring JDBC locking on PostgreSQL If you are using PostgreSQL as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file must point to org.apache.karaf.main.lock.PostgreSQLJDBCLock . For example, you could configure the system.properties file as shown: Example 14.6. JDBC Lock File Configuration for PostgreSQL JDBC lock classes The following JDBC lock classes are currently provided by Apache Karaf: 14.3. Container-level Locking Overview Container-level locking allows bundles to be preloaded into the secondary kernel instance in order to provide faster failover performance. Container-level locking is supported in both the simple file and JDBC locking mechanisms. Configuring container-level locking To implement container-level locking, add the following to the etc/system.properties file on each system in the primary/secondary setup: Example 14.7. Container-level Locking Configuration The karaf.lock.level property tells the Red Hat Fuse instance how far up the boot process to bring the OSGi container. Bundles assigned the same start level or lower will then also be started in that Fuse instance. Bundle start levels are specified in etc/startup.properties , in the format BundleName .jar=level . The core system bundles have levels below 50, whereas user bundles have levels greater than 50. Table 14.1. Bundle Start Levels Start Level Behavior 1 A 'cold' standby instance. Core bundles are not loaded into the container. Secondary instances will wait until the lock is acquired to start the server. <50 A 'hot' standby instance. Core bundles are loaded into the container. Secondary instances will wait until the lock is acquired to start user-level bundles. The console will be accessible for each secondary instance at this level. >50 This setting is not recommended because user bundles will be started. Avoiding port conflicts When using a 'hot' spare on the same host, you need to set the JMX remote port to a unique value to avoid bind conflicts. You can edit the fuse start script (or the karaf script on a child instance) to include the following:
[ "karaf.lock=true karaf.lock.class=org.apache.karaf.main.SimpleFileLock karaf.lock.dir= PathToLockFileDirectory karaf.lock.delay=10000", "# Add the jars in the lib dir for file in \"USDKARAF_HOME\"/lib/karaf*.jar do if [ -z \"USDCLASSPATH\" ]; then CLASSPATH=\"USDfile\" else CLASSPATH=\"USDCLASSPATH:USDfile\" fi done CLASSPATH=\"USDCLASSPATH:USDKARAF_HOME/lib/JDBCJarFile.jar \"", "karaf.lock=true karaf.lock.class=org.apache.karaf.main.lock.DefaultJDBCLock karaf.lock.level=50 karaf.lock.delay=10000 karaf.lock.jdbc.url=jdbc:derby://dbserver:1527/sample karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver karaf.lock.jdbc.user=user karaf.lock.jdbc.password=password karaf.lock.jdbc.table=KARAF_LOCK karaf.lock.jdbc.clustername=karaf karaf.lock.jdbc.timeout=30", "karaf.lock=true karaf.lock.class=org.apache.karaf.main.lock.OracleJDBCLock karaf.lock.jdbc.url=jdbc:oracle:thin:@hostname:1521:XE karaf.lock.jdbc.driver=oracle.jdbc.OracleDriver karaf.lock.jdbc.user=user karaf.lock.jdbc.password=password karaf.lock.jdbc.table=KARAF_LOCK karaf.lock.jdbc.clustername=karaf karaf.lock.jdbc.timeout=30", "karaf.lock=true karaf.lock.class=org.apache.karaf.main.lock.DerbyJDBCLock karaf.lock.jdbc.url=jdbc:derby://127.0.0.1:1527/dbname karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver karaf.lock.jdbc.user=user karaf.lock.jdbc.password=password karaf.lock.jdbc.table=KARAF_LOCK karaf.lock.jdbc.clustername=karaf karaf.lock.jdbc.timeout=30", "karaf.lock=true karaf.lock.class=org.apache.karaf.main.lock.MySQLJDBCLock karaf.lock.jdbc.url=jdbc:mysql://127.0.0.1:3306/dbname karaf.lock.jdbc.driver=com.mysql.jdbc.Driver karaf.lock.jdbc.user=user karaf.lock.jdbc.password=password karaf.lock.jdbc.table=KARAF_LOCK karaf.lock.jdbc.clustername=karaf karaf.lock.jdbc.timeout=30", "karaf.lock=true karaf.lock.class=org.apache.karaf.main.lock.PostgreSQLJDBCLock karaf.lock.jdbc.url=jdbc:postgresql://127.0.0.1:5432/dbname karaf.lock.jdbc.driver=org.postgresql.Driver karaf.lock.jdbc.user=user karaf.lock.jdbc.password=password karaf.lock.jdbc.table=KARAF_LOCK karaf.lock.jdbc.clustername=karaf karaf.lock.jdbc.timeout=0", "org.apache.karaf.main.lock.DefaultJDBCLock org.apache.karaf.main.lock.DerbyJDBCLock org.apache.karaf.main.lock.MySQLJDBCLock org.apache.karaf.main.lock.OracleJDBCLock org.apache.karaf.main.lock.PostgreSQLJDBCLock", "karaf.lock=true karaf.lock.level=50 karaf.lock.delay=10000", "DEFAULT_JAVA_OPTS=\"-server USDDEFAULT_JAVA_OPTS -Dcom.sun.management.jmxremote.port=1100 -Dcom.sun.management.jmxremote.authenticate=false\"" ]
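Because container-level locking keys off the start levels defined in etc/startup.properties, it can help to see what such entries look like. The bundle names below are invented for illustration; only the BundleName.jar=level format comes from the text above.

    # etc/startup.properties (illustrative entries; actual bundle names vary by release)
    # Core bundle, loaded by a 'hot' standby instance (start level below 50)
    org.apache.karaf.shell.core-4.2.0.jar=30
    # User bundle, started only after the container lock is acquired (start level above 50)
    my-application-bundle-1.0.jar=60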
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/esbruntimefailover
13.2. The Benefits of Using Hot Rod over Memcached
13.2. The Benefits of Using Hot Rod over Memcached Red Hat JBoss Data Grid offers a choice of protocols for allowing clients to interact with the server in a Remote Client-Server environment. When deciding between using memcached or Hot Rod, the following should be considered. Memcached The memcached protocol causes the server endpoint to use the memcached text wire protocol . The memcached wire protocol has the benefit of being commonly used, and is available for almost any platform. All of JBoss Data Grid's functions, including clustering, state sharing for scalability, and high availability, are available when using memcached. However the memcached protocol lacks dynamicity, resulting in the need to manually update the list of server nodes on your clients in the event one of the nodes in a cluster fails. Also, memcached clients are not aware of the location of the data in the cluster. This means that they will request data from a non-owner node, incurring the penalty of an additional request from that node to the actual owner, before being able to return the data to the client. This is where the Hot Rod protocol is able to provide greater performance than memcached. Hot Rod JBoss Data Grid's Hot Rod protocol is a binary wire protocol that offers all the capabilities of memcached, while also providing better scaling, durability, and elasticity. The Hot Rod protocol does not need the hostnames and ports of each node in the remote cache, whereas memcached requires these parameters to be specified. Hot Rod clients automatically detect changes in the topology of clustered Hot Rod servers; when new nodes join or leave the cluster, clients update their Hot Rod server topology view. Consequently, Hot Rod provides ease of configuration and maintenance, with the advantage of dynamic load balancing and failover. Additionally, the Hot Rod wire protocol uses smart routing when connecting to a distributed cache. This involves sharing a consistent hash algorithm between the server nodes and clients, resulting in faster read and writing capabilities than memcached. Warning When using JCache over Hot Rod it is not possible to create remote clustered caches, as the operation is executed on a single node as opposed to the entire cluster; however, once a cache has been created on the cluster it may be obtained using the cacheManager.getCache method. It is recommended to create caches using either configuration files, JON, or the CLI. Report a bug
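Because Hot Rod clients discover the cluster topology dynamically, a client normally only needs an initial server list to bootstrap. The following hotrod-client.properties sketch is a minimal example under that assumption; the host names and the default Hot Rod port 11222 are placeholders rather than values taken from this section.

    # hotrod-client.properties (minimal sketch)
    infinispan.client.hotrod.server_list=server1.example.com:11222;server2.example.com:11222

Once connected, the client receives topology updates from the servers, so nodes added to or removed from the cluster do not require a client configuration change.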
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/the_benefits_of_using_hot_rod_over_memcached
Logging
Logging OpenShift Container Platform 4.17 Configuring and using logging in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/logging/index
Chapter 8. Known issues
Chapter 8. Known issues This section describes the known issues in Red Hat OpenShift Data Foundation 4.18. 8.1. Disaster recovery ceph df reports an invalid MAX AVAIL value when the cluster is in stretch mode When a crush rule for a Red Hat Ceph Storage cluster has multiple "take" steps, the ceph df report shows the wrong maximum available size for the map. The issue will be fixed in an upcoming release. ( DFBUGS-1748 ) Both the DRPCs protect all the persistent volume claims created on the same namespace The namespaces that host multiple disaster recovery (DR) protected workloads, protect all the persistent volume claims (PVCs) within the namespace for each DRPlacementControl resource in the same namespace on the hub cluster that does not specify and isolate PVCs based on the workload using its spec.pvcSelector field. This results in PVCs that match the DRPlacementControl spec.pvcSelector across multiple workloads. Or, if the selector is missing across all workloads, replication management to potentially manage each PVC multiple times and cause data corruption or invalid operations based on individual DRPlacementControl actions. Workaround: Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line. Result: PVCs are no longer managed by multiple DRPlacementControl resources and do not cause any operation and data inconsistencies. ( DFBUGS-1749 ) MongoDB pod is in CrashLoopBackoff because of permission errors reading data in cephrbd volume The OpenShift projects across different managed clusters have different security context constraints (SCC), which specifically differ in the specified UID range and/or FSGroups . This leads to certain workload pods and containers failing to start post failover or relocate operations within these projects, due to filesystem access errors in their logs. Workaround: Ensure workload projects are created on all managed clusters with the same project-level SCC labels, allowing them to use the same filesystem context when failed over or relocated. Pods will no longer fail post-DR actions on filesystem-related access errors. ( DFBUGS-1750 ) Disaster recovery workloads remain stuck when deleted When deleting a workload from a cluster, the corresponding pods might not terminate with events such as FailedKillPod . This might cause delay or failure in garbage collecting dependent DR resources such as the PVC , VolumeReplication , and VolumeReplicationGroup . It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected. Workaround: Reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination and subsequently related DR API resources are also garbage collected. ( DFBUGS-325 ) Regional DR CephFS based application failover show warning about subscription After the application is failed over or relocated, the hub subscriptions show up errors stating, "Some resources failed to deploy. Use View status YAML link to view the details." 
This is because the application persistent volume claims (PVCs) that use CephFS as the backing storage provisioner, deployed using Red Hat Advanced Cluster Management for Kubernetes (RHACM) subscriptions, and are DR protected are owned by the respective DR controllers. Workaround: There are no workarounds to rectify the errors in the subscription status. However, the subscription resources that failed to deploy can be checked to make sure they are PVCs. This ensures that the other resources do not have problems. If the only resources in the subscription that fail to deploy are the ones that are DR protected, the error can be ignored. ( DFBUGS-253 ) Disabled PeerReady flag prevents changing the action to Failover The DR controller executes full reconciliation as and when needed. When a cluster becomes inaccessible, the DR controller performs a sanity check. If the workload is already relocated, this sanity check causes the PeerReady flag associated with the workload to be disabled, and the sanity check does not complete due to the cluster being offline. As a result, the disabled PeerReady flag prevents you from changing the action to Failover. Workaround: Use the command-line interface to change the DR action to Failover despite the disabled PeerReady flag. ( DFBUGS-665 ) Ceph becomes inaccessible and IO is paused when connection is lost between the two data centers in stretch cluster When two data centers lose connection with each other but are still connected to the Arbiter node, there is a flaw in the election logic that causes an infinite election between the monitors. As a result, the monitors are unable to elect a leader and the Ceph cluster becomes unavailable. Also, IO is paused during the connection loss. Workaround: Shutdown the monitors of any one of the data zone by bringing down the zone nodes. Additionally, you can reset the connection scores of surviving mon pods. As a result, monitors can form a quorum and Ceph becomes available again and IOs resume. ( DFBUGS-425 ) RBD applications fail to Relocate when using stale Ceph pool IDs from replacement cluster For the applications created before the new peer cluster is created, it is not possible to mount the RBD PVC because when a peer cluster is replaced, it is not possible to update the CephBlockPoolID's mapping in the CSI configmap. Workaround: Update the rook-ceph-csi-mapping-config configmap with cephBlockPoolID's mapping on the peer cluster that is not replaced. This enables mounting the RBD PVC for the application. ( DFBUGS-527 ) Information about lastGroupSyncTime is lost after hub recovery for the workloads which are primary on the unavailable managed cluster Applications that are previously failed over to a managed cluster do not report a lastGroupSyncTime , thereby causing the trigger of the alert VolumeSynchronizationDelay . This is because when the ACM hub and a managed cluster that are part of the DRPolicy are unavailable, a new ACM hub cluster is reconstructed from the backup. Workaround: If the managed cluster to which the workload was failed over is unavailable, you can still failover to a surviving managed cluster. ( DFBUGS-376 ) MCO operator reconciles the veleroNamespaceSecretKeyRef and CACertificates fields When the OpenShift Data Foundation operator is upgraded, the CACertificates and veleroNamespaceSecretKeyRef fields under s3StoreProfiles in the Ramen config are lost. 
Workaround: If the Ramen config has the custom values for the CACertificates and veleroNamespaceSecretKeyRef fields, then set those custom values after the upgrade is performed. ( DFBUGS-440 ) Instability of the token-exchange-agent pod after upgrade The token-exchange-agent pod on the managed cluster is unstable as the old deployment resources are not cleaned up properly. This might cause the application failover action to fail. Workaround: Refer to the knowledgebase article, "token-exchange-agent" pod on managed cluster is unstable after upgrade to ODF 4.17.0 . Result: If the workaround is followed, the "token-exchange-agent" pod is stabilized and the failover action works as expected. ( DFBUGS-561 ) virtualmachines.kubevirt.io resource fails restore due to mac allocation failure on relocate When a virtual machine is relocated to the preferred cluster, it might fail to complete relocation due to unavailability of the MAC address. This happens if the virtual machine is not fully cleaned up on the preferred cluster when it is failed over to the failover cluster. Ensure that the workload is completely removed from the preferred cluster before relocating the workload. ( BZ#2295404 ) Failover process fails when the ReplicationDestination resource has not been created yet If the user initiates a failover before the LastGroupSyncTime is updated, the failover process might fail. This failure is accompanied by an error message indicating that the ReplicationDestination does not exist. Workaround: Edit the ManifestWork for the VRG on the hub cluster. Delete the following section from the manifest: Save the changes. Applying this workaround correctly ensures that the VRG skips attempting to restore the PVC using the ReplicationDestination resource. If the PVC already exists, the application uses it as is. If the PVC does not exist, a new PVC is created. ( DFBUGS-632 ) Ceph in warning state after adding capacity to cluster After a device replacement or add capacity procedure, it is observed that Ceph is in HEALTH_WARN state with mon reporting slow ops. However, there is no impact to the usability of the cluster. ( DFBUGS-1273 ) OSD pods restart during add capacity OSD pods restart after performing cluster expansion by adding capacity to the cluster. However, no impact to the cluster is observed apart from the pods restarting. ( DFBUGS-1426 ) 8.2. Multicloud Object Gateway NooBaa Core cannot assume role with web identity due to a missing entry in the role's trust policy For OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS), you need to add another entry in the trust policy for the noobaa-core account. This is because with the release of OpenShift Data Foundation 4.17, the service account has changed from noobaa to noobaa-core . For instructions to add an entry in the trust policy for the noobaa-core account, see the final bullet in the prerequisites section of Updating Red Hat OpenShift Data Foundation 4.16 to 4.17. ( DFBUGS-172 ) Upgrade to OpenShift Data Foundation 4.17 results in noobaa-db pod CrashLoopBackOff state Upgrading to OpenShift Data Foundation 4.17 from OpenShift Data Foundation 4.15 fails when the PostgreSQL upgrade fails in Multicloud Object Gateway, which always starts with PostgreSQL version 15. If there is a PostgreSQL upgrade failure, the NooBaa-db-pg-0 pod fails to start. Workaround: Refer to the knowledgebase article Recover NooBaa's PostgreSQL upgrade failure in OpenShift Data Foundation 4.17 . ( DFBUGS-1751 ) 8.3. 
Ceph Poor performance of the stretch clusters on CephFS Workloads with many small metadata operations might exhibit poor performance because of the arbitrary placement of metadata server (MDS) on multi-site Data Foundation clusters. ( DFBUGS-1753 ) SELinux relabelling issue with a very high number of files When attaching volumes to pods in Red Hat OpenShift Container Platform, the pods sometimes do not start or take an excessive amount of time to start. This behavior is generic and it is tied to how SELinux relabelling is handled by the Kubelet. This issue is observed with any filesystem based volumes having very high file counts. In OpenShift Data Foundation, the issue is seen when using CephFS based volumes with a very high number of files. There are different ways to workaround this issue. Depending on your business needs you can choose one of the workarounds from the knowledgebase solution https://access.redhat.com/solutions/6221251 . ( Jira#3327 ) 8.4. CSI Driver Automatic flattening of snapshots is not working When there is a single common parent RBD PVC, if volume snapshot, restore, and delete snapshot are performed in a sequence more than 450 times, it is further not possible to take volume snapshot or clone of the common parent RBD PVC. To workaround this issue, instead of performing volume snapshot, restore, and delete snapshot in a sequence, you can use PVC to PVC clone to completely avoid this issue. If you hit this issue, contact customer support to perform manual flattening of the final restored PVCs to continue to take volume snapshot or clone of the common parent PVC again. ( DFBUGS-1752 ) 8.5. OpenShift Data Foundation console Optimize DRPC creation when multiple workloads are deployed in a single namespace When multiple applications refer to the same placement, then enabling DR for any of the applications enables it for all the applications that refer to the placement. If the applications are created after the creation of the DRPC, the PVC label selector in the DRPC might not match the labels of the newer applications. Workaround: In such cases, disabling DR and enabling it again with the right label selector is recommended. ( DFBUGS-120 ) [NooBaa S3 Browser] Error Loading : char 'n' is not expected.:1:1 Deserialization error After OpenShift Data Foundation upgrade from 4.17 to 4.18, NooBaa Object browser fails to connect to S3 endpoint and the user interface (UI) displays the following error: This error occurs because the Proxy spec of ConsolePlugin (name: odf-console) CR is not getting updated after cluster upgrade from 4.17 to 4.18. Workaround: Run the following command and refresh the UI (browser's tab) after that: As a result, UI can connect to the NooBaa S3 endpoint and browser works as intended. ( DFBUGS-1830 ) 8.6. OCS operator Increasing MDS memory is erasing CPU values when pods are in CLBO state When the metadata server (MDS) memory is increased while the MDS pods are in a crash loop back off (CLBO) state, CPU request or limit for the MDS pods is removed. As a result, the CPU request or the limit that is set for the MDS changes. Workaround: Run the oc patch command to adjust the CPU limits. For example: ( DFBUGS-426 ) ocs-provider-server pod is running even on non RDR clusters ocs-provider-server pod runs even in internal mode due to the changes introduced with an assumption that internal mode switches to use ocs-ocs communication to establish mirroring. Also, this change might be required for the convergence of modes. 
( DFBUGS-931 ) Error while reconciling: Service "ocs-provider-server" is invalid: spec.ports[0].nodePort: Invalid value: 31659: provided port is already allocated From OpenShift Data Foundation 4.18, the ocs-operator deploys a service with the port 31659 , which might conflict with the existing service nodePort . Due to this, any other service cannot use this port if it is already in use. As a result, ocs-operator will always error out while deploying the service. This causes the upgrade reconciliation to be stuck. Workaround: Replace nodePort with ClusterIP to avoid the collision: ( DFBUGS-1831 )
[ "/spec/workload/manifests/0/spec/volsync", "char 'n' is not expected.:1:1 Deserialization error: to see the raw response, inspect the hidden field {error}.USDresponse on this object.", "patch ConsolePlugin odf-console --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/proxy/-\", \"value\": {\"alias\": \"s3\", \"authorization\": \"None\", \"endpoint\": {\"service\": {\"name\": \"s3\", \"namespace\": \"openshift-storage\", \"port\": 443}, \"type\": \"Service\"}}}]'", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"3\"}, \"requests\": {\"cpu\": \"3\"}}}}}'", "patch -nopenshift-storage storagecluster ocs-storagecluster --type merge -p '{\"spec\": {\"providerAPIServerServiceType\": \"ClusterIP\"}}'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/4.18_release_notes/known-issues
Chapter 11. Enhancing Virtualization with the QEMU Guest Agent and SPICE Agent
Chapter 11. Enhancing Virtualization with the QEMU Guest Agent and SPICE Agent Agents in Red Hat Enterprise Linux such as the QEMU guest agent and the SPICE agent can be deployed to help the virtualization tools run more optimally on your system. These agents are described in this chapter. Note To further optimize and tune host and guest performance, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide . 11.1. QEMU Guest Agent The QEMU guest agent runs inside the guest and allows the host machine to issue commands to the guest operating system using libvirt, helping with functions such as freezing and thawing filesystems. The guest operating system then responds to those commands asynchronously. The QEMU guest agent package, qemu-guest-agent , is installed by default in Red Hat Enterprise Linux 7. This section covers the libvirt commands and options available to the guest agent. Important Note that it is only safe to rely on the QEMU guest agent when run by trusted guests. An untrusted guest may maliciously ignore or abuse the guest agent protocol, and although built-in safeguards exist to prevent a denial of service attack on the host, the host requires guest co-operation for operations to run as expected. Note that QEMU guest agent can be used to enable and disable virtual CPUs (vCPUs) while the guest is running, thus adjusting the number of vCPUs without using the hot plug and hot unplug features. For more information, see Section 20.36.6, "Configuring Virtual CPU Count" . 11.1.1. Setting up Communication between the QEMU Guest Agent and Host The host machine communicates with the QEMU guest agent through a VirtIO serial connection between the host and guest machines. A VirtIO serial channel is connected to the host via a character device driver (typically a Unix socket), and the guest listens on this serial channel. Note The qemu-guest-agent does not detect if the host is listening to the VirtIO serial channel. However, as the current use for this channel is to listen for host-to-guest events, the probability of a guest virtual machine running into problems by writing to the channel with no listener is very low. Additionally, the qemu-guest-agent protocol includes synchronization markers that allow the host physical machine to force a guest virtual machine back into sync when issuing a command, and libvirt already uses these markers, so that guest virtual machines are able to safely discard any earlier pending undelivered responses. 11.1.1.1. Configuring the QEMU Guest Agent on a Linux Guest The QEMU guest agent can be configured on a running or shut down virtual machine. If configured on a running guest, the guest will start using the guest agent immediately. If the guest is shut down, the QEMU guest agent will be enabled at boot. Either virsh or virt-manager can be used to configure communication between the guest and the QEMU guest agent. The following instructions describe how to configure the QEMU guest agent on a Linux guest. Procedure 11.1. 
Setting up communication between guest agent and host with virsh on a shut down Linux guest Shut down the virtual machine Ensure the virtual machine (named rhel7 in this example) is shut down before configuring the QEMU guest agent: Add the QEMU guest agent channel to the guest XML configuration Edit the guest's XML file to add the QEMU guest agent details: Add the following to the guest's XML file and save the changes: <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Start the virtual machine Install the QEMU guest agent on the guest Install the QEMU guest agent if not yet installed in the guest virtual machine: Start the QEMU guest agent in the guest Start the QEMU guest agent service in the guest: Alternatively, the QEMU guest agent can be configured on a running guest with the following steps: Procedure 11.2. Setting up communication between guest agent and host on a running Linux guest Create an XML file for the QEMU guest agent # cat agent.xml <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Attach the QEMU guest agent to the virtual machine Attach the QEMU guest agent to the running virtual machine (named rhel7 in this example) with this command: Install the QEMU guest agent on the guest Install the QEMU guest agent if not yet installed in the guest virtual machine: Start the QEMU guest agent in the guest Start the QEMU guest agent service in the guest: Procedure 11.3. Setting up communication between the QEMU guest agent and host with virt-manager Shut down the virtual machine Ensure the virtual machine is shut down before configuring the QEMU guest agent. To shut down the virtual machine, select it from the list of virtual machines in Virtual Machine Manager , then click the light switch icon from the menu bar. Add the QEMU guest agent channel to the guest Open the virtual machine's hardware details by clicking the lightbulb icon at the top of the guest window. Click the Add Hardware button to open the Add New Virtual Hardware window, and select Channel . Select the QEMU guest agent from the Name drop-down list and click Finish : Figure 11.1. Selecting the QEMU guest agent channel device Start the virtual machine To start the virtual machine, select it from the list of virtual machines in Virtual Machine Manager , then click on the menu bar. Install the QEMU guest agent on the guest Open the guest with virt-manager and install the QEMU guest agent if not yet installed in the guest virtual machine: Start the QEMU guest agent in the guest Start the QEMU guest agent service in the guest: The QEMU guest agent is now configured on the rhel7 virtual machine.
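After the agent service is running in the guest, a quick way to confirm that the host can reach it is to send a guest-ping through libvirt. This check is not part of the procedures above; it assumes the virsh qemu-agent-command interface available on Red Hat Enterprise Linux 7 hosts, and an empty return value indicates that the agent responded.

    # virsh qemu-agent-command rhel7 '{"execute":"guest-ping"}'
    {"return":{}}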
[ "virsh shutdown rhel7", "virsh edit rhel7", "<channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>", "virsh start rhel7", "yum install qemu-guest-agent", "systemctl start qemu-guest-agent", "cat agent.xml <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>", "virsh attach-device rhel7 agent.xml", "yum install qemu-guest-agent", "systemctl start qemu-guest-agent", "yum install qemu-guest-agent", "systemctl start qemu-guest-agent" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-qemu_guest_agent
function::user_int64
function::user_int64 Name function::user_int64 - Retrieves a 64-bit integer value stored in user space Synopsis Arguments addr the user space address to retrieve the 64-bit integer from Description Returns the 64-bit integer value from a given user space address. Returns zero when user space data is not accessible.
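As an illustration of how the function might be used, the following one-liner prints the first eight bytes of each write() buffer as a signed 64-bit value. It is only a sketch: it assumes the syscall tapset on your kernel exposes buf_uaddr and count for syscall.write, and it must be run as root with the appropriate kernel debuginfo installed.

# Print the first quadword of every write() buffer that is at least 8 bytes long
stap -e 'probe syscall.write { if (count >= 8) printf("%s wrote first qword %d\n", execname(), user_int64(buf_uaddr)) }'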
[ "user_int64:long(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-int64
Chapter 2. Upgrading Red Hat Satellite
Chapter 2. Upgrading Red Hat Satellite Use the following procedures to upgrade your existing Red Hat Satellite to Red Hat Satellite 6.15: Review Section 1.1, "Prerequisites" . Section 2.1, "Satellite Server upgrade considerations" Section 2.3, "Synchronizing the new repositories" Section 2.5, "Upgrading Capsule Servers" 2.1. Satellite Server upgrade considerations This section describes how to upgrade Satellite Server from 6.14 to 6.15. You can upgrade from any minor version of Satellite Server 6.14. Before you begin Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Review and update your firewall configuration prior to upgrading your Satellite Server. For more information, see Preparing your environment for installation in Installing Satellite Server in a connected network environment . Ensure that you do not delete the manifest from the Customer Portal or in the Satellite web UI because this removes all the entitlements of your content hosts. If you have edited any of the default templates, back up the files either by cloning or exporting them. Cloning is the recommended method because that prevents them being overwritten in future updates or upgrades. To confirm if a template has been edited, you can view its History before you upgrade or view the changes in the audit log after an upgrade. In the Satellite web UI, navigate to Monitor > Audits and search for the template to see a record of changes made. If you use the export method, restore your changes by comparing the exported template and the default template, manually applying your changes. Capsule considerations If you use content views to control updates to a Capsule Server's base operating system, or for Capsule Server repository, you must publish updated versions of those content views. Note that Satellite Server upgraded from 6.14 to 6.15 can use Capsule Servers still at 6.14. Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Upgrade scenarios You cannot upgrade a self-registered Satellite. You must migrate a self-registered Satellite to the Red Hat Content Delivery Network (CDN) and then perform the upgrade. FIPS mode You cannot upgrade Satellite Server from a RHEL base system that is not operating in FIPS mode to a RHEL base system that is operating in FIPS mode. To run Satellite Server on a Red Hat Enterprise Linux base system operating in FIPS mode, you must install Satellite on a freshly provisioned RHEL base system operating in FIPS mode. For more information, see Preparing your environment for installation in Installing Satellite Server in a connected network environment . 2.2. Upgrading a connected Satellite Server Use this procedure for a Satellite Server with access to the public internet Warning If you customize configuration files, manually or using a tool such as Hiera, these changes are overwritten when the maintenance script runs during upgrading or updating. You can use the --noop option with the satellite-installer to test for changes. 
For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade. Upgrade Satellite Server Stop all Satellite services: Take a snapshot or create a backup: On a virtual machine, take a snapshot. On a physical machine, create a backup. Start all Satellite services: Optional: If you made manual edits to DNS or DHCP configuration in the /etc/zones.conf or /etc/dhcp/dhcpd.conf files, back up the configuration files because the installer only supports one domain or subnet, and therefore restoring changes from these backups might be required. Optional: If you made manual edits to DNS or DHCP configuration files and do not want to overwrite the changes, enter the following command: In the Satellite web UI, navigate to Hosts > Discovered hosts . On the Discovered Hosts page, power off and then delete the discovered hosts. From the Select an Organization menu, select each organization in turn and repeat the process to power off and delete the discovered hosts. Make a note to reboot these hosts when the upgrade is complete. Ensure that the Satellite Maintenance repository is enabled: Enable the maintenance module: Check the available versions to confirm the version you want is listed: Use the health check option to determine if the system is ready for upgrade. When prompted, enter the hammer admin user credentials to configure satellite-maintain with hammer credentials. These changes are applied to the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running you can see the logged messages in the /var/log/foreman-installer/satellite.log file to check if the process completed successfully. Perform the upgrade: Determine if the system needs a reboot: If the command told you to reboot, then reboot the system: 2.3. Synchronizing the new repositories You must enable and synchronize the new 6.15 repositories before you can upgrade Capsule Servers and Satellite clients. Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . Toggle the Recommended Repositories switch to the On position. From the list of results, expand the following repositories and click the Enable icon to enable the repositories: To upgrade Satellite clients, enable the Red Hat Satellite Client 6 repositories for all Red Hat Enterprise Linux versions that clients use. If you have Capsule Servers, to upgrade them, enable the following repositories too: Red Hat Satellite Capsule 6.15 (for RHEL 8 x86_64) (RPMs) Red Hat Satellite Maintenance 6.15 (for RHEL 8 x86_64) (RPMs) Red Hat Enterprise Linux 8 (for x86_64 - BaseOS) (RPMs) Red Hat Enterprise Linux 8 (for x86_64 - AppStream) (RPMs) Note If the 6.15 repositories are not available, refresh the Red Hat Subscription Manifest. In the Satellite web UI, navigate to Content > Subscriptions , click Manage Manifest , then click Refresh . In the Satellite web UI, navigate to Content > Sync Status . Click the arrow to the product to view the available repositories. Select the repositories for 6.15. Note that Red Hat Satellite Client 6 does not have a 6.15 version. Choose Red Hat Satellite Client 6 instead. 
Click Synchronize Now . Important If an error occurs when you try to synchronize a repository, refresh the manifest. If the problem persists, raise a support request. Do not delete the manifest from the Customer Portal or in the Satellite web UI; this removes all the entitlements of your content hosts. If you use content views to control updates to the base operating system of Capsule Server, update those content views with new repositories, publish, and promote their updated versions. For more information, see Managing content views in Managing content . 2.4. Performing post-upgrade tasks Optional: If the default provisioning templates have been changed during the upgrade, recreate any templates cloned from the default templates. If the custom code is executed before and/or after the provisioning process, use custom provisioning snippets to avoid recreating cloned templates. For more information about configuring custom provisioning snippets, see Creating Custom Provisioning Snippets in Provisioning hosts . 2.5. Upgrading Capsule Servers This section describes how to upgrade Capsule Servers from 6.14 to 6.15. Before you begin You must upgrade Satellite Server before you can upgrade any Capsule Servers. Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Ensure the Red Hat Satellite Capsule 6.15 repository is enabled in Satellite Server and synchronized. Ensure that you synchronize the required repositories on Satellite Server. For more information, see Section 2.3, "Synchronizing the new repositories" . If you use content views to control updates to the base operating system of Capsule Server, update those content views with new repositories, publish, and promote their updated versions. For more information, see Managing content views in Managing content . Ensure the Capsule's base system is registered to the newly upgraded Satellite Server. Ensure the Capsule has the correct organization and location settings in the newly upgraded Satellite Server. Review and update your firewall configuration prior to upgrading your Capsule Server. For more information, see Preparing Your Environment for Capsule Installation in Installing Capsule Server . Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Upgrading Capsule Servers Create a backup. On a virtual machine, take a snapshot. On a physical machine, create a backup. For information on backups, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . Clean yum cache: Synchronize the satellite-capsule-6.15-for-rhel-8-x86_64-rpms repository in the Satellite Server. Publish and promote a new version of the content view with which the Capsule is registered. The rubygem-foreman_maintain is installed from the Satellite Maintenance repository or upgraded from the Satellite Maintenance repository if currently installed. 
Ensure Capsule has access to satellite-maintenance-6.15-for-rhel-8-x86_64-rpms and execute: On Capsule Server, verify that the foreman_url setting points to the Satellite FQDN: Check the available versions to confirm the version you want is listed: Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running you can see the logged messages in the /var/log/foreman-installer/capsule.log file to check if the process completed successfully. Use the health check option to determine if the system is ready for upgrade: Review the results and address any highlighted error conditions before performing the upgrade. Perform the upgrade: Determine if the system needs a reboot: If the command told you to reboot, then reboot the system: Optional: If you made manual edits to DNS or DHCP configuration files, check and restore any changes required to the DNS and DHCP configuration files using the backups made earlier. Optional: If you use custom repositories, ensure that you enable these custom repositories after the upgrade completes. Upgrading Capsule Servers using remote execution Create a backup or take a snapshot. For more information on backups, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . In the Satellite web UI, navigate to Monitor > Jobs . Click Run Job . From the Job category list, select Maintenance Operations . From the Job template list, select Capsule Upgrade Playbook . In the Search Query field, enter the host name of the Capsule. Ensure that Apply to 1 host is displayed in the Resolves to field. In the target_version field, enter the target version of the Capsule. In the whitelist_options field, enter the options. Select the schedule for the job execution in Schedule . In the Type of query section, click Static Query . 2.6. Upgrading the external database You can upgrade an external database from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 while upgrading Satellite from 6.14 to 6.15. Prerequisites Create a new Red Hat Enterprise Linux 8 based host for PostgreSQL server that follows the external database on Red Hat Enterprise Linux 8 documentation. For more information, see Using External Databases with Satellite . Procedure Create a backup. Restore the backup on the new server. If Satellite reaches the new database server via the old name, no further changes are required. Otherwise reconfigure Satellite to use the new name:
[ "satellite-maintain service stop", "satellite-maintain service start", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dhcp-managed=false", "subscription-manager repos --enable satellite-maintenance-6.15-for-rhel-8-x86_64-rpms", "dnf module enable satellite-maintenance:el8", "satellite-maintain upgrade list-versions", "satellite-maintain upgrade check --target-version 6.15", "satellite-maintain upgrade run --target-version 6.15", "dnf needs-restarting --reboothint", "reboot", "yum clean metadata", "satellite-maintain self-upgrade", "grep foreman_url /etc/foreman-proxy/settings.yml", "satellite-maintain upgrade list-versions", "satellite-maintain upgrade check --target-version 6.15", "satellite-maintain upgrade run --target-version 6.15", "dnf needs-restarting --reboothint", "reboot", "satellite-installer --foreman-db-host newpostgres.example.com --katello-candlepin-db-host newpostgres.example.com --foreman-proxy-content-pulpcore-postgresql-host newpostgres.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/upgrading_connected_red_hat_satellite_to_6.15/upgrading_satellite_upgrading-connected
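After the upgrade and any required reboot, it can be worth confirming that services came back cleanly before moving on to Capsule Servers or clients. A small sketch, assuming the satellite-maintain version in use provides the service and health subcommands:

# Confirm all Satellite services are running
satellite-maintain service status

# Run the general health checks (separate from the pre-upgrade check shown earlier)
satellite-maintain health check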
Chapter 4. Installing a cluster on OpenStack with Kuryr
Chapter 4. Installing a cluster on OpenStack with Kuryr Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. In OpenShift Container Platform version 4.14, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP) that uses Kuryr SDN. To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.14 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . 4.2. About Kuryr SDN Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. 
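Because the components above run as ordinary workloads in the openshift-kuryr namespace, they can be inspected with the usual client tooling once a cluster is up. A brief sketch, assuming you are logged in with oc and have sufficient privileges:

# List the kuryr-controller Deployment and the kuryr-cni DaemonSet
oc get deployment,daemonset -n openshift-kuryr

# Watch the individual pods if something looks unhealthy
oc get pods -n openshift-kuryr -o wide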
This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true: The RHOSP version is less than 16. The deployment uses UDP services, or a large number of TCP services on few hypervisors. or The ovn-octavia Octavia driver is disabled. The deployment uses a large number of TCP services on few hypervisors. 4.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 4.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr Resource Value Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type Ports 1500 - 1 needed per Pod Routers 1 Subnets 250 - 1 needed per Namespace/Project Networks 250 - 1 needed per Namespace/Project RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 250 - 1 needed per Service and per NetworkPolicy Security group rules 1000 Server groups 2 - plus 1 for each additional availability zone in each machine pool Load balancers 100 - 1 needed per Service Load balancer listeners 500 - 1 needed per Service-exposed port Load balancer pools 500 - 1 needed per Service-exposed port A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. 
If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 4.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 4.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 4.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... 
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. 4.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. You can configure your cluster to use the Octavia OVN driver after your RHOSP cloud is upgraded from version 13 to version 16. 4.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. 
RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Kuryr SDN does not support automatic unidling by a service. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. 4.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 4.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 4.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 4.3.8. 
Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 4.2. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 4.3. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. 
X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 4.3.8.1. Example load balancer configuration for clusters that are deployed with user-managed load balancers This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 4.1. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. 
The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 4.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 4.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). 
Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 4.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 4.8. 
Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 4.9. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 4.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note Kuryr installations default to HTTP proxies. Prerequisites For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter: USD ip route add <cluster_network_cidr> via <installer_subnet_gateway> The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.10.2. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. 
Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 4.10.3. Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OVN-Kubernetes network plugin, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType . This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 The cluster network plugin to install. The supported values are Kuryr , OVNKubernetes , and OpenShiftSDN . The default value is OVNKubernetes . 3 4 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 4.10.4. Installation configuration for a cluster on OpenStack with a user-managed load balancer The following example install-config.yaml file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer. apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 1 Regardless of which load balancer you use, the load balancer is deployed to this subnet. 2 The UserManaged value indicates that you are using an user-managed load balancer. 4.10.5. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. 
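If you are not sure whether an existing RHOSP network is a provider network, you can inspect its provider attributes with the RHOSP CLI. This is only a quick sketch of such a check; administrator credentials are typically required to see the provider fields, and the network name is a placeholder:
openstack network show <provider_network_name> -c provider:network_type -c provider:physical_network -c shared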
Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 4.10.5.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 4.10.5.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. 
Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 4.10.6. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 4.10.7. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. 
Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 4.11. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.12. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 4.12.1.
Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 4.12.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. 
IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 4.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 4.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 4.17. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
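If you verify a cluster after the certificate rotation window that is described earlier, some kubelet-related certificate signing requests (CSRs) might still be pending. The following standard oc commands are a minimal sketch for reviewing and approving them; the CSR name is a placeholder:
oc get csr
oc adm certificate approve <csr_name>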
[ "sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>", "(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml", "- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787", "(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml", "openstack loadbalancer provider list", "+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "openstack role add --user <user> --project <project> swiftoperator", "openstack network list --long -c ID -c Name -c \"Router Type\"", 
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #", "oc edit configmap -n openshift-config cloud-provider-config", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "ip route add <cluster_network_cidr> via <installer_subnet_gateway>", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-network-03-config.yml 1", "ls <installation_directory>/manifests/cluster-network-*", "cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml", "oc edit 
networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_openstack/installing-openstack-installer-kuryr
21.3. Configuring Log Files
21.3. Configuring Log Files For all types of log files, the log creation and log deletion policies have to be configured. The log creation policy sets when a new log file is started, and the log deletion policy sets when an old log file is deleted. 21.3.1. Enabling or Disabling Logs The access and error logging is enabled by default. However, audit and audit fail logging is disabled by default. Note Disabling the access logging can be useful in certain scenarios, because every 2000 accesses to the directory increases the log file by approximately 1 megabyte. However, before turning off access logging, consider that this information can help troubleshooting problems. 21.3.1.1. Enabling or Disabling Logging Using the Command Line Use the dsconf config replace command to modify the parameters in the cn=config subtree that control the Directory Server logging feature: Access log: nsslapd-accesslog-logging-enabled Error log: nsslapd-errorlog-logging-enabled Audit log: nsslapd-auditlog-logging-enabled Audit fail log: nsslapd-auditfaillog-logging-enabled For further details, see the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . For example, to enable audit logging, enter: 21.3.1.2. Enabling or Disabling Logging Using the Web Console To enable or disable logging in web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Server Settings menu, and select the log type you want to configure under the Logging entry. Enable or disable the logging feature for the selected log type. Optionally, set additional parameters to define, for example, a log rotation or log deletion policy. Click Save . 21.3.2. Configuring Plug-in-specific Logging For debugging, you can enable access and audit logging for operations a plug-ins executes. For details, see the nsslapd-logAccess and nsslapd-logAudit parameter in the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . 21.3.3. Disabling High-resolution Log Time Stamps Using the default settings, Directory Server logs entries with nanosecond precision: To disable high-resolution log time stamps: Note The option to disable high-resolution log time stamps is deprecated and will be removed in a future release. After disabling high-resolution log time stamps, Directory Server logs with second precision only: 21.3.4. Defining a Log File Rotation Policy To periodically archive the current log file and create a new one, set a log file rotation policy. You can update the settings in the cn=config subtree using the command line or the web console. You can set the following configuration parameters to control the log file rotation policy: Access mode The access mode sets the file permissions on newly created log files. Access log: nsslapd-accesslog-mode Error log: nsslapd-errorlog-mode Audit log: nsslapd-auditlog-mode Audit fail log: nsslapd-auditfaillog-mode Maximum number of logs Sets the maximum number of log files to keep. When the number of files is reached, Directory Server deletes the oldest log file before creating the new one. Access log: nsslapd-accesslog-maxlogsperdir Error log: nsslapd-errorlog-maxlogsperdir Audit log: nsslapd-auditlog-maxlogsperdir Audit fail log: nsslapd-auditfaillog-maxlogsperdir File size for each log Sets the maximum size of a log file in megabytes before it is rotated. 
Access log: nsslapd-accesslog-maxlogsize Error log: nsslapd-errorlog-maxlogsize Audit log: nsslapd-auditlog-maxlogsize Audit fail log: nsslapd-auditfaillog-maxlogsize Create a log every Sets the maximum age of a log file. nsslapd-accesslog-logrotationtime and nsslapd-accesslog-logrotationtimeunit nsslapd-errorlog-logrotationtime and nsslapd-errorlog-logrotationtimeunit nsslapd-auditlog-logrotationtime and nsslapd-auditlog-logrotationtimeunit nsslapd-auditfaillog-logrotationtime and nsslapd-auditfaillog-logrotationtimeunit Additionally, you can set the time when the log file is rotated using the following parameters: nsslapd-accesslog-logrotationsynchour and nsslapd-accesslog-logrotationsyncmin nsslapd-errorlog-logrotationsynchour and nsslapd-errorlog-logrotationsyncmin nsslapd-auditlog-logrotationsynchour and nsslapd-auditlog-logrotationsyncmin nsslapd-auditfaillog-logrotationsynchour and nsslapd-auditfaillog-logrotationsyncmin For details, see the parameter descriptions in the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . Each log file starts with a title, which identifies the server version, host name, and port, for ease of archiving or exchanging log files. For example: 21.3.4.1. Defining a Log File Rotation Policy Using the Command Line Use the dsconf config replace command to modify parameters controlling the Directory Server logging features. For example, for the error log, to set access mode 600 , to keep a maximum of 2 log files, and to rotate log files at a size of 100 MB or every 5 days , enter: 21.3.4.2. Defining a Log File Rotation Policy Using the Web Console See Section 21.3.1.2, "Enabling or Disabling Logging Using the Web Console" . 21.3.5. Defining a Log File Deletion Policy Directory Server automatically deletes old archived log files if you set a Deletion Policy . Note You can only set a log file deletion policy if you have a log file rotation policy set. Directory Server applies the deletion policy at the time of log rotation. You can set the following configuration parameters to control the log file deletion policy: Total log size If the size of all access, error, audit, or audit fail log files exceeds the configured value, the oldest log file is automatically deleted. Access log: nsslapd-accesslog-logmaxdiskspace Error log: nsslapd-errorlog-logmaxdiskspace Audit log: nsslapd-auditlog-logmaxdiskspace Audit fail log: nsslapd-auditfaillog-logmaxdiskspace Free disk space is less than When the free disk space reaches this value, the oldest archived log file is automatically deleted. Access log: nsslapd-accesslog-logminfreediskspace Error log: nsslapd-errorlog-logminfreediskspace Audit log: nsslapd-auditlog-logminfreediskspace Audit fail log: nsslapd-auditfaillog-logminfreediskspace When a file is older than a specified time When a log file is older than the configured time, it is automatically deleted. Access log: nsslapd-accesslog-logexpirationtime and nsslapd-accesslog-logexpirationtimeunit Error log: nsslapd-errorlog-logexpirationtime and nsslapd-errorlog-logexpirationtimeunit Audit log: nsslapd-auditlog-logexpirationtime and nsslapd-auditlog-logexpirationtimeunit Audit fail log: nsslapd-auditfaillog-logexpirationtime and nsslapd-auditfaillog-logexpirationtimeunit For further details, see the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . 21.3.5.1.
Configuring a Log Deletion Policy Using the Command Line Use the dsconf config replace command to modify parameters controlling the Directory Server logging features. For example, to auto-delete the oldest access log file if the total size of all access log files exceeds 500 MB, run: 21.3.5.2. Configuring a Log Deletion Policy Using the Web Console See Section 21.3.1.2, "Enabling or Disabling Logging Using the Web Console" . 21.3.6. Manual Log File Rotation Directory Server supports automatic log file rotation for all of its logs. However, it is possible to rotate log files manually if there are no automatic log file creation or deletion policies configured. By default, access, error, audit, and audit fail log files can be found in the following location: To rotate log files manually: Stop the instance. Move or rename the log file being rotated so that the old log file is available for future reference. Start the instance: 21.3.7. Configuring the Log Levels Both the access and the error log can record different amounts of information, depending on the log level that is set. You can set the following configuration parameters to control the log levels for the following logs: Access log: nsslapd-accesslog-level Error log: nsslapd-errorlog-level For further details and a list of the supported log levels, see the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . Note Changing the log level from the default can cause the log file to grow very rapidly. Red Hat recommends that you do not change the default values unless you are asked to do so by Red Hat technical support. 21.3.7.1. Configuring the Log Levels Using the Command Line Use the dsconf config replace command to set the log level. For example, to enable search filter logging ( 32 ) and config file processing ( 64 ), set the nsslapd-errorlog-level parameter to 96 (32 + 64): For example, to enable internal access operations logging ( 4 ) and logging of connections, operations, and results ( 256 ), set the nsslapd-accesslog-level parameter to 260 (4 + 256): 21.3.7.2. Configuring the Log Levels Using the Web Console To configure the access and error log level using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. To configure: The access log level: Open the Server Settings Logging Access Log menu. Select the log levels in the Access Logging Levels section. For example: The error log level: Open the Server Settings Logging Error Log menu. Select the log levels in the Error Logging Levels section. For example: Click Save . 21.3.7.3. Logging Internal Operations Several operations cause additional internal operations in Directory Server. For example, if a user deletes an entry, the server runs several internal operations, such as locating the entry and updating groups in which the user was a member. This section explains the format of internal operations log entries. For details about setting the log level, see Section 21.3.7, "Configuring the Log Levels" . Directory Server provides the following formats of internal operations logging: Server-initiated Internal Operations Example of an internal operation log entry that was initiated by the server: For log entries of this type: The conn field is set to Internal followed by (0) . The op field is set to 0(0)( nesting_level ) . For server-initiated internal operations, both the operation ID and internal operation ID are always 0 .
For log entries that are not nested, the nesting level is 0 . Client-initiated Internal Operations Example of an internal operation log entry that was initiated by a client: For log entries of this type: The conn field is set to the client connection ID, followed by the string (Internal) . The op field contains the operation ID, followed by ( internal_operation_ID )( nesting_level ) . The internal operation ID can vary, and log entries that are not nested, the nesting level is 0 . If the nsslapd-plugin-logging parameter is set to on and internal operations logging is enabled, Directory Server additionally logs internal operations of plug-ins. Example 21.1. Internal Operations Log Entries with Plug-in Logging Enabled If you delete the uid=user,dc=example,dc=com entry, and the Referential Integrity plug-in automatically deletes this entry from the example group, the server logs: 21.3.8. Disabling Access Log Buffering for Debugging For debugging purposes, you can disable access log buffering, which is enabled by default. With access log buffering disabled, Directory Server writes log entries directly to the disk. Important Do not disable access logging in a normal operating environment. When you disable the buffering, Directory Server performance decreases, especially under heavy load. 21.3.8.1. Disabling Access Log Buffering Using the Command Line To disable access log buffering using the command line: Set the nsslapd-accesslog-logbuffering parameter to off : 21.3.8.2. Disabling Access Log Buffering Using the Web Console To disable access log buffering using the Web Console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open Server Settings Logging Access Log . Select Disable Access Log Buffering . Click Save Configuration .
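If you want to re-enable access log buffering from the command line after debugging, you can set the same parameter back to on . This sketch follows the dsconf pattern that is used throughout this section:
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-accesslog-logbuffering=on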
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-auditlog-logging-enabled=on", "[27/May/2016:17:52:04.754335904 -0500] schemareload - Schema validation passed. [27/May/2016:17:52:04.894255328 -0500] schemareload - Schema reload task finished.", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-logging-hr-timestamps-enabled=off", "[27/May/2016:17:52:04 -0500] schemareload - Schema validation passed. [27/May/2016:17:52:04 -0500] schemareload - Schema reload task finished.", "389-Directory/1.4.0.11 B2018.197.1151 server.example.com : 389 (/etc/dirsrv/slapd- instance )", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-errorlog-mode=600 nsslapd-errorlog-maxlogsperdir=2 nsslapd-errorlog-maxlogsize=100 nsslapd-errorlog-logrotationtime=5 nsslapd-errorlog-logrotationtimeunit=day", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-accesslog-logmaxdiskspace=500", "/var/log/dirsrv/slapd- instance", "dsctl instance_name stop", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-errorlog-level=96", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-accesslog-level=260", "[14/Jan/2021:09:45:25.814158882 -0400] conn=Internal ( 0 ) op=0( 0 )( 0 ) MOD dn=\"cn=uniqueid generator,cn=config\" [14/Jan/2021:09:45:25.822103183 -0400] conn=Internal ( 0 ) op=0( 0 )( 0 ) RESULT err=0 tag=48 nentries=0 etime=0.0007968796", "[14/Jan/2021:09:45:14.382918693 -0400] conn=5 (Internal) op= 15 ( 1 )( 0 ) SRCH base=\"cn=config,cn=userroot,cn=ldbm database,cn=plugins,cn=config\" scope=1 filter=\"objectclass=vlvsearch\" attrs=ALL [14/Jan/2021:09:45:14.383191380 -0400] conn=5 (Internal) op= 15 ( 1 )( 0 ) RESULT err=0 tag=48 nentries=0 etime=0.0000295419 [14/Jan/2021:09:45:14.383216269 -0400] conn=5 (Internal) op= 15 ( 2 )( 0 ) SRCH base=\"cn=config,cn=example,cn=ldbm database,cn=plugins,cn=config\" scope=1 filter=\"objectclass=vlvsearch\" attrs=ALL [14/Jan/2021:09:45:14.383449419 -0400] conn=5 (Internal) op= 15 ( 2 )( 0 ) RESULT err=0", "[ time_stamp ] conn=2 op=37 DEL dn=\"uid=user,dc=example,dc=com\" [ time_stamp ] conn=2 (Internal) op=37(1) SRCH base=\"uid=user,dc=example,dc=com\" scope=0 filter=\"(|(objectclass=*)(objectclass=ldapsubentry))\" attrs=ALL [ time_stamp ] conn=2 (Internal) op=37(1) RESULT err=0 tag=48 nentries=1 etime=0.0000129148 [ time_stamp ] conn=2 (Internal) op=37(2) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(member=uid=user,dc=example,dc=com)\" attrs=\"member\" [ time_stamp ] conn=2 (Internal) op=37(2) RESULT err=0 tag=48 nentries=0 etime=0.0000123162 [ time_stamp ] conn=2 (Internal) op=37(3) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(uniquemember=uid=user,dc=example,dc=com)\" attrs=\"uniquemember\" [ time_stamp ] conn=2 (Internal) op=37(3) RESULT err=0 tag=48 nentries=1 etime=0.0000128104 [ time_stamp ] conn=2 (Internal) op=37(4) MOD dn=\"cn=example,dc=example,dc=com\" [ time_stamp ] conn=2 (Internal) op=37(5) SRCH base=\"cn=example,dc=example,dc=com\" scope=0 filter=\"(|(objectclass=*)(objectclass=ldapsubentry))\" attrs=ALL [ time_stamp ] conn=2 (Internal) op=37(5) RESULT err=0 tag=48 nentries=1 etime=0.0000130685 [ time_stamp ] conn=2 (Internal) op=37(4) RESULT err=0 tag=48 nentries=0 etime=0.0005217545 [ time_stamp ] conn=2 (Internal) op=37(6) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(owner=uid=user,dc=example,dc=com)\" attrs=\"owner\" [ 
time_stamp ] conn=2 (Internal) op=37(6) RESULT err=0 tag=48 nentries=0 etime=0.0000137656 [ time_stamp ] conn=2 (Internal) op=37(7) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(seeAlso=uid=user,dc=example,dc=com)\" attrs=\"seeAlso\" [ time_stamp ] conn=2 (Internal) op=37(7) RESULT err=0 tag=48 nentries=0 etime=0.0000066978 [ time_stamp ] conn=2 (Internal) op=37(8) SRCH base=\"o=example\" scope=2 filter=\"(member=uid=user,dc=example,dc=com)\" attrs=\"member\" [ time_stamp ] conn=2 (Internal) op=37(8) RESULT err=0 tag=48 nentries=0 etime=0.0000063316 [ time_stamp ] conn=2 (Internal) op=37(9) SRCH base=\"o=example\" scope=2 filter=\"(uniquemember=uid=user,dc=example,dc=com)\" attrs=\"uniquemember\" [ time_stamp ] conn=2 (Internal) op=37(9) RESULT err=0 tag=48 nentries=0 etime=0.0000048634 [ time_stamp ] conn=2 (Internal) op=37(10) SRCH base=\"o=example\" scope=2 filter=\"(owner=uid=user,dc=example,dc=com)\" attrs=\"owner\" [ time_stamp ] conn=2 (Internal) op=37(10) RESULT err=0 tag=48 nentries=0 etime=0.0000048854 [ time_stamp ] conn=2 (Internal) op=37(11) SRCH base=\"o=example\" scope=2 filter=\"(seeAlso=uid=user,dc=example,dc=com)\" attrs=\"seeAlso\" [ time_stamp ] conn=2 (Internal) op=37(11) RESULT err=0 tag=48 nentries=0 etime=0.0000046522 [ time_stamp ] conn=2 op=37 RESULT err=0 tag=107 nentries=0 etime=0.0010297858", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-accesslog-logbuffering=off" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Configuring_Logs
Chapter 15. Image-based upgrade for single-node OpenShift clusters
Chapter 15. Image-based upgrade for single-node OpenShift clusters 15.1. Understanding the image-based upgrade for single-node OpenShift clusters From OpenShift Container Platform 4.14.13, the Lifecycle Agent provides you with an alternative way to upgrade the platform version of a single-node OpenShift cluster. The image-based upgrade is faster than the standard upgrade method and allows you to directly upgrade from OpenShift Container Platform <4.y> to <4.y+2>, and <4.y.z> to <4.y.z+n>. This upgrade method utilizes a generated OCI image from a dedicated seed cluster that is installed on the target single-node OpenShift cluster as a new ostree stateroot. A seed cluster is a single-node OpenShift cluster deployed with the target OpenShift Container Platform version, Day 2 Operators, and configurations that are common to all target clusters. You can use the seed image, which is generated from the seed cluster, to upgrade the platform version on any single-node OpenShift cluster that has the same combination of hardware, Day 2 Operators, and cluster configuration as the seed cluster. Important The image-based upgrade uses custom images that are specific to the hardware platform that the clusters are running on. Each different hardware platform requires a separate seed image. The Lifecycle Agent uses two custom resources (CRs) on the participating clusters to orchestrate the upgrade: On the seed cluster, the SeedGenerator CR allows for the seed image generation. This CR specifies the repository to push the seed image to. On the target cluster, the ImageBasedUpgrade CR specifies the seed image for the upgrade of the target cluster and the backup configurations for your workloads. Example SeedGenerator CR apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage spec: seedImage: <seed_image> Example ImageBasedUpgrade CR apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle 1 seedImageRef: 2 version: <target_version> image: <seed_container_image> pullSecretRef: name: <seed_pull_secret> autoRollbackOnFailure: {} # initMonitorTimeoutSeconds: 1800 3 extraManifests: 4 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 5 - name: oadp-cm-example namespace: openshift-adp 1 Defines the desired stage for the ImageBasedUpgrade CR. The value can be Idle , Prep , Upgrade , or Rollback . 2 Defines the target platform version, the seed image to be used, and the secret required to access the image. 3 (Optional) Specify the time frame in seconds to roll back when the upgrade does not complete within that time frame after the first reboot. If not defined or set to 0 , the default value of 1800 seconds (30 minutes) is used. 4 (Optional) Specify the list of ConfigMap resources that contain your custom catalog sources to retain after the upgrade, and your extra manifests to apply to the target cluster that are not part of the seed image. 5 Specify the list of ConfigMap resources that contain the OADP Backup and Restore CRs. 15.1.1. Stages of the image-based upgrade After generating the seed image on the seed cluster, you can move through the stages on the target cluster by setting the spec.stage field to one of the following values in the ImageBasedUpgrade CR: Idle Prep Upgrade Rollback (Optional) Figure 15.1. Stages of the image-based upgrade 15.1.1.1. Idle stage The Lifecycle Agent creates an ImageBasedUpgrade CR set to stage: Idle when the Operator is first deployed. This is the default stage. 
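As a minimal sketch, assuming the Lifecycle Agent is installed on the target cluster and the CR is named upgrade as in the example above, you can inspect the current stage and request a transition with standard oc commands; the detailed workflow for each stage is described in the sections that follow:
oc get imagebasedupgrades.lca.openshift.io upgrade -o yaml
oc patch imagebasedupgrades.lca.openshift.io upgrade --type=merge -p '{"spec": {"stage": "Prep"}}'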
There is no ongoing upgrade and the cluster is ready to move to the Prep stage. Figure 15.2. Transition from Idle stage You also move to the Idle stage to do one of the following steps: Finalize a successful upgrade Finalize a rollback Cancel an ongoing upgrade until the pre-pivot phase in the Upgrade stage Moving to the Idle stage ensures that the Lifecycle Agent cleans up resources, so that the cluster is ready for upgrades again. Figure 15.3. Transitions to Idle stage Important If you are using RHACM and you cancel an upgrade, you must remove the import.open-cluster-management.io/disable-auto-import annotation from the target managed cluster to re-enable the automatic import of the cluster. 15.1.1.2. Prep stage Note You can complete this stage before a scheduled maintenance window. For the Prep stage, you specify the following upgrade details in the ImageBasedUpgrade CR: seed image to use resources to back up extra manifests to apply and custom catalog sources to retain after the upgrade, if any Then, based on what you specify, the Lifecycle Agent prepares for the upgrade without impacting the current running version. During this stage, the Lifecycle Agent ensures that the target cluster is ready to proceed to the Upgrade stage by checking if it meets certain conditions. The Operator pulls the seed image to the target cluster with additional container images specified in the seed image. The Lifecycle Agent checks if there is enough space on the container storage disk and, if necessary, the Operator deletes unpinned images until the disk usage is below the specified threshold. For more information about how to configure or disable the cleanup of the container storage disk, see "Configuring the automatic image cleanup of the container storage disk". You also prepare backup resources with the OADP Operator's Backup and Restore CRs. These CRs are used in the Upgrade stage to reconfigure the cluster, register the cluster with RHACM, and restore application artifacts. In addition to the OADP Operator, the Lifecycle Agent uses the ostree versioning system to create a backup, which allows complete cluster reconfiguration after both upgrade and rollback. After the Prep stage finishes, you can cancel the upgrade process by moving to the Idle stage, or you can start the upgrade by moving to the Upgrade stage in the ImageBasedUpgrade CR. If you cancel the upgrade, the Operator performs cleanup operations. Figure 15.4. Transition from Prep stage 15.1.1.3. Upgrade stage The Upgrade stage consists of two phases: pre-pivot Just before pivoting to the new stateroot, the Lifecycle Agent collects the required cluster-specific artifacts and stores them in the new stateroot. The backups of your cluster resources specified in the Prep stage are created on a compatible object storage solution. The Lifecycle Agent exports CRs specified in the extraManifests field in the ImageBasedUpgrade CR or the CRs described in the ZTP policies that are bound to the target cluster. After the pre-pivot phase has completed, the Lifecycle Agent sets the new stateroot deployment as the default boot entry and reboots the node. post-pivot After booting from the new stateroot, the Lifecycle Agent regenerates the seed image's cluster cryptography. This ensures that each single-node OpenShift cluster upgraded with the same seed image has unique and valid cryptographic objects. The Operator then reconfigures the cluster by applying cluster-specific artifacts that were collected in the pre-pivot phase.
The Operator applies all saved CRs, and restores the backups. After the upgrade has completed and you are satisfied with the changes, you can finalize the upgrade by moving to the Idle stage. Important When you finalize the upgrade, you cannot roll back to the original release. Figure 15.5. Transitions from Upgrade stage If you want to cancel the upgrade, you can do so until the pre-pivot phase of the Upgrade stage. If you encounter issues after the upgrade, you can move to the Rollback stage for a manual rollback. 15.1.1.4. Rollback stage The Rollback stage can be initiated manually or automatically upon failure. During the Rollback stage, the Lifecycle Agent sets the original ostree stateroot deployment as default. Then, the node reboots with the release of OpenShift Container Platform and application configurations. Warning If you move to the Idle stage after a rollback, the Lifecycle Agent cleans up resources that can be used to troubleshoot a failed upgrade. The Lifecycle Agent initiates an automatic rollback if the upgrade does not complete within a specified time limit. For more information about the automatic rollback, see the "Moving to the Rollback stage with Lifecycle Agent" or "Moving to the Rollback stage with Lifecycle Agent and GitOps ZTP" sections. Figure 15.6. Transition from Rollback stage Additional resources Configuring the automatic image cleanup of the container storage disk Performing an image-based upgrade for single-node OpenShift clusters with Lifecycle Agent Performing an image-based upgrade for single-node OpenShift clusters using GitOps ZTP 15.1.2. Guidelines for the image-based upgrade For a successful image-based upgrade, your deployments must meet certain requirements. There are different deployment methods in which you can perform the image-based upgrade: GitOps ZTP You use the GitOps Zero Touch Provisioning (ZTP) to deploy and configure your clusters. Non-GitOps You manually deploy and configure your clusters. You can perform an image-based upgrade in disconnected environments. For more information about how to mirror images for a disconnected environment, see "Mirroring images for a disconnected installation". Additional resources Mirroring images for a disconnected installation 15.1.2.1. Minimum software version of components Depending on your deployment method, the image-based upgrade requires the following minimum software versions. Table 15.1. Minimum software version of components Component Software version Required Lifecycle Agent 4.16 Yes OADP Operator 1.4.1 Yes Managed cluster version 4.14.13 Yes Hub cluster version 4.16 No RHACM 2.10.2 No GitOps ZTP plugin 4.16 Only for GitOps ZTP deployment method Red Hat OpenShift GitOps 1.12 Only for GitOps ZTP deployment method Topology Aware Lifecycle Manager (TALM) 4.16 Only for GitOps ZTP deployment method Local Storage Operator [1] 4.14 Yes Logical Volume Manager (LVM) Storage [1] 4.14.2 Yes The persistent storage must be provided by either the LVM Storage or the Local Storage Operator, not both. 15.1.2.2. Hub cluster guidelines If you are using Red Hat Advanced Cluster Management (RHACM), your hub cluster needs to meet the following conditions: To avoid including any RHACM resources in your seed image, you need to disable all optional RHACM add-ons before generating the seed image. Your hub cluster must be upgraded to at least the target version before performing an image-based upgrade on a target single-node OpenShift cluster. 15.1.2.3. 
Seed image guidelines The seed image targets a set of single-node OpenShift clusters with the same hardware and similar configuration. This means that the seed cluster must match the configuration of the target clusters for the following items: CPU topology Number of CPU cores Tuned performance configuration, such as number of reserved CPUs MachineConfig resources for the target cluster IP version Note Dual-stack networking is not supported in this release. Set of Day 2 Operators, including the Lifecycle Agent and the OADP Operator Disconnected registry FIPS configuration The following configurations only have to partially match on the participating clusters: If the target cluster has a proxy configuration, the seed cluster must have a proxy configuration too but the configuration does not have to be the same. A dedicated partition on the primary disk for container storage is required on all participating clusters. However, the size and start of the partition does not have to be the same. Only the spec.config.storage.disks.partitions.label: varlibcontainers label in the MachineConfig CR must match on both the seed and target clusters. For more information about how to create the disk partition, see "Configuring a shared container partition between ostree stateroots" or "Configuring a shared container partition between ostree stateroots when using GitOps ZTP". For more information about what to include in the seed image, see "Seed image configuration" and "Seed image configuration using the RAN DU profile". Additional resources Configuring a shared container partition between ostree stateroots Configuring a shared container partition between ostree stateroots when using GitOps ZTP Seed image configuration 15.1.2.4. OADP backup and restore guidelines With the OADP Operator, you can back up and restore your applications on your target clusters by using Backup and Restore CRs wrapped in ConfigMap objects. The application must work on the current and the target OpenShift Container Platform versions so that they can be restored after the upgrade. The backups must include resources that were initially created. The following resources must be excluded from the backup: pods endpoints controllerrevision podmetrics packagemanifest replicaset localvolume , if using Local Storage Operator (LSO) There are two local storage implementations for single-node OpenShift: Local Storage Operator (LSO) The Lifecycle Agent automatically backs up and restores the required artifacts, including localvolume resources and their associated StorageClass resources. You must exclude the persistentvolumes resource in the application Backup CR. LVM Storage You must create the Backup and Restore CRs for LVM Storage artifacts. You must include the persistentVolumes resource in the application Backup CR. For the image-based upgrade, only one Operator is supported on a given target cluster. Important For both Operators, you must not apply the Operator CRs as extra manifests through the ImageBasedUpgrade CR. The persistent volume contents are preserved and used after the pivot. When you are configuring the DataProtectionApplication CR, you must ensure that the .spec.configuration.restic.enable is set to false for an image-based upgrade. This disables Container Storage Interface integration. 15.1.2.4.1. lca.openshift.io/apply-wave guidelines The lca.openshift.io/apply-wave annotation determines the apply order of Backup or Restore CRs. The value of the annotation must be a string number. 
If you define the lca.openshift.io/apply-wave annotation in the Backup or Restore CRs, they are applied in increasing order based on the annotation value. If you do not define the annotation, they are applied together. The lca.openshift.io/apply-wave annotation must be numerically lower in your platform Restore CRs, for example RHACM and LVM Storage artifacts, than that of the application. This way, the platform artifacts are restored before your applications. If your application includes cluster-scoped resources, you must create separate Backup and Restore CRs to scope the backup to the specific cluster-scoped resources created by the application. The Restore CR for the cluster-scoped resources must be restored before the remaining application Restore CR(s). 15.1.2.4.2. lca.openshift.io/apply-label guidelines You can back up specific resources exclusively with the lca.openshift.io/apply-label annotation. Based on which resources you define in the annotation, the Lifecycle Agent applies the lca.openshift.io/backup: <backup_name> label and adds the labelSelector.matchLabels.lca.openshift.io/backup: <backup_name> label selector to the specified resources when creating the Backup CRs. To use the lca.openshift.io/apply-label annotation for backing up specific resources, the resources listed in the annotation must also be included in the spec section. If the lca.openshift.io/apply-label annotation is used in the Backup CR, only the resources listed in the annotation are backed up, even if other resource types are specified in the spec section or not. Example CR apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet namespace: openshift-adp annotations: lca.openshift.io/apply-label: rbac.authorization.k8s.io/v1/clusterroles/klusterlet,apps/v1/deployments/open-cluster-management-agent/klusterlet 1 labels: velero.io/storage-location: default spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - clusterroles includedNamespaceScopedResources: - deployments 1 The value must be a list of comma-separated objects in group/version/resource/name format for cluster-scoped resources or group/version/resource/namespace/name format for namespace-scoped resources, and it must be attached to the related Backup CR. 15.1.2.5. Extra manifest guidelines The Lifecycle Agent uses extra manifests to restore your target clusters after rebooting with the new stateroot deployment and before restoring application artifacts. Different deployment methods require a different way to apply the extra manifests: GitOps ZTP You use the lca.openshift.io/target-ocp-version: <target_ocp_version> label to mark the extra manifests that the Lifecycle Agent must extract and apply after the pivot. You can specify the number of manifests labeled with lca.openshift.io/target-ocp-version by using the lca.openshift.io/target-ocp-version-manifest-count annotation in the ImageBasedUpgrade CR. If specified, the Lifecycle Agent verifies that the number of manifests extracted from policies matches the number provided in the annotation during the prep and upgrade stages. Example for the lca.openshift.io/target-ocp-version-manifest-count annotation apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: annotations: lca.openshift.io/target-ocp-version-manifest-count: "5" name: upgrade Non-Gitops You mark your extra manifests with the lca.openshift.io/apply-wave annotation to determine the apply order. 
The labeled extra manifests are wrapped in ConfigMap objects and referenced in the ImageBasedUpgrade CR that the Lifecycle Agent uses after the pivot. If the target cluster uses custom catalog sources, you must include them as extra manifests that point to the correct release version. Important You cannot apply the following items as extra manifests: MachineConfig objects OLM Operator subscriptions Additional resources Performing an image-based upgrade for single-node OpenShift clusters with Lifecycle Agent Preparing the hub cluster for ZTP Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent Creating ConfigMap objects for the image-based upgrade with GitOps ZTP About installing OADP 15.2. Preparing for an image-based upgrade for single-node OpenShift clusters 15.2.1. Configuring a shared container partition for the image-based upgrade Your single-node OpenShift clusters need to have a shared /var/lib/containers partition for the image-based upgrade. You can do this at install time. 15.2.1.1. Configuring a shared container partition between ostree stateroots Apply a MachineConfig to both the seed and the target clusters during installation time to create a separate partition and share the /var/lib/containers partition between the two ostree stateroots that will be used during the upgrade process. Important You must complete this procedure at installation time. Procedure Apply a MachineConfig to create a separate partition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-containers-partitioned spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 partitions: - label: var-lib-containers startMiB: <start_of_partition> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var-lib-containers format: xfs mountOptions: - defaults - prjquota path: /var/lib/containers wipeFilesystem: true systemd: units: - contents: |- # Generated by Butane [Unit] Before=local-fs.target Requires=systemd-fsck@dev-disk-by\x2dpartlabel-var\x2dlib\x2dcontainers.service After=systemd-fsck@dev-disk-by\x2dpartlabel-var\x2dlib\x2dcontainers.service [Mount] Where=/var/lib/containers What=/dev/disk/by-partlabel/var-lib-containers Type=xfs Options=defaults,prjquota [Install] RequiredBy=local-fs.target enabled: true name: var-lib-containers.mount 1 Specify the root disk. 2 Specify the start of the partition in MiB. If the value is too small, the installation will fail. 3 Specify a minimum size for the partition of 500 GB to ensure adequate disk space for precached images. If the value is too small, the deployments after installation will fail. 15.2.1.2. Configuring a shared container directory between ostree stateroots when using GitOps ZTP When you are using the GitOps Zero Touch Provisioning (ZTP) workflow, you do the following procedure to create a separate disk partition on both the seed and target cluster and to share the /var/lib/containers partition. Important You must complete this procedure at installation time. Prerequisites Install Butane. For more information, see "Installing Butane". 
Procedure Create the storage.bu file: variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota 1 Specify the root disk. 2 Specify the start of the partition in MiB. If the value is too small, the installation will fail. 3 Specify a minimum size for the partition of 500 GB to ensure adequate disk space for precached images. If the value is too small, the deployments after installation will fail. Convert the storage.bu to an Ignition file by running the following command: USD butane storage.bu Example output {"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}} Copy the output into the .spec.clusters.nodes.ignitionConfigOverride field in the SiteConfig CR: [...] spec: clusters: - nodes: - hostName: <name> ignitionConfigOverride: '{"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}}' [...] 
Verification During or after installation, verify on the hub cluster that the BareMetalHost object shows the annotation by running the following command: USD oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations["bmac.agent-install.openshift.io/ignition-config-overrides"]' Example output "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}" After installation, check the single-node OpenShift disk status by running the following commands: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers # df -h Example output Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000 Additional resources Installing Butane 15.2.2. Installing Operators for the image-based upgrade Prepare your clusters for the upgrade by installing the Lifecycle Agent and the OADP Operator. To install the OADP Operator with the non-GitOps method, see "Installing the OADP Operator". Additional resources Installing the OADP Operator About backup and snapshot locations and their secrets Creating a Backup CR Creating a Restore CR 15.2.2.1. Installing the Lifecycle Agent by using the CLI You can use the OpenShift CLI ( oc ) to install the Lifecycle Agent. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create a Namespace object YAML file for the Lifecycle Agent, for example lcao-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management Create the Namespace CR by running the following command: USD oc create -f lcao-namespace.yaml Create an OperatorGroup object YAML file for the Lifecycle Agent, for example lcao-operatorgroup.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-lifecycle-agent namespace: openshift-lifecycle-agent spec: targetNamespaces: - openshift-lifecycle-agent Create the OperatorGroup CR by running the following command: USD oc create -f lcao-operatorgroup.yaml Create a Subscription CR, for example, lcao-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-lifecycle-agent-subscription namespace: openshift-lifecycle-agent spec: channel: "stable" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f lcao-subscription.yaml Verification To verify that the installation succeeded, inspect the CSV resource by running the following command: USD oc get csv -n openshift-lifecycle-agent Example output NAME DISPLAY VERSION REPLACES PHASE lifecycle-agent.v4.16.0 Openshift Lifecycle Agent 4.16.0 Succeeded Verify that the Lifecycle Agent is up and running by running the following command: USD oc get deploy -n openshift-lifecycle-agent Example output NAME READY UP-TO-DATE AVAILABLE AGE lifecycle-agent-controller-manager 1/1 1 1 14s 15.2.2.2. Installing the Lifecycle Agent by using the web console You can use the OpenShift Container Platform web console to install the Lifecycle Agent. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Lifecycle Agent from the list of available Operators, and then click Install . On the Install Operator page, under A specific namespace on the cluster select openshift-lifecycle-agent . Click Install . Verification To confirm that the installation is successful: Click Operators Installed Operators . Ensure that the Lifecycle Agent is listed in the openshift-lifecycle-agent project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator is not installed successfully: Click Operators Installed Operators , and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Click Workloads Pods , and check the logs for pods in the openshift-lifecycle-agent project. 15.2.2.3. Installing the Lifecycle Agent with GitOps ZTP Install the Lifecycle Agent with GitOps Zero Touch Provisioning (ZTP) to do an image-based upgrade. 
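This procedure, and the OADP procedure that follows it, copy source CRs that are shipped in the ztp-site-generate container image into your Git repository. A minimal sketch of one way to extract that content locally before copying the files, assuming the 4.16 release used elsewhere in this chapter; adjust the registry path and tag to your environment and verify the extracted file names against your image version:
$ mkdir -p ./out
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./out
The extracted tree is expected to contain a source-crs directory that holds the subscription CRs shown below.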
Procedure Extract the following CRs from the ztp-site-generate container image and push them to the source-cr directory: Example LcaSubscriptionNS.yaml file apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management ran.openshift.io/ztp-deploy-wave: "2" labels: kubernetes.io/metadata.name: openshift-lifecycle-agent Example LcaSubscriptionOperGroup.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lifecycle-agent-operatorgroup namespace: openshift-lifecycle-agent annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: targetNamespaces: - openshift-lifecycle-agent Example LcaSubscription.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: channel: "stable" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown Example directory structure ├── kustomization.yaml ├── sno │ ├── example-cnf.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ └── ns.yaml ├── source-crs │ ├── LcaSubscriptionNS.yaml │ ├── LcaSubscriptionOperGroup.yaml │ ├── LcaSubscription.yaml Add the CRs to your common PolicyGenTemplate : apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-common-latest" namespace: "ztp-common" spec: bindingRules: common: "true" du-profile: "latest" sourceFiles: - fileName: LcaSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: LcaSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: LcaSubscription.yaml policyName: "subscriptions-policy" [...] 15.2.2.4. Installing and configuring the OADP Operator with GitOps ZTP Install and configure the OADP Operator with GitOps ZTP before starting the upgrade. 
Procedure Extract the following CRs from the ztp-site-generate container image and push them to the source-cr directory: Example OadpSubscriptionNS.yaml file apiVersion: v1 kind: Namespace metadata: name: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "2" labels: kubernetes.io/metadata.name: openshift-adp Example OadpSubscriptionOperGroup.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: redhat-oadp-operator namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: targetNamespaces: - openshift-adp Example OadpSubscription.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: redhat-oadp-operator namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: channel: stable-1.4 name: redhat-oadp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown Example OadpOperatorStatus.yaml file apiVersion: operators.coreos.com/v1 kind: Operator metadata: name: redhat-oadp-operator.openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "2" status: components: refs: - kind: Subscription namespace: openshift-adp conditions: - type: CatalogSourcesUnhealthy status: "False" - kind: InstallPlan namespace: openshift-adp conditions: - type: Installed status: "True" - kind: ClusterServiceVersion namespace: openshift-adp conditions: - type: Succeeded status: "True" reason: InstallSucceeded Example directory structure ├── kustomization.yaml ├── sno │ ├── example-cnf.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ └── ns.yaml ├── source-crs │ ├── OadpSubscriptionNS.yaml │ ├── OadpSubscriptionOperGroup.yaml │ ├── OadpSubscription.yaml │ ├── OadpOperatorStatus.yaml Add the CRs to your common PolicyGenTemplate : apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-common-latest" namespace: "ztp-common" spec: bindingRules: common: "true" du-profile: "latest" sourceFiles: - fileName: OadpSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: OadpSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: OadpSubscription.yaml policyName: "subscriptions-policy" - fileName: OadpOperatorStatus.yaml policyName: "subscriptions-policy" [...] Create the DataProtectionApplication CR and the S3 secret only for the target cluster: Extract the following CRs from the ztp-site-generate container image and push them to the source-cr directory: Example DataProtectionApplication.yaml file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dataprotectionapplication namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: configuration: restic: enable: false 1 velero: defaultPlugins: - aws - openshift resourceTimeout: 10m backupLocations: - velero: config: profile: "default" region: minio s3Url: USDurl insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: USDbucketName 2 prefix: USDprefixName 3 status: conditions: - reason: Complete status: "True" type: Reconciled 1 The spec.configuration.restic.enable field must be set to false for an image-based upgrade because persistent volume contents are retained and reused after the upgrade. 2 3 The bucket defines the bucket name that is created in S3 backend. 
The prefix defines the name of the subdirectory that will be automatically created in the bucket. The combination of bucket and prefix must be unique for each target cluster to avoid interference between them. To ensure a unique storage directory for each target cluster, you can use the RHACM hub template function, for example, prefix: {{hub .ManagedClusterName hub}} . Example OadpSecret.yaml file apiVersion: v1 kind: Secret metadata: name: cloud-credentials namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "100" type: Opaque Example OadpBackupStorageLocationStatus.yaml file apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "100" status: phase: Available The OadpBackupStorageLocationStatus.yaml CR verifies the availability of backup storage locations created by OADP. Add the CRs to your site PolicyGenTemplate with overrides: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-cnf" namespace: "ztp-site" spec: bindingRules: sites: "example-cnf" du-profile: "latest" mcp: "master" sourceFiles: ... - fileName: OadpSecret.yaml policyName: "config-policy" data: cloud: <your_credentials> 1 - fileName: DataProtectionApplication.yaml policyName: "config-policy" spec: backupLocations: - velero: config: region: minio s3Url: <your_S3_URL> 2 profile: "default" insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <your_bucket_name> 3 prefix: <cluster_name> 4 - fileName: OadpBackupStorageLocationStatus.yaml policyName: "config-policy" 1 Specify your credentials for your S3 storage backend. 2 Specify the URL for your S3-compatible bucket. 3 4 The bucket defines the bucket name that is created in S3 backend. The prefix defines the name of the subdirectory that will be automatically created in the bucket . The combination of bucket and prefix must be unique for each target cluster to avoid interference between them. To ensure a unique storage directory for each target cluster, you can use the RHACM hub template function, for example, prefix: {{hub .ManagedClusterName hub}} . 15.2.3. Generating a seed image for the image-based upgrade with the Lifecycle Agent Use the Lifecycle Agent to generate the seed image with the SeedGenerator custom resource (CR). 15.2.3.1. Seed image configuration The seed image targets a set of single-node OpenShift clusters with the same hardware and similar configuration. This means that the seed image must have all of the components and configuration that the seed cluster shares with the target clusters. Therefore, the seed image generated from the seed cluster cannot contain any cluster-specific configuration. The following table lists the components, resources, and configurations that you must and must not include in your seed image: Table 15.2. 
Seed image configuration Cluster configuration Include in seed image Performance profile Yes MachineConfig resources for the target cluster Yes IP version [1] Yes Set of Day 2 Operators, including the Lifecycle Agent and the OADP Operator Yes Disconnected registry configuration [2] Yes Valid proxy configuration [3] Yes FIPS configuration Yes Dedicated partition on the primary disk for container storage that matches the size of the target clusters Yes Local volumes StorageClass used in LocalVolume for LSO LocalVolume for LSO LVMCluster CR for LVMS No OADP DataProtectionApplication CR No Dual-stack networking is not supported in this release. If the seed cluster is installed in a disconnected environment, the target clusters must also be installed in a disconnected environment. The proxy configuration does not have to be the same. 15.2.3.1.1. Seed image configuration using the RAN DU profile The following table lists the components, resources, and configurations that you must and must not include in the seed image when using the RAN DU profile: Table 15.3. Seed image configuration with RAN DU profile Resource Include in seed image All extra manifests that are applied as part of Day 0 installation Yes All Day 2 Operator subscriptions Yes ClusterLogging.yaml Yes DisableOLMPprof.yaml Yes TunedPerformancePatch.yaml Yes PerformanceProfile.yaml Yes SriovOperatorConfig.yaml Yes DisableSnoNetworkDiag.yaml Yes StorageClass.yaml No, if it is used in StorageLV.yaml StorageLV.yaml No StorageLVMCluster.yaml No Table 15.4. Seed image configuration with RAN DU profile for extra manifests Resource Apply as extra manifest ClusterLogForwarder.yaml Yes ReduceMonitoringFootprint.yaml Yes SriovFecClusterConfig.yaml Yes PtpOperatorConfigForEvent.yaml Yes DefaultCatsrc.yaml Yes PtpConfig.yaml If the interfaces of the target cluster are common with the seed cluster, you can include them in the seed image. Otherwise, apply it as extra manifests. SriovNetwork.yaml SriovNetworkNodePolicy.yaml If the configuration, including namespaces, is exactly the same on both the seed and target cluster, you can include them in the seed image. Otherwise, apply them as extra manifests. 15.2.3.2. Generating a seed image with the Lifecycle Agent Use the Lifecycle Agent to generate the seed image with the SeedGenerator CR. The Operator checks for required system configurations, performs any necessary system cleanup before generating the seed image, and launches the image generation. The seed image generation includes the following tasks: Stopping cluster Operators Preparing the seed image configuration Generating and pushing the seed image to the image repository specified in the SeedGenerator CR Restoring cluster Operators Expiring seed cluster certificates Generating new certificates for the seed cluster Restoring and updating the SeedGenerator CR on the seed cluster Prerequisites You have configured a shared container directory on the seed cluster. You have installed the minimum version of the OADP Operator and the Lifecycle Agent on the seed cluster. Ensure that persistent volumes are not configured on the seed cluster. Ensure that the LocalVolume CR does not exist on the seed cluster if the Local Storage Operator is used. Ensure that the LVMCluster CR does not exist on the seed cluster if LVM Storage is used. Ensure that the DataProtectionApplication CR does not exist on the seed cluster if OADP is used. 
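A quick way to confirm the last four prerequisites on the seed cluster is sketched below. These convenience checks are an assumption, not part of the official procedure; the CR-specific commands return an error if the corresponding Operator and CRD are not installed, which also satisfies the prerequisite:
$ oc get pv
$ oc get localvolume -A
$ oc get lvmcluster -A
$ oc get dataprotectionapplication -A
Each command should report that no resources are found before you generate the seed image.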
Procedure Detach the cluster from the hub to delete any RHACM-specific resources from the seed cluster that must not be in the seed image: Manually detach the seed cluster by running the following command: USD oc delete managedcluster sno-worker-example Wait until the ManagedCluster CR is removed. After the CR is removed, create the proper SeedGenerator CR. The Lifecycle Agent cleans up the RHACM artifacts. If you are using GitOps ZTP, detach your cluster by removing the seed cluster's SiteConfig CR from the kustomization.yaml . If you have a kustomization.yaml file that references multiple SiteConfig CRs, remove your seed cluster's SiteConfig CR from the kustomization.yaml : apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: #- example-seed-sno1.yaml - example-target-sno2.yaml - example-target-sno3.yaml If you have a kustomization.yaml that references one SiteConfig CR, remove your seed cluster's SiteConfig CR from the kustomization.yaml and add the generators: {} line: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: {} Commit the kustomization.yaml changes in your Git repository and push the changes to your repository. The ArgoCD pipeline detects the changes and removes the managed cluster. Create the Secret object so that you can push the seed image to your registry. Create the authentication file by running the following commands: USD MY_USER=myuserid USD AUTHFILE=/tmp/my-auth.json USD podman login --authfile USD{AUTHFILE} -u USD{MY_USER} quay.io/USD{MY_USER} USD base64 -w 0 USD{AUTHFILE} ; echo Copy the output into the seedAuth field in the Secret YAML file named seedgen in the openshift-lifecycle-agent namespace: apiVersion: v1 kind: Secret metadata: name: seedgen 1 namespace: openshift-lifecycle-agent type: Opaque data: seedAuth: <encoded_AUTHFILE> 2 1 The Secret resource must have the name: seedgen and namespace: openshift-lifecycle-agent fields. 2 Specifies a base64-encoded authfile for write-access to the registry for pushing the generated seed images. Apply the Secret by running the following command: USD oc apply -f secretseedgenerator.yaml Create the SeedGenerator CR: apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage 1 spec: seedImage: <seed_container_image> 2 1 The SeedGenerator CR must be named seedimage . 2 Specify the container image URL, for example, quay.io/example/seed-container-image:<tag> . It is recommended to use the <seed_cluster_name>:<ocp_version> format. Generate the seed image by running the following command: USD oc apply -f seedgenerator.yaml Important The cluster reboots and loses API capabilities while the Lifecycle Agent generates the seed image. Applying the SeedGenerator CR stops the kubelet and the CRI-O operations, then it starts the image generation. If you want to generate more seed images, you must provision a new seed cluster with the version that you want to generate a seed image from. 
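Because the node reboots and the API becomes unavailable while the seed image is generated, you might poll from a workstation until the SeedGenerator CR reports completion. A rough sketch, assuming the CR name seedimage from the example above; errors while the API is down are expected and are simply retried:
$ until oc get seedgenerator seedimage -o jsonpath='{.status.conditions[?(@.type=="SeedGenCompleted")].status}' 2>/dev/null | grep -q True; do sleep 30; done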
Verification After the cluster recovers and is available, you can check the status of the SeedGenerator CR by running the following command: USD oc get seedgenerator -o yaml Example output status: conditions: - lastTransitionTime: "2024-02-13T21:24:26Z" message: Seed Generation completed observedGeneration: 1 reason: Completed status: "False" type: SeedGenInProgress - lastTransitionTime: "2024-02-13T21:24:26Z" message: Seed Generation completed observedGeneration: 1 reason: Completed status: "True" type: SeedGenCompleted 1 observedGeneration: 1 1 The seed image generation is complete. Additional resources Configuring a shared container partition between ostree stateroots Configuring a shared container partition between ostree stateroots when using GitOps ZTP 15.2.4. Creating ConfigMap objects for the image-based upgrade with the Lifecycle Agent The Lifecycle Agent needs all your OADP resources, extra manifests, and custom catalog sources wrapped in a ConfigMap object to process them for the image-based upgrade. 15.2.4.1. Creating OADP ConfigMap objects for the image-based upgrade with Lifecycle Agent Create your OADP resources that are used to back up and restore your resources during the upgrade. Prerequisites Generate a seed image from a compatible seed cluster. Create OADP backup and restore resources. Create a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see "Configuring a shared container partition for the image-based upgrade". Deploy a version of Lifecycle Agent that is compatible with the version used with the seed image. Install the OADP Operator, the DataProtectionApplication CR, and its secret on the target cluster. Create an S3-compatible storage solution and a ready-to-use bucket with proper credentials configured. For more information, see "About installing OADP". Procedure Create the OADP Backup and Restore CRs for platform artifacts in the same namespace where the OADP Operator is installed, which is openshift-adp.
If the target cluster is managed by RHACM, add the following YAML file for backing up and restoring RHACM artifacts: PlatformBackupRestore.yaml for RHACM apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet annotations: lca.openshift.io/apply-label: "apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials" 1 labels: velero.io/storage-location: default namespace: openshift-adp spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - klusterlets.operator.open-cluster-management.io - clusterroles.rbac.authorization.k8s.io - clusterrolebindings.rbac.authorization.k8s.io - priorityclasses.scheduling.k8s.io includedNamespaceScopedResources: - deployments - serviceaccounts - secrets excludedNamespaceScopedResources: [] --- apiVersion: velero.io/v1 kind: Restore metadata: name: acm-klusterlet namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "1" spec: backupName: acm-klusterlet 1 If your multiclusterHub CR does not have .spec.imagePullSecret defined and the secret does not exist on the open-cluster-management-agent namespace in your hub cluster, remove v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials . If you created persistent volumes on your cluster through LVM Storage, add the following YAML file for LVM Storage artifacts: PlatformBackupRestoreLvms.yaml for LVM Storage apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: lvmcluster namespace: openshift-adp spec: includedNamespaces: - openshift-storage includedNamespaceScopedResources: - lvmclusters - lvmvolumegroups - lvmvolumegroupnodestatuses --- apiVersion: velero.io/v1 kind: Restore metadata: name: lvmcluster namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "2" 1 spec: backupName: lvmcluster 1 The lca.openshift.io/apply-wave value must be lower than the values specified in the application Restore CRs. If you need to restore applications after the upgrade, create the OADP Backup and Restore CRs for your application in the openshift-adp namespace. Create the OADP CRs for cluster-scoped application artifacts in the openshift-adp namespace. 
Example OADP CRs for cluster-scoped application artifacts for LSO and LVM Storage apiVersion: velero.io/v1 kind: Backup metadata: annotations: lca.openshift.io/apply-label: "apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test" 1 name: backup-app-cluster-resources labels: velero.io/storage-location: default namespace: openshift-adp spec: includedClusterScopedResources: - customresourcedefinitions - securitycontextconstraints - clusterrolebindings - clusterroles excludedClusterScopedResources: - Namespace --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app-cluster-resources namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "3" 2 spec: backupName: backup-app-cluster-resources 1 Replace the example resource name with your actual resources. 2 The lca.openshift.io/apply-wave value must be higher than the value in the platform Restore CRs and lower than the value in the application namespace-scoped Restore CR. Create the OADP CRs for your namespace-scoped application artifacts. Example OADP CRs namespace-scoped application artifacts when LSO is used apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 excludedClusterScopedResources: - persistentVolumes --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "4" spec: backupName: backup-app 1 Define custom resources for your application. Example OADP CRs namespace-scoped application artifacts when LVM Storage is used apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 includedClusterScopedResources: - persistentVolumes 2 - logicalvolumes.topolvm.io 3 - volumesnapshotcontents 4 --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "4" spec: backupName: backup-app restorePVs: true restoreStatus: includedResources: - logicalvolumes 5 1 Define custom resources for your application. 2 Required field. 3 Required field 4 Optional if you use LVM Storage volume snapshots. 5 Required field. Important The same version of the applications must function on both the current and the target release of OpenShift Container Platform. 
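Before wrapping the Backup and Restore CRs in a ConfigMap object in the next step, you can optionally validate that the combined YAML is accepted by the cluster. A small sketch, assuming you collected all of the CRs above in one file, here called example-oadp-resources.yaml to match the key used in the next command; server-side dry-run requires the OADP CRDs, which the prerequisites already ensure:
$ oc apply --dry-run=server -f example-oadp-resources.yaml -o name
The command prints the object names without creating anything on the cluster.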
Create the ConfigMap object for your OADP CRs by running the following command: USD oc create configmap oadp-cm-example --from-file=example-oadp-resources.yaml=<path_to_oadp_crs> -n openshift-adp Patch the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade \ -p='{"spec": {"oadpContent": [{"name": "oadp-cm-example", "namespace": "openshift-adp"}]}}' \ --type=merge -n openshift-lifecycle-agent Additional resources Configuring a shared container partition between ostree stateroots About installing OADP 15.2.4.2. Creating ConfigMap objects of extra manifests for the image-based upgrade with Lifecycle Agent Create additional manifests that you want to apply to the target cluster. Procedure Create a YAML file that contains your extra manifests, such as SR-IOV. Example SR-IOV resources apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: "example-sriov-node-policy" namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [ens1f0] nodeSelector: node-role.kubernetes.io/master: "" mtu: 1500 numVfs: 8 priority: 99 resourceName: example-sriov-node-policy --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: "example-sriov-network" namespace: openshift-sriov-network-operator spec: ipam: |- { } linkState: auto networkNamespace: sriov-namespace resourceName: example-sriov-node-policy spoofChk: "on" trust: "off" Create the ConfigMap object by running the following command: USD oc create configmap example-extra-manifests-cm --from-file=example-extra-manifests.yaml=<path_to_extramanifest> -n openshift-lifecycle-agent Patch the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade \ -p='{"spec": {"extraManifests": [{"name": "example-extra-manifests-cm", "namespace": "openshift-lifecycle-agent"}]}}' \ --type=merge -n openshift-lifecycle-agent 15.2.4.3. Creating ConfigMap objects of custom catalog sources for the image-based upgrade with Lifecycle Agent You can keep your custom catalog sources after the upgrade by generating a ConfigMap object for your catalog sources and adding them to the spec.extraManifest field in the ImageBasedUpgrade CR. For more information about catalog sources, see "Catalog source". Procedure Create a YAML file that contains the CatalogSource CR: apiVersion: operators.coreos.com/v1 kind: CatalogSource metadata: name: example-catalogsources namespace: openshift-marketplace spec: sourceType: grpc displayName: disconnected-redhat-operators image: quay.io/example-org/example-catalog:v1 Create the ConfigMap object by running the following command: USD oc create configmap example-catalogsources-cm --from-file=example-catalogsources.yaml=<path_to_catalogsource_cr> -n openshift-lifecycle-agent Patch the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade \ -p='{"spec": {"extraManifests": [{"name": "example-catalogsources-cm", "namespace": "openshift-lifecycle-agent"}]}}' \ --type=merge -n openshift-lifecycle-agent Additional resources Catalog source Performing an image-based upgrade for single-node OpenShift with Lifecycle Agent 15.2.5. Creating ConfigMap objects for the image-based upgrade with the Lifecycle Agent using GitOps ZTP Create your OADP resources, extra manifests, and custom catalog sources wrapped in a ConfigMap object to prepare for the image-based upgrade. 15.2.5.1. 
Creating OADP resources for the image-based upgrade with GitOps ZTP Prepare your OADP resources to restore your application after an upgrade. Prerequisites Provision one or more managed clusters with GitOps ZTP. Log in as a user with cluster-admin privileges. Generate a seed image from a compatible seed cluster. Create a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see "Configuring a shared container partition between ostree stateroots when using GitOps ZTP". Deploy a version of Lifecycle Agent that is compatible with the version used with the seed image. Install the OADP Operator, the DataProtectionApplication CR, and its secret on the target cluster. Create an S3-compatible storage solution and a ready-to-use bucket with proper credentials configured. For more information, see "Installing and configuring the OADP Operator with GitOps ZTP". The openshift-adp namespace for the OADP ConfigMap object must exist on all managed clusters and the hub cluster for the OADP ConfigMap to be generated and copied to the clusters. Procedure Ensure that your Git repository that you use with the ArgoCD policies application contains the following directory structure: ├── source-crs/ │ ├── ibu/ │ │ ├── ImageBasedUpgrade.yaml │ │ ├── PlatformBackupRestore.yaml │ │ ├── PlatformBackupRestoreLvms.yaml │ │ ├── PlatformBackupRestoreWithIBGU.yaml ├── ... ├── kustomization.yaml The source-crs/ibu/PlatformBackupRestoreWithIBGU.yaml file is provided in the ZTP container image. PlatformBackupRestoreWithIBGU.yaml apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet annotations: lca.openshift.io/apply-label: "apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-work:ibu-role,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials" 1 labels: velero.io/storage-location: default namespace: openshift-adp spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - klusterlets.operator.open-cluster-management.io - clusterroles.rbac.authorization.k8s.io - clusterrolebindings.rbac.authorization.k8s.io - priorityclasses.scheduling.k8s.io includedNamespaceScopedResources: - deployments - serviceaccounts - secrets excludedNamespaceScopedResources: [] --- apiVersion: velero.io/v1 kind: Restore metadata: name: acm-klusterlet namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "1" spec: backupName: acm-klusterlet 1 If your multiclusterHub CR does not have .spec.imagePullSecret defined and the secret does not exist on the open-cluster-management-agent namespace in your hub cluster, remove v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials . 
Note If you perform the image-based upgrade directly on managed clusters, use the PlatformBackupRestore.yaml file. If you use LVM Storage to create persistent volumes, you can use the source-crs/ibu/PlatformBackupRestoreLvms.yaml provided in the ZTP container image to back up your LVM Storage resources. PlatformBackupRestoreLvms.yaml apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: lvmcluster namespace: openshift-adp spec: includedNamespaces: - openshift-storage includedNamespaceScopedResources: - lvmclusters - lvmvolumegroups - lvmvolumegroupnodestatuses --- apiVersion: velero.io/v1 kind: Restore metadata: name: lvmcluster namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "2" 1 spec: backupName: lvmcluster 1 The lca.openshift.io/apply-wave value must be lower than the values specified in the application Restore CRs. If you need to restore applications after the upgrade, create the OADP Backup and Restore CRs for your application in the openshift-adp namespace: Create the OADP CRs for cluster-scoped application artifacts in the openshift-adp namespace: Example OADP CRs for cluster-scoped application artifacts for LSO and LVM Storage apiVersion: velero.io/v1 kind: Backup metadata: annotations: lca.openshift.io/apply-label: "apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test" 1 name: backup-app-cluster-resources labels: velero.io/storage-location: default namespace: openshift-adp spec: includedClusterScopedResources: - customresourcedefinitions - securitycontextconstraints - clusterrolebindings - clusterroles excludedClusterScopedResources: - Namespace --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app-cluster-resources namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "3" 2 spec: backupName: backup-app-cluster-resources 1 Replace the example resource name with your actual resources. 2 The lca.openshift.io/apply-wave value must be higher than the value in the platform Restore CRs and lower than the value in the application namespace-scoped Restore CR. Create the OADP CRs for your namespace-scoped application artifacts in the source-crs/custom-crs directory: Example OADP CRs namespace-scoped application artifacts when LSO is used apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 excludedClusterScopedResources: - persistentVolumes --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "4" spec: backupName: backup-app 1 Define custom resources for your application. 
Example OADP CRs namespace-scoped application artifacts when LVM Storage is used apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 includedClusterScopedResources: - persistentVolumes 2 - logicalvolumes.topolvm.io 3 - volumesnapshotcontents 4 --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "4" spec: backupName: backup-app restorePVs: true restoreStatus: includedResources: - logicalvolumes 5 1 Define custom resources for your application. 2 Required field. 3 Required field 4 Optional if you use LVM Storage volume snapshots. 5 Required field. Important The same version of the applications must function on both the current and the target release of OpenShift Container Platform. Create a kustomization.yaml with the following content: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization configMapGenerator: 1 - files: - source-crs/ibu/PlatformBackupRestoreWithIBGU.yaml #- source-crs/custom-crs/ApplicationClusterScopedBackupRestore.yaml #- source-crs/custom-crs/ApplicationApplicationBackupRestoreLso.yaml name: oadp-cm namespace: openshift-adp 2 generatorOptions: disableNameSuffixHash: true 1 Creates the oadp-cm ConfigMap object on the hub cluster with Backup and Restore CRs. 2 The namespace must exist on all managed clusters and the hub cluster for the OADP ConfigMap to be generated and copied to the clusters. Push the changes to your Git repository. Additional resources Configuring a shared container partition between ostree stateroots when using GitOps ZTP Installing and configuring the OADP Operator with GitOps ZTP 15.2.5.2. Labeling extra manifests for the image-based upgrade with GitOps ZTP Label your extra manifests so that the Lifecycle Agent can extract resources that are labeled with the lca.openshift.io/target-ocp-version: <target_version> label. Prerequisites Provision one or more managed clusters with GitOps ZTP. Log in as a user with cluster-admin privileges. Generate a seed image from a compatible seed cluster. Create a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see "Configuring a shared container directory between ostree stateroots when using GitOps ZTP". Deploy a version of Lifecycle Agent that is compatible with the version used with the seed image. 
Procedure Label your required extra manifests with the lca.openshift.io/target-ocp-version: <target_version> label in your existing site PolicyGenTemplate CR: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: example-sno spec: bindingRules: sites: "example-sno" du-profile: "4.15" mcp: "master" sourceFiles: - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-fh" labels: lca.openshift.io/target-ocp-version: "4.15" 1 spec: resourceName: du_fh vlan: 140 - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-fh" labels: lca.openshift.io/target-ocp-version: "4.15" spec: deviceType: netdevice isRdma: false nicSelector: pfNames: ["ens5f0"] numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-mh" labels: lca.openshift.io/target-ocp-version: "4.15" spec: resourceName: du_mh vlan: 150 - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-mh" labels: lca.openshift.io/target-ocp-version: "4.15" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: ["ens7f0"] numVfs: 8 priority: 10 resourceName: du_mh - fileName: DefaultCatsrc.yaml 2 policyName: "config-policy" metadata: name: default-cat-source namespace: openshift-marketplace labels: lca.openshift.io/target-ocp-version: "4.15" spec: displayName: default-cat-source image: quay.io/example-org/example-catalog:v1 1 Ensure that the lca.openshift.io/target-ocp-version label matches either the y-stream or the z-stream of the target OpenShift Container Platform version that is specified in the spec.seedImageRef.version field of the ImageBasedUpgrade CR. The Lifecycle Agent only applies the CRs that match the specified version. 2 If you do not want to use custom catalog sources, remove this entry. Push the changes to your Git repository. Additional resources Configuring a shared container partition between ostree stateroots when using GitOps ZTP Performing an image-based upgrade for single-node OpenShift clusters using GitOps ZTP 15.2.6. Configuring the automatic image cleanup of the container storage disk Configure when the Lifecycle Agent cleans up unpinned images in the Prep stage by setting a minimum threshold for available storage space through annotations. The default container storage disk usage threshold is 50%. The Lifecycle Agent does not delete images that are pinned in CRI-O or are currently used. The Operator selects the images for deletion by starting with dangling images and then sorting the images from oldest to newest that is determined by the image Created timestamp. 15.2.6.1. Configuring the automatic image cleanup of the container storage disk Configure the minimum threshold for available storage space through annotations. Prerequisites Create an ImageBasedUpgrade CR. Procedure Increase the threshold to 65% by running the following command: USD oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/disk-usage-threshold-percent='65' (Optional) Remove the threshold override by running the following command: USD oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/disk-usage-threshold-percent- 15.2.6.2. Disable the automatic image cleanup of the container storage disk Disable the automatic image cleanup threshold. 
Procedure Disable the automatic image cleanup by running the following command: USD oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/on-prep='Disabled' (Optional) Enable automatic image cleanup again by running the following command: USD oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/on-prep- 15.3. Performing an image-based upgrade for single-node OpenShift clusters with the Lifecycle Agent You can use the Lifecycle Agent to do a manual image-based upgrade of a single-node OpenShift cluster. When you deploy the Lifecycle Agent on a cluster, an ImageBasedUpgrade CR is automatically created. You update this CR to specify the image repository of the seed image and to move through the different stages. 15.3.1. Moving to the Prep stage of the image-based upgrade with Lifecycle Agent When you deploy the Lifecycle Agent on a cluster, an ImageBasedUpgrade custom resource (CR) is automatically created. After you created all the resources that you need during the upgrade, you can move on to the Prep stage. For more information, see the "Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent" section. Note In a disconnected environment, if the seed cluster's release image registry is different from the target cluster's release image registry, you must create an ImageDigestMirrorSet (IDMS) resource to configure alternative mirrored repository locations. For more information, see "Configuring image registry repository mirroring". You can retrieve the release registry used in the seed image by running the following command: USD skopeo inspect docker://<imagename> | jq -r '.Labels."com.openshift.lifecycle-agent.seed_cluster_info" | fromjson | .release_registry' Prerequisites You have created resources to back up and restore your clusters. Procedure Check that you have patched your ImageBasedUpgrade CR: apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle seedImageRef: version: 4.15.2 1 image: <seed_container_image> 2 pullSecretRef: <seed_pull_secret> 3 autoRollbackOnFailure: {} # initMonitorTimeoutSeconds: 1800 4 extraManifests: 5 - name: example-extra-manifests-cm namespace: openshift-lifecycle-agent - name: example-catalogsources-cm namespace: openshift-lifecycle-agent oadpContent: 6 - name: oadp-cm-example namespace: openshift-adp 1 Specify the target platform version. The value must match the version of the seed image. 2 Specify the repository where the target cluster can pull the seed image from. 3 Specify the reference to a secret with credentials to pull container images if the images are in a private registry. 4 (Optional) Specify the time frame in seconds to roll back if the upgrade does not complete within that time frame after the first reboot. If not defined or set to 0 , the default value of 1800 seconds (30 minutes) is used. 5 (Optional) Specify the list of ConfigMap resources that contain your custom catalog sources to retain after the upgrade and your extra manifests to apply to the target cluster that are not part of the seed image. 6 Add the oadpContent section with the OADP ConfigMap information. 
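For the disconnected scenario described in the note above, where the seed cluster's release image registry differs from the target cluster's registry, a minimal ImageDigestMirrorSet might look like the following sketch. The metadata name and both registry host names are placeholders; replace them with the release registry that you retrieved from the seed image and the mirror registry that the target cluster can reach:
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: seed-release-mirror   # hypothetical name
spec:
  imageDigestMirrors:
  - source: seed-registry.example.com/ocp-release      # release registry recorded in the seed image
    mirrors:
    - target-mirror.example.com/ocp-release            # mirror reachable from the target cluster
Apply the resource on the target cluster before moving to the Prep stage. The exact source and mirror values depend on your registry layout, as described in "Configuring image registry repository mirroring".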
To start the Prep stage, change the value of the stage field to Prep in the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "Prep"}}' --type=merge -n openshift-lifecycle-agent If you provide ConfigMap objects for OADP resources and extra manifests, Lifecycle Agent validates the specified ConfigMap objects during the Prep stage. You might encounter the following issues: Validation warnings or errors if the Lifecycle Agent detects any issues with the extraManifests parameters. Validation errors if the Lifecycle Agent detects any issues with the oadpContent parameters. Validation warnings do not block the Upgrade stage but you must decide if it is safe to proceed with the upgrade. These warnings, for example missing CRDs, namespaces, or dry run failures, update the status.conditions for the Prep stage and annotation fields in the ImageBasedUpgrade CR with details about the warning. Example validation warning [...] metadata: annotations: extra-manifest.lca.openshift.io/validation-warning: '...' [...] However, validation errors, such as adding MachineConfig or Operator manifests to extra manifests, cause the Prep stage to fail and block the Upgrade stage. When the validations pass, the cluster creates a new ostree stateroot, which involves pulling and unpacking the seed image, and running host-level commands. Finally, all the required images are precached on the target cluster. Verification Check the status of the ImageBasedUpgrade CR by running the following command: USD oc get ibu -o yaml Example output conditions: - lastTransitionTime: "2024-01-01T09:00:00Z" message: In progress observedGeneration: 13 reason: InProgress status: "False" type: Idle - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep completed observedGeneration: 13 reason: Completed status: "False" type: PrepInProgress - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep stage completed successfully observedGeneration: 13 reason: Completed status: "True" type: PrepCompleted observedGeneration: 13 validNextStages: - Idle - Upgrade Additional resources Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent Configuring image registry repository mirroring 15.3.2. Moving to the Upgrade stage of the image-based upgrade with Lifecycle Agent After you generate the seed image and complete the Prep stage, you can upgrade the target cluster. During the upgrade process, the OADP Operator creates a backup of the artifacts specified in the OADP custom resources (CRs), then the Lifecycle Agent upgrades the cluster. If the upgrade fails or stops, an automatic rollback is initiated. If you have an issue after the upgrade, you can initiate a manual rollback. For more information about manual rollback, see "Moving to the Rollback stage of the image-based upgrade with Lifecycle Agent". Prerequisites Complete the Prep stage. 
Procedure To move to the Upgrade stage, change the value of the stage field to Upgrade in the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "Upgrade"}}' --type=merge Check the status of the ImageBasedUpgrade CR by running the following command: USD oc get ibu -o yaml Example output status: conditions: - lastTransitionTime: "2024-01-01T09:00:00Z" message: In progress observedGeneration: 5 reason: InProgress status: "False" type: Idle - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep completed observedGeneration: 5 reason: Completed status: "False" type: PrepInProgress - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep completed successfully observedGeneration: 5 reason: Completed status: "True" type: PrepCompleted - lastTransitionTime: "2024-01-01T09:00:00Z" message: |- Waiting for system to stabilize: one or more health checks failed - one or more ClusterOperators not yet ready: authentication - one or more MachineConfigPools not yet ready: master - one or more ClusterServiceVersions not yet ready: sriov-fec.v2.8.0 observedGeneration: 1 reason: InProgress status: "True" type: UpgradeInProgress observedGeneration: 1 rollbackAvailabilityExpiration: "2024-05-19T14:01:52Z" validNextStages: - Rollback The OADP Operator creates a backup of the data specified in the OADP Backup and Restore CRs and the target cluster reboots. Monitor the status of the CR by running the following command: USD oc get ibu -o yaml If you are satisfied with the upgrade, finalize the changes by patching the value of the stage field to Idle in the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "Idle"}}' --type=merge Important You cannot roll back the changes once you move to the Idle stage after an upgrade. The Lifecycle Agent deletes all resources created during the upgrade process. You can remove the OADP Operator and its configuration files after a successful upgrade. For more information, see "Deleting Operators from a cluster". Verification Check the status of the ImageBasedUpgrade CR by running the following command: USD oc get ibu -o yaml Example output status: conditions: - lastTransitionTime: "2024-01-01T09:00:00Z" message: In progress observedGeneration: 5 reason: InProgress status: "False" type: Idle - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep completed observedGeneration: 5 reason: Completed status: "False" type: PrepInProgress - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep completed successfully observedGeneration: 5 reason: Completed status: "True" type: PrepCompleted - lastTransitionTime: "2024-01-01T09:00:00Z" message: Upgrade completed observedGeneration: 1 reason: Completed status: "False" type: UpgradeInProgress - lastTransitionTime: "2024-01-01T09:00:00Z" message: Upgrade completed observedGeneration: 1 reason: Completed status: "True" type: UpgradeCompleted observedGeneration: 1 rollbackAvailabilityExpiration: "2024-01-01T09:00:00Z" validNextStages: - Idle - Rollback Check the status of the cluster restoration by running the following command: USD oc get restores -n openshift-adp -o custom-columns=NAME:.metadata.name,Status:.status.phase,Reason:.status.failureReason Example output NAME Status Reason acm-klusterlet Completed <none> 1 apache-app Completed <none> localvolume Completed <none> 1 The acm-klusterlet is specific to RHACM environments only. 
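If you prefer not to poll oc get ibu -o yaml manually while the upgrade runs, you can block until a condition is reported. This is a convenience sketch that assumes the UpgradeCompleted condition name shown in the example output above; the timeout value is arbitrary:
USD oc wait ibu upgrade --for=condition=UpgradeCompleted --timeout=90m
The command returns as soon as the condition status becomes True, or exits with an error when the timeout expires.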
Additional resources Moving to the Rollback stage of the image-based upgrade with Lifecycle Agent Deleting Operators from a cluster 15.3.3. Moving to the Rollback stage of the image-based upgrade with Lifecycle Agent An automatic rollback is initiated if the upgrade does not complete within the time frame specified in the initMonitorTimeoutSeconds field after rebooting. Example ImageBasedUpgrade CR apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle seedImageRef: version: 4.15.2 image: <seed_container_image> autoRollbackOnFailure: {} # initMonitorTimeoutSeconds: 1800 1 [...] 1 (Optional) Specify the time frame in seconds to roll back if the upgrade does not complete within that time frame after the first reboot. If not defined or set to 0 , the default value of 1800 seconds (30 minutes) is used. You can manually roll back the changes if you encounter unresolvable issues after an upgrade. Prerequisites Log in to the hub cluster as a user with cluster-admin privileges. Ensure that the control plane certificates on the original stateroot are valid. If the certificates expired, see "Recovering from expired control plane certificates". Procedure To move to the rollback stage, patch the value of the stage field to Rollback in the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "Rollback"}}' --type=merge The Lifecycle Agent reboots the cluster with the previously installed version of OpenShift Container Platform and restores the applications. If you are satisfied with the changes, finalize the rollback by patching the value of the stage field to Idle in the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "Idle"}}' --type=merge -n openshift-lifecycle-agent Warning If you move to the Idle stage after a rollback, the Lifecycle Agent cleans up resources that can be used to troubleshoot a failed upgrade. Additional resources Recovering from expired control plane certificates 15.3.4. Troubleshooting image-based upgrades with Lifecycle Agent Perform troubleshooting steps on the managed clusters that are affected by an issue. Important If you are using the ImageBasedGroupUpgrade CR to upgrade your clusters, ensure that the lcm.openshift.io/ibgu-<stage>-completed or lcm.openshift.io/ibgu-<stage>-failed cluster labels are updated properly after performing troubleshooting or recovery steps on the managed clusters. This ensures that the TALM continues to manage the image-based upgrade for the cluster. 15.3.4.1. Collecting logs You can use the oc adm must-gather CLI to collect information for debugging and troubleshooting. Procedure Collect data about the Operators by running the following command: USD oc adm must-gather \ --dest-dir=must-gather/tmp \ --image=USD(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == "manager")].image}') \ --image=quay.io/konveyor/oadp-must-gather:latest \ 1 --image=quay.io/openshift/origin-must-gather:latest 2 1 (Optional) You can add this option if you need to gather more information from the OADP Operator. 2 (Optional) You can add this option if you need to gather more information from the SR-IOV Operator. 15.3.4.2.
AbortFailed or FinalizeFailed error Issue During the finalize stage or when you stop the process at the Prep stage, Lifecycle Agent cleans up the following resources: Stateroot that is no longer required Precaching resources OADP CRs ImageBasedUpgrade CR If the Lifecycle Agent fails to perform the above steps, it transitions to the AbortFailed or FinalizeFailed states. The condition message and log show which steps failed. Example error message message: failed to delete all the backup CRs. Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle observedGeneration: 5 reason: AbortFailed status: "False" type: Idle Resolution Inspect the logs to determine why the failure occurred. To prompt Lifecycle Agent to retry the cleanup, add the lca.openshift.io/manual-cleanup-done annotation to the ImageBasedUpgrade CR. After observing this annotation, Lifecycle Agent retries the cleanup and, if it is successful, the ImageBasedUpgrade stage transitions to Idle . If the cleanup fails again, you can manually clean up the resources. 15.3.4.2.1. Cleaning up stateroot manually Issue Stopping at the Prep stage, Lifecycle Agent cleans up the new stateroot. When finalizing after a successful upgrade or a rollback, Lifecycle Agent cleans up the old stateroot. If this step fails, it is recommended that you inspect the logs to determine why the failure occurred. Resolution Check if there are any existing deployments in the stateroot by running the following command: USD ostree admin status If there are any, clean up the existing deployment by running the following command: USD ostree admin undeploy <index_of_deployment> After cleaning up all the deployments of the stateroot, wipe the stateroot directory by running the following commands: Warning Ensure that the booted deployment is not in this stateroot. USD stateroot="<stateroot_to_delete>" USD unshare -m /bin/sh -c "mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/USD{stateroot}" 15.3.4.2.2. Cleaning up OADP resources manually Issue Automatic cleanup of OADP resources can fail due to connection issues between Lifecycle Agent and the S3 backend. By restoring the connection and adding the lca.openshift.io/manual-cleanup-done annotation, the Lifecycle Agent can successfully cleanup backup resources. Resolution Check the backend connectivity by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true Remove all backup resources and then add the lca.openshift.io/manual-cleanup-done annotation to the ImageBasedUpgrade CR. 15.3.4.3. LVM Storage volume contents not restored When LVM Storage is used to provide dynamic persistent volume storage, LVM Storage might not restore the persistent volume contents if it is configured incorrectly. 15.3.4.3.1. Missing LVM Storage-related fields in Backup CR Issue Your Backup CRs might be missing fields that are needed to restore your persistent volumes. You can check for events in your application pod to determine if you have this issue by running the following: USD oc describe pod <your_app_name> Example output showing missing LVM Storage-related fields in Backup CR Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 58s (x2 over 66s) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. 
preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Normal Scheduled 56s default-scheduler Successfully assigned default/db-1234 to sno1.example.lab Warning FailedMount 24s (x7 over 55s) kubelet MountVolume.SetUp failed for volume "pvc-1234" : rpc error: code = Unknown desc = VolumeID is not found Resolution You must include logicalvolumes.topolvm.io in the application Backup CR. Without this resource, the application restores its persistent volume claims and persistent volume manifests correctly, however, the logicalvolume associated with this persistent volume is not restored properly after pivot. Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: 1 - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io 1 To restore the persistent volumes for your application, you must configure this section as shown. 15.3.4.3.2. Missing LVM Storage-related fields in Restore CR Issue The expected resources for the applications are restored but the persistent volume contents are not preserved after upgrading. List the persistent volumes for you applications by running the following command before pivot: USD oc get pv,pvc,logicalvolumes.topolvm.io -A Example output before pivot NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Retain Bound default/pvc-db lvms-vg1 4h45m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 4h45m NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 4h45m List the persistent volumes for you applications by running the following command after pivot: USD oc get pv,pvc,logicalvolumes.topolvm.io -A Example output after pivot NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Delete Bound default/pvc-db lvms-vg1 19s NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 19s NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 18s Resolution The reason for this issue is that the logicalvolume status is not preserved in the Restore CR. This status is important because it is required for Velero to reference the volumes that must be preserved after pivoting. You must include the following fields in the application Restore CR: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: sample-vote-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "3" spec: backupName: sample-vote-app restorePVs: true 1 restoreStatus: 2 includedResources: - logicalvolumes 1 To preserve the persistent volumes for your application, you must set restorePVs to true . 2 To preserve the persistent volumes for your application, you must configure this section as shown. 15.3.4.4. Debugging failed Backup and Restore CRs Issue The backup or restoration of artifacts failed. Resolution You can debug Backup and Restore CRs and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. 
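The Velero pod name used in the following commands ( velero-7c87d58c7b-sw6fc ) is only an example and changes with every deployment. You can look up the current pod name first, assuming the default openshift-adp namespace:
USD oc get pods -n openshift-adp | grep velero
Substitute the pod name from the output in the commands that follow.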
Describe the Backup CR that contains errors by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details Describe the Restore CR that contains errors by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details Download the backed up resources to a local directory by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz 15.4. Performing an image-based upgrade for single-node OpenShift clusters using GitOps ZTP You can use a single resource on the hub cluster, the ImageBasedGroupUpgrade custom resource (CR), to manage an image-based upgrade on a selected group of managed clusters through all stages. Topology Aware Lifecycle Manager (TALM) reconciles the ImageBasedGroupUpgrade CR and creates the underlying resources to complete the defined stage transitions, either in a manually controlled or a fully automated upgrade flow. For more information about the image-based upgrade, see "Understanding the image-based upgrade for single-node OpenShift clusters". Additional resources Understanding the image-based upgrade for single-node OpenShift clusters 15.4.1. Managing the image-based upgrade at scale using the ImageBasedGroupUpgrade CR on the hub The ImageBasedGroupUpgrade CR combines the ImageBasedUpgrade and ClusterGroupUpgrade APIs. For example, you can define the cluster selection and rollout strategy with the ImageBasedGroupUpgrade API in the same way as the ClusterGroupUpgrade API. The stage transitions are different from the ImageBasedUpgrade API. The ImageBasedGroupUpgrade API allows you to combine several stage transitions, also called actions, into one step that shares a rollout strategy. Example ImageBasedGroupUpgrade.yaml apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: "<seed_pull_secret>" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: ["Prep", "Upgrade", "FinalizeUpgrade"] rolloutStrategy: maxConcurrency: 200 6 timeout: 2400 7 1 Clusters to upgrade. 2 Target platform version, the seed image to be used, and the secret required to access the image. 3 Optional: Applies additional manifests, which are not in the seed image, to the target cluster. Also applies ConfigMap objects for custom catalog sources. 4 ConfigMap resources that contain the OADP Backup and Restore CRs. 5 Upgrade plan details. 6 Number of clusters to update in a batch. 7 Timeout limit to complete the action in minutes. 15.4.1.1. Supported action combinations Actions are the list of stage transitions that the TALM completes in the steps of an upgrade plan for the selected group of clusters. Each action entry in the ImageBasedGroupUpgrade CR is a separate step and a step contains one or several actions that share the same rollout strategy. You can achieve more control over the rollout strategy for each action by separating actions into steps.
These actions can be combined differently in your upgrade plan and you can add subsequent steps later. Wait until the steps either complete or fail before adding a step to your plan. The first action of an added step for clusters that failed a step must be either Abort or Rollback . Important You cannot remove actions or steps from an ongoing plan. The following table shows example plans for different levels of control over the rollout strategy: Table 15.5. Example upgrade plans Example plan Description plan: - actions: ["Prep", "Upgrade", "FinalizeUpgrade"] rolloutStrategy: maxConcurrency: 200 timeout: 60 All actions share the same strategy plan: - actions: ["Prep", "Upgrade"] rolloutStrategy: maxConcurrency: 200 timeout: 60 - actions: ["FinalizeUpgrade"] rolloutStrategy: maxConcurrency: 500 timeout: 10 Some actions share the same strategy plan: - actions: ["Prep"] rolloutStrategy: maxConcurrency: 200 timeout: 60 - actions: ["Upgrade"] rolloutStrategy: maxConcurrency: 200 timeout: 20 - actions: ["FinalizeUpgrade"] rolloutStrategy: maxConcurrency: 500 timeout: 10 All actions have different strategies Important Clusters that fail one of the actions will skip the remaining actions in the same step. The ImageBasedGroupUpgrade API accepts the following actions: Prep Start preparing the upgrade resources by moving to the Prep stage. Upgrade Start the upgrade by moving to the Upgrade stage. FinalizeUpgrade Finalize the upgrade on selected clusters that completed the Upgrade action by moving to the Idle stage. Rollback Start a rollback only on successfully upgraded clusters by moving to the Rollback stage. FinalizeRollback Finalize the rollback by moving to the Idle stage. AbortOnFailure Cancel the upgrade on selected clusters that failed the Prep or Upgrade actions by moving to the Idle stage. Abort Cancel an ongoing upgrade only on clusters that are not yet upgraded by moving to the Idle stage. The following action combinations are supported. A pair of brackets signifies one step in the plan section: ["Prep"] , ["Abort"] ["Prep", "Upgrade", "FinalizeUpgrade"] ["Prep"] , ["AbortOnFailure"] , ["Upgrade"] , ["AbortOnFailure"] , ["FinalizeUpgrade"] ["Rollback", "FinalizeRollback"] Use one of the following combinations to resume or cancel an ongoing upgrade from a completely new ImageBasedGroupUpgrade CR: ["Upgrade","FinalizeUpgrade"] ["FinalizeUpgrade"] ["FinalizeRollback"] ["Abort"] ["AbortOnFailure"] 15.4.1.2. Labeling for cluster selection Use the spec.clusterLabelSelectors field for initial cluster selection. In addition, the TALM labels the managed clusters according to the results of their last stage transition. When a stage completes or fails, the TALM marks the relevant clusters with the following labels: lcm.openshift.io/ibgu-<stage>-completed lcm.openshift.io/ibgu-<stage>-failed Use these cluster labels to cancel or roll back an upgrade on a group of clusters after troubleshooting issues that you might encounter. Important If you are using the ImageBasedGroupUpgrade CR to upgrade your clusters, ensure that the lcm.openshift.io/ibgu-<stage>-completed or lcm.openshift.io/ibgu-<stage>-failed cluster labels are updated properly after performing troubleshooting or recovery steps on the managed clusters. This ensures that the TALM continues to manage the image-based upgrade for the cluster. For example, if you want to cancel the upgrade for all managed clusters except for clusters that successfully completed the upgrade, you can add an Abort action to your plan.
The Abort action moves back the ImageBasedUpgrade CR to the Idle stage, which cancels the upgrade on clusters that are not yet upgraded. Adding a separate Abort action ensures that the TALM does not perform the Abort action on clusters that have the lcm.openshift.io/ibgu-upgrade-completed label. The cluster labels are removed after successfully canceling or finalizing the upgrade. 15.4.1.3. Status monitoring The ImageBasedGroupUpgrade CR ensures a better monitoring experience with a comprehensive status reporting for all clusters that is aggregated in one place. You can monitor the following actions: status.clusters.completedActions Shows all completed actions defined in the plan section. status.clusters.currentAction Shows all actions that are currently in progress. status.clusters.failedActions Shows all failed actions along with a detailed error message. 15.4.2. Performing an image-based upgrade on managed clusters at scale in several steps For use cases when you need better control of when the upgrade interrupts your service, you can upgrade a set of your managed clusters by using the ImageBasedGroupUpgrade CR with adding actions after the step is complete. After evaluating the results of the steps, you can move to the upgrade stage or troubleshoot any failed steps throughout the procedure. Important Only certain action combinations are supported and listed in Supported action combinations . Prerequisites You have logged in to the hub cluster as a user with cluster-admin privileges. You have created policies and ConfigMap objects for resources used in the image-based upgrade. You have installed the Lifecycle Agent and OADP Operators on all managed clusters through the hub cluster. Procedure Create a YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR: apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.16.0-rc.1 version: 4.16.0-rc.1 pullSecretRef: name: "<seed_pull_secret>" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: ["Prep"] rolloutStrategy: maxConcurrency: 2 timeout: 2400 1 Clusters to upgrade. 2 Target platform version, the seed image to be used, and the secret required to access the image. 3 Optional: Applies additional manifests, which are not in the seed image, to the target cluster. Also applies ConfigMap objects for custom catalog sources. 4 ConfigMap resources that contain the OADP Backup and Restore CRs. 5 Upgrade plan details. Apply created file by running the following command on the hub cluster: USD oc apply -f <filename>.yaml Monitor the status updates by running the following command on the hub cluster: USD oc get ibgu -o yaml Example output # ... status: clusters: - completedActions: - action: Prep name: spoke1 - completedActions: - action: Prep name: spoke4 - failedActions: - action: Prep name: spoke6 # ... The output of an example plan starts with the Prep stage only and you add actions to the plan based on the results of the step. TALM adds a label to the clusters to mark if the upgrade succeeded or failed. For example, the lcm.openshift.io/ibgu-prep-failed is applied to clusters that failed the Prep stage. After investigating the failure, you can add the AbortOnFailure step to your upgrade plan. 
It moves the clusters labeled with lcm.openshift.io/ibgu-<action>-failed back to the Idle stage. Any resources that are related to the upgrade on the selected clusters are deleted. Optional: Add the AbortOnFailure action to your existing ImageBasedGroupUpgrade CR by running the following command: USD oc patch ibgu <filename> --type=json -p \ '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["AbortOnFailure"], "rolloutStrategy": {"maxConcurrency": 5, "timeout": 10}}}]' Continue monitoring the status updates by running the following command: USD oc get ibgu -o yaml Add the action to your existing ImageBasedGroupUpgrade CR by running the following command: USD oc patch ibgu <filename> --type=json -p \ '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["Upgrade"], "rolloutStrategy": {"maxConcurrency": 2, "timeout": 30}}}]' Optional: Add the AbortOnFailure action to your existing ImageBasedGroupUpgrade CR by running the following command: USD oc patch ibgu <filename> --type=json -p \ '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["AbortOnFailure"], "rolloutStrategy": {"maxConcurrency": 5, "timeout": 10}}}]' Continue monitoring the status updates by running the following command: USD oc get ibgu -o yaml Add the action to your existing ImageBasedGroupUpgrade CR by running the following command: USD oc patch ibgu <filename> --type=json -p \ '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["FinalizeUpgrade"], "rolloutStrategy": {"maxConcurrency": 10, "timeout": 3}}}]' Verification Monitor the status updates by running the following command: USD oc get ibgu -o yaml Example output # ... status: clusters: - completedActions: - action: Prep - action: AbortOnFailure failedActions: - action: Upgrade name: spoke1 - completedActions: - action: Prep - action: Upgrade - action: FinalizeUpgrade name: spoke4 - completedActions: - action: AbortOnFailure failedActions: - action: Prep name: spoke6 # ... Additional resources Configuring a shared container partition between ostree stateroots when using GitOps ZTP Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent using GitOps ZTP About backup and snapshot locations and their secrets Creating a Backup CR Creating a Restore CR Supported action combinations 15.4.3. Performing an image-based upgrade on managed clusters at scale in one step For use cases when service interruption is not a concern, you can upgrade a set of your managed clusters by using the ImageBasedGroupUpgrade CR with several actions combined in one step with one rollout strategy. With one rollout strategy, the upgrade time can be reduced but you can only troubleshoot failed clusters after the upgrade plan is complete. Prerequisites You have logged in to the hub cluster as a user with cluster-admin privileges. You have created policies and ConfigMap objects for resources used in the image-based upgrade. You have installed the Lifecycle Agent and OADP Operators on all managed clusters through the hub cluster. 
Procedure Create a YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR: apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: "<seed_pull_secret>" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: ["Prep", "Upgrade", "FinalizeUpgrade"] rolloutStrategy: maxConcurrency: 200 6 timeout: 2400 7 1 Clusters to upgrade. 2 Target platform version, the seed image to be used, and the secret required to access the image. 3 Optional: Applies additional manifests, which are not in the seed image, to the target cluster. Also applies ConfigMap objects for custom catalog sources. 4 ConfigMap resources that contain the OADP Backup and Restore CRs. 5 Upgrade plan details. 6 Number of clusters to update in a batch. 7 Timeout limit to complete the action in minutes. Apply the created file by running the following command on the hub cluster: USD oc apply -f <filename>.yaml Verification Monitor the status updates by running the following command: USD oc get ibgu -o yaml Example output # ... status: clusters: - completedActions: - action: Prep failedActions: - action: Upgrade name: spoke1 - completedActions: - action: Prep - action: Upgrade - action: FinalizeUpgrade name: spoke4 - failedActions: - action: Prep name: spoke6 # ... 15.4.4. Canceling an image-based upgrade on managed clusters at scale You can cancel the upgrade on a set of managed clusters that completed the Prep stage. Important Only certain action combinations are supported and listed in Supported action combinations . Prerequisites You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create a separate YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR: apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: - matchExpressions: - key: name operator: In values: - spoke4 ibuSpec: seedImageRef: image: quay.io/seed/image:4.16.0-rc.1 version: 4.16.0-rc.1 pullSecretRef: name: "<seed_pull_secret>" extraManifests: - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: - name: oadp-cm namespace: openshift-adp plan: - actions: ["Abort"] rolloutStrategy: maxConcurrency: 5 timeout: 10 All managed clusters that completed the Prep stage are moved back to the Idle stage. Apply the created file by running the following command on the hub cluster: USD oc apply -f <filename>.yaml Verification Monitor the status updates by running the following command: USD oc get ibgu -o yaml Example output # ... status: clusters: - completedActions: - action: Prep currentActions: - action: Abort name: spoke4 # ... Additional resources Supported action combinations 15.4.5. Rolling back an image-based upgrade on managed clusters at scale Roll back the changes on a set of managed clusters if you encounter unresolvable issues after a successful upgrade. You need to create a separate ImageBasedGroupUpgrade CR and define the set of managed clusters that you want to roll back. Important Only certain action combinations are supported and listed in Supported action combinations . 
Prerequisites You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create a separate YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR: apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: - matchExpressions: - key: name operator: In values: - spoke4 ibuSpec: seedImageRef: image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: "<seed_pull_secret>" extraManifests: - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: - name: oadp-cm namespace: openshift-adp plan: - actions: ["Rollback", "FinalizeRollback"] rolloutStrategy: maxConcurrency: 200 timeout: 2400 Apply the created file by running the following command on the hub cluster: USD oc apply -f <filename>.yaml All managed clusters that match the defined labels are moved back to the Rollback and then the Idle stages to finalize the rollback. Verification Monitor the status updates by running the following command: USD oc get ibgu -o yaml Example output # ... status: clusters: - completedActions: - action: Rollback - action: FinalizeRollback name: spoke4 # ... Additional resources Supported action combinations Recovering from expired control plane certificates 15.4.6. Troubleshooting image-based upgrades with Lifecycle Agent Perform troubleshooting steps on the managed clusters that are affected by an issue. Important If you are using the ImageBasedGroupUpgrade CR to upgrade your clusters, ensure that the lcm.openshift.io/ibgu-<stage>-completed or lcm.openshift.io/ibgu-<stage>-failed cluster labels are updated properly after performing troubleshooting or recovery steps on the managed clusters. This ensures that the TALM continues to manage the image-based upgrade for the cluster. 15.4.6.1. Collecting logs You can use the oc adm must-gather CLI to collect information for debugging and troubleshooting. Procedure Collect data about the Operators by running the following command: USD oc adm must-gather \ --dest-dir=must-gather/tmp \ --image=USD(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == "manager")].image}') \ --image=quay.io/konveyor/oadp-must-gather:latest \ 1 --image=quay.io/openshift/origin-must-gather:latest 2 1 (Optional) You can add this option if you need to gather more information from the OADP Operator. 2 (Optional) You can add this option if you need to gather more information from the SR-IOV Operator. 15.4.6.2. AbortFailed or FinalizeFailed error Issue During the finalize stage or when you stop the process at the Prep stage, Lifecycle Agent cleans up the following resources: Stateroot that is no longer required Precaching resources OADP CRs ImageBasedUpgrade CR If the Lifecycle Agent fails to perform the above steps, it transitions to the AbortFailed or FinalizeFailed states. The condition message and log show which steps failed. Example error message message: failed to delete all the backup CRs. Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle observedGeneration: 5 reason: AbortFailed status: "False" type: Idle Resolution Inspect the logs to determine why the failure occurred. To prompt Lifecycle Agent to retry the cleanup, add the lca.openshift.io/manual-cleanup-done annotation to the ImageBasedUpgrade CR.
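For example, assuming the single ImageBasedUpgrade CR named upgrade that the Lifecycle Agent creates, you might add the annotation as follows; the annotation value itself is not significant:
USD oc annotate ibu upgrade lca.openshift.io/manual-cleanup-done=true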
After observing this annotation, Lifecycle Agent retries the cleanup and, if it is successful, the ImageBasedUpgrade stage transitions to Idle . If the cleanup fails again, you can manually clean up the resources. 15.4.6.2.1. Cleaning up stateroot manually Issue Stopping at the Prep stage, Lifecycle Agent cleans up the new stateroot. When finalizing after a successful upgrade or a rollback, Lifecycle Agent cleans up the old stateroot. If this step fails, it is recommended that you inspect the logs to determine why the failure occurred. Resolution Check if there are any existing deployments in the stateroot by running the following command: USD ostree admin status If there are any, clean up the existing deployment by running the following command: USD ostree admin undeploy <index_of_deployment> After cleaning up all the deployments of the stateroot, wipe the stateroot directory by running the following commands: Warning Ensure that the booted deployment is not in this stateroot. USD stateroot="<stateroot_to_delete>" USD unshare -m /bin/sh -c "mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/USD{stateroot}" 15.4.6.2.2. Cleaning up OADP resources manually Issue Automatic cleanup of OADP resources can fail due to connection issues between Lifecycle Agent and the S3 backend. By restoring the connection and adding the lca.openshift.io/manual-cleanup-done annotation, the Lifecycle Agent can successfully cleanup backup resources. Resolution Check the backend connectivity by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true Remove all backup resources and then add the lca.openshift.io/manual-cleanup-done annotation to the ImageBasedUpgrade CR. 15.4.6.3. LVM Storage volume contents not restored When LVM Storage is used to provide dynamic persistent volume storage, LVM Storage might not restore the persistent volume contents if it is configured incorrectly. 15.4.6.3.1. Missing LVM Storage-related fields in Backup CR Issue Your Backup CRs might be missing fields that are needed to restore your persistent volumes. You can check for events in your application pod to determine if you have this issue by running the following: USD oc describe pod <your_app_name> Example output showing missing LVM Storage-related fields in Backup CR Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 58s (x2 over 66s) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Normal Scheduled 56s default-scheduler Successfully assigned default/db-1234 to sno1.example.lab Warning FailedMount 24s (x7 over 55s) kubelet MountVolume.SetUp failed for volume "pvc-1234" : rpc error: code = Unknown desc = VolumeID is not found Resolution You must include logicalvolumes.topolvm.io in the application Backup CR. Without this resource, the application restores its persistent volume claims and persistent volume manifests correctly, however, the logicalvolume associated with this persistent volume is not restored properly after pivot. 
Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: 1 - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io 1 To restore the persistent volumes for your application, you must configure this section as shown. 15.4.6.3.2. Missing LVM Storage-related fields in Restore CR Issue The expected resources for the applications are restored but the persistent volume contents are not preserved after upgrading. List the persistent volumes for you applications by running the following command before pivot: USD oc get pv,pvc,logicalvolumes.topolvm.io -A Example output before pivot NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Retain Bound default/pvc-db lvms-vg1 4h45m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 4h45m NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 4h45m List the persistent volumes for you applications by running the following command after pivot: USD oc get pv,pvc,logicalvolumes.topolvm.io -A Example output after pivot NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Delete Bound default/pvc-db lvms-vg1 19s NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 19s NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 18s Resolution The reason for this issue is that the logicalvolume status is not preserved in the Restore CR. This status is important because it is required for Velero to reference the volumes that must be preserved after pivoting. You must include the following fields in the application Restore CR: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: sample-vote-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "3" spec: backupName: sample-vote-app restorePVs: true 1 restoreStatus: 2 includedResources: - logicalvolumes 1 To preserve the persistent volumes for your application, you must set restorePVs to true . 2 To preserve the persistent volumes for your application, you must configure this section as shown. 15.4.6.4. Debugging failed Backup and Restore CRs Issue The backup or restoration of artifacts failed. Resolution You can debug Backup and Restore CRs and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Describe the Backup CR that contains errors by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details Describe the Restore CR that contains errors by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details Download the backed up resources to a local directory by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz
[ "apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage spec: seedImage: <seed_image>", "apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle 1 seedImageRef: 2 version: <target_version> image: <seed_container_image> pullSecretRef: name: <seed_pull_secret> autoRollbackOnFailure: {} initMonitorTimeoutSeconds: 1800 3 extraManifests: 4 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 5 - name: oadp-cm-example namespace: openshift-adp", "apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet namespace: openshift-adp annotations: lca.openshift.io/apply-label: rbac.authorization.k8s.io/v1/clusterroles/klusterlet,apps/v1/deployments/open-cluster-management-agent/klusterlet 1 labels: velero.io/storage-location: default spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - clusterroles includedNamespaceScopedResources: - deployments", "apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: annotations: lca.openshift.io/target-ocp-version-manifest-count: \"5\" name: upgrade", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-containers-partitioned spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 partitions: - label: var-lib-containers startMiB: <start_of_partition> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var-lib-containers format: xfs mountOptions: - defaults - prjquota path: /var/lib/containers wipeFilesystem: true systemd: units: - contents: |- # Generated by Butane [Unit] Before=local-fs.target Requires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service After=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service [Mount] Where=/var/lib/containers What=/dev/disk/by-partlabel/var-lib-containers Type=xfs Options=defaults,prjquota [Install] RequiredBy=local-fs.target enabled: true name: var-lib-containers.mount", "variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota", "butane storage.bu", "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}", "[...] 
spec: clusters: - nodes: - hostName: <name> ignitionConfigOverride: '{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}' [...]", "oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations[\"bmac.agent-install.openshift.io/ignition-config-overrides\"]'", "\"{\\\"ignition\\\":{\\\"version\\\":\\\"3.2.0\\\"},\\\"storage\\\":{\\\"disks\\\":[{\\\"device\\\":\\\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\\\",\\\"partitions\\\":[{\\\"label\\\":\\\"var-lib-containers\\\",\\\"sizeMiB\\\":0,\\\"startMiB\\\":250000}],\\\"wipeTable\\\":false}],\\\"filesystems\\\":[{\\\"device\\\":\\\"/dev/disk/by-partlabel/var-lib-containers\\\",\\\"format\\\":\\\"xfs\\\",\\\"mountOptions\\\":[\\\"defaults\\\",\\\"prjquota\\\"],\\\"path\\\":\\\"/var/lib/containers\\\",\\\"wipeFilesystem\\\":true}]},\\\"systemd\\\":{\\\"units\\\":[{\\\"contents\\\":\\\"# Generated by Butane\\\\n[Unit]\\\\nRequires=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\nAfter=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\n\\\\n[Mount]\\\\nWhere=/var/lib/containers\\\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\\\nType=xfs\\\\nOptions=defaults,prjquota\\\\n\\\\n[Install]\\\\nRequiredBy=local-fs.target\\\",\\\"enabled\\\":true,\\\"name\\\":\\\"var-lib-containers.mount\\\"}]}}\"", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers", "df -h", "Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000", "apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management", "oc create -f lcao-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-lifecycle-agent namespace: openshift-lifecycle-agent spec: targetNamespaces: - openshift-lifecycle-agent", "oc create -f lcao-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-lifecycle-agent-subscription namespace: openshift-lifecycle-agent spec: channel: \"stable\" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f lcao-subscription.yaml", "oc get csv 
-n openshift-lifecycle-agent", "NAME DISPLAY VERSION REPLACES PHASE lifecycle-agent.v4.16.0 Openshift Lifecycle Agent 4.16.0 Succeeded", "oc get deploy -n openshift-lifecycle-agent", "NAME READY UP-TO-DATE AVAILABLE AGE lifecycle-agent-controller-manager 1/1 1 1 14s", "apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management ran.openshift.io/ztp-deploy-wave: \"2\" labels: kubernetes.io/metadata.name: openshift-lifecycle-agent", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lifecycle-agent-operatorgroup namespace: openshift-lifecycle-agent annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: targetNamespaces: - openshift-lifecycle-agent", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: \"stable\" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "├── kustomization.yaml ├── sno │ ├── example-cnf.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ └── ns.yaml ├── source-crs │ ├── LcaSubscriptionNS.yaml │ ├── LcaSubscriptionOperGroup.yaml │ ├── LcaSubscription.yaml", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-common-latest\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" du-profile: \"latest\" sourceFiles: - fileName: LcaSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: LcaSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: LcaSubscription.yaml policyName: \"subscriptions-policy\" [...]", "apiVersion: v1 kind: Namespace metadata: name: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" labels: kubernetes.io/metadata.name: openshift-adp", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: redhat-oadp-operator namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: targetNamespaces: - openshift-adp", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: redhat-oadp-operator namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: stable-1.4 name: redhat-oadp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: operators.coreos.com/v1 kind: Operator metadata: name: redhat-oadp-operator.openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" status: components: refs: - kind: Subscription namespace: openshift-adp conditions: - type: CatalogSourcesUnhealthy status: \"False\" - kind: InstallPlan namespace: openshift-adp conditions: - type: Installed status: \"True\" - kind: ClusterServiceVersion namespace: openshift-adp conditions: - type: Succeeded status: \"True\" reason: InstallSucceeded", "├── kustomization.yaml ├── sno │ ├── example-cnf.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ └── ns.yaml ├── source-crs │ ├── OadpSubscriptionNS.yaml │ ├── OadpSubscriptionOperGroup.yaml │ ├── OadpSubscription.yaml │ ├── OadpOperatorStatus.yaml", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-common-latest\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" du-profile: \"latest\" 
sourceFiles: - fileName: OadpSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: OadpSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: OadpSubscription.yaml policyName: \"subscriptions-policy\" - fileName: OadpOperatorStatus.yaml policyName: \"subscriptions-policy\" [...]", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dataprotectionapplication namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: configuration: restic: enable: false 1 velero: defaultPlugins: - aws - openshift resourceTimeout: 10m backupLocations: - velero: config: profile: \"default\" region: minio s3Url: USDurl insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: USDbucketName 2 prefix: USDprefixName 3 status: conditions: - reason: Complete status: \"True\" type: Reconciled", "apiVersion: v1 kind: Secret metadata: name: cloud-credentials namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"100\" type: Opaque", "apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"100\" status: phase: Available", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-cnf\" namespace: \"ztp-site\" spec: bindingRules: sites: \"example-cnf\" du-profile: \"latest\" mcp: \"master\" sourceFiles: - fileName: OadpSecret.yaml policyName: \"config-policy\" data: cloud: <your_credentials> 1 - fileName: DataProtectionApplication.yaml policyName: \"config-policy\" spec: backupLocations: - velero: config: region: minio s3Url: <your_S3_URL> 2 profile: \"default\" insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <your_bucket_name> 3 prefix: <cluster_name> 4 - fileName: OadpBackupStorageLocationStatus.yaml policyName: \"config-policy\"", "oc delete managedcluster sno-worker-example", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: #- example-seed-sno1.yaml - example-target-sno2.yaml - example-target-sno3.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: {}", "MY_USER=myuserid AUTHFILE=/tmp/my-auth.json podman login --authfile USD{AUTHFILE} -u USD{MY_USER} quay.io/USD{MY_USER}", "base64 -w 0 USD{AUTHFILE} ; echo", "apiVersion: v1 kind: Secret metadata: name: seedgen 1 namespace: openshift-lifecycle-agent type: Opaque data: seedAuth: <encoded_AUTHFILE> 2", "oc apply -f secretseedgenerator.yaml", "apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage 1 spec: seedImage: <seed_container_image> 2", "oc apply -f seedgenerator.yaml", "oc get seedgenerator -o yaml", "status: conditions: - lastTransitionTime: \"2024-02-13T21:24:26Z\" message: Seed Generation completed observedGeneration: 1 reason: Completed status: \"False\" type: SeedGenInProgress - lastTransitionTime: \"2024-02-13T21:24:26Z\" message: Seed Generation completed observedGeneration: 1 reason: Completed status: \"True\" type: SeedGenCompleted 1 observedGeneration: 1", "apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet annotations: lca.openshift.io/apply-label: 
\"apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials\" 1 labels: velero.io/storage-location: default namespace: openshift-adp spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - klusterlets.operator.open-cluster-management.io - clusterroles.rbac.authorization.k8s.io - clusterrolebindings.rbac.authorization.k8s.io - priorityclasses.scheduling.k8s.io includedNamespaceScopedResources: - deployments - serviceaccounts - secrets excludedNamespaceScopedResources: [] --- apiVersion: velero.io/v1 kind: Restore metadata: name: acm-klusterlet namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"1\" spec: backupName: acm-klusterlet", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: lvmcluster namespace: openshift-adp spec: includedNamespaces: - openshift-storage includedNamespaceScopedResources: - lvmclusters - lvmvolumegroups - lvmvolumegroupnodestatuses --- apiVersion: velero.io/v1 kind: Restore metadata: name: lvmcluster namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"2\" 1 spec: backupName: lvmcluster", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: lca.openshift.io/apply-label: \"apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test\" 1 name: backup-app-cluster-resources labels: velero.io/storage-location: default namespace: openshift-adp spec: includedClusterScopedResources: - customresourcedefinitions - securitycontextconstraints - clusterrolebindings - clusterroles excludedClusterScopedResources: - Namespace --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app-cluster-resources namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" 2 spec: backupName: backup-app-cluster-resources", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 excludedClusterScopedResources: - persistentVolumes --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: 
includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 includedClusterScopedResources: - persistentVolumes 2 - logicalvolumes.topolvm.io 3 - volumesnapshotcontents 4 --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app restorePVs: true restoreStatus: includedResources: - logicalvolumes 5", "oc create configmap oadp-cm-example --from-file=example-oadp-resources.yaml=<path_to_oadp_crs> -n openshift-adp", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"oadpContent\": [{\"name\": \"oadp-cm-example\", \"namespace\": \"openshift-adp\"}]}}' --type=merge -n openshift-lifecycle-agent", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: \"example-sriov-node-policy\" namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [ens1f0] nodeSelector: node-role.kubernetes.io/master: \"\" mtu: 1500 numVfs: 8 priority: 99 resourceName: example-sriov-node-policy --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"example-sriov-network\" namespace: openshift-sriov-network-operator spec: ipam: |- { } linkState: auto networkNamespace: sriov-namespace resourceName: example-sriov-node-policy spoofChk: \"on\" trust: \"off\"", "oc create configmap example-extra-manifests-cm --from-file=example-extra-manifests.yaml=<path_to_extramanifest> -n openshift-lifecycle-agent", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"extraManifests\": [{\"name\": \"example-extra-manifests-cm\", \"namespace\": \"openshift-lifecycle-agent\"}]}}' --type=merge -n openshift-lifecycle-agent", "apiVersion: operators.coreos.com/v1 kind: CatalogSource metadata: name: example-catalogsources namespace: openshift-marketplace spec: sourceType: grpc displayName: disconnected-redhat-operators image: quay.io/example-org/example-catalog:v1", "oc create configmap example-catalogsources-cm --from-file=example-catalogsources.yaml=<path_to_catalogsource_cr> -n openshift-lifecycle-agent", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"extraManifests\": [{\"name\": \"example-catalogsources-cm\", \"namespace\": \"openshift-lifecycle-agent\"}]}}' --type=merge -n openshift-lifecycle-agent", "├── source-crs/ │ ├── ibu/ │ │ ├── ImageBasedUpgrade.yaml │ │ ├── PlatformBackupRestore.yaml │ │ ├── PlatformBackupRestoreLvms.yaml │ │ ├── PlatformBackupRestoreWithIBGU.yaml ├── ├── kustomization.yaml", "apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet annotations: lca.openshift.io/apply-label: 
\"apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-work:ibu-role,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials\" 1 labels: velero.io/storage-location: default namespace: openshift-adp spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - klusterlets.operator.open-cluster-management.io - clusterroles.rbac.authorization.k8s.io - clusterrolebindings.rbac.authorization.k8s.io - priorityclasses.scheduling.k8s.io includedNamespaceScopedResources: - deployments - serviceaccounts - secrets excludedNamespaceScopedResources: [] --- apiVersion: velero.io/v1 kind: Restore metadata: name: acm-klusterlet namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"1\" spec: backupName: acm-klusterlet", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: lvmcluster namespace: openshift-adp spec: includedNamespaces: - openshift-storage includedNamespaceScopedResources: - lvmclusters - lvmvolumegroups - lvmvolumegroupnodestatuses --- apiVersion: velero.io/v1 kind: Restore metadata: name: lvmcluster namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"2\" 1 spec: backupName: lvmcluster", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: lca.openshift.io/apply-label: \"apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test\" 1 name: backup-app-cluster-resources labels: velero.io/storage-location: default namespace: openshift-adp spec: includedClusterScopedResources: - customresourcedefinitions - securitycontextconstraints - clusterrolebindings - clusterroles excludedClusterScopedResources: - Namespace --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app-cluster-resources namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" 2 spec: backupName: backup-app-cluster-resources", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 excludedClusterScopedResources: - persistentVolumes --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app", "apiVersion: velero.io/v1 kind: Backup metadata: labels: 
velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 includedClusterScopedResources: - persistentVolumes 2 - logicalvolumes.topolvm.io 3 - volumesnapshotcontents 4 --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app restorePVs: true restoreStatus: includedResources: - logicalvolumes 5", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization configMapGenerator: 1 - files: - source-crs/ibu/PlatformBackupRestoreWithIBGU.yaml #- source-crs/custom-crs/ApplicationClusterScopedBackupRestore.yaml #- source-crs/custom-crs/ApplicationApplicationBackupRestoreLso.yaml name: oadp-cm namespace: openshift-adp 2 generatorOptions: disableNameSuffixHash: true", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: example-sno spec: bindingRules: sites: \"example-sno\" du-profile: \"4.15\" mcp: \"master\" sourceFiles: - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-fh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" 1 spec: resourceName: du_fh vlan: 140 - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-fh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: deviceType: netdevice isRdma: false nicSelector: pfNames: [\"ens5f0\"] numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: resourceName: du_mh vlan: 150 - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-mh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [\"ens7f0\"] numVfs: 8 priority: 10 resourceName: du_mh - fileName: DefaultCatsrc.yaml 2 policyName: \"config-policy\" metadata: name: default-cat-source namespace: openshift-marketplace labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: displayName: default-cat-source image: quay.io/example-org/example-catalog:v1", "oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/disk-usage-threshold-percent='65'", "oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/disk-usage-threshold-percent-", "oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/on-prep='Disabled'", "oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/on-prep-", "skopeo inspect docker://<imagename> | jq -r '.Labels.\"com.openshift.lifecycle-agent.seed_cluster_info\" | fromjson | .release_registry'", "apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle seedImageRef: version: 4.15.2 1 image: <seed_container_image> 2 pullSecretRef: <seed_pull_secret> 3 autoRollbackOnFailure: {} initMonitorTimeoutSeconds: 1800 4 extraManifests: 5 - name: example-extra-manifests-cm namespace: openshift-lifecycle-agent - name: example-catalogsources-cm namespace: openshift-lifecycle-agent oadpContent: 6 - name: oadp-cm-example namespace: openshift-adp", "oc patch 
imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Prep\"}}' --type=merge -n openshift-lifecycle-agent", "[...] metadata: annotations: extra-manifest.lca.openshift.io/validation-warning: '...' [...]", "oc get ibu -o yaml", "conditions: - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: In progress observedGeneration: 13 reason: InProgress status: \"False\" type: Idle - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed observedGeneration: 13 reason: Completed status: \"False\" type: PrepInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep stage completed successfully observedGeneration: 13 reason: Completed status: \"True\" type: PrepCompleted observedGeneration: 13 validNextStages: - Idle - Upgrade", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Upgrade\"}}' --type=merge", "oc get ibu -o yaml", "status: conditions: - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: In progress observedGeneration: 5 reason: InProgress status: \"False\" type: Idle - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed observedGeneration: 5 reason: Completed status: \"False\" type: PrepInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed successfully observedGeneration: 5 reason: Completed status: \"True\" type: PrepCompleted - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: |- Waiting for system to stabilize: one or more health checks failed - one or more ClusterOperators not yet ready: authentication - one or more MachineConfigPools not yet ready: master - one or more ClusterServiceVersions not yet ready: sriov-fec.v2.8.0 observedGeneration: 1 reason: InProgress status: \"True\" type: UpgradeInProgress observedGeneration: 1 rollbackAvailabilityExpiration: \"2024-05-19T14:01:52Z\" validNextStages: - Rollback", "oc get ibu -o yaml", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Idle\"}}' --type=merge", "oc get ibu -o yaml", "status: conditions: - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: In progress observedGeneration: 5 reason: InProgress status: \"False\" type: Idle - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed observedGeneration: 5 reason: Completed status: \"False\" type: PrepInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed successfully observedGeneration: 5 reason: Completed status: \"True\" type: PrepCompleted - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Upgrade completed observedGeneration: 1 reason: Completed status: \"False\" type: UpgradeInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Upgrade completed observedGeneration: 1 reason: Completed status: \"True\" type: UpgradeCompleted observedGeneration: 1 rollbackAvailabilityExpiration: \"2024-01-01T09:00:00Z\" validNextStages: - Idle - Rollback", "oc get restores -n openshift-adp -o custom-columns=NAME:.metadata.name,Status:.status.phase,Reason:.status.failureReason", "NAME Status Reason acm-klusterlet Completed <none> 1 apache-app Completed <none> localvolume Completed <none>", "apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle seedImageRef: version: 4.15.2 image: <seed_container_image> autoRollbackOnFailure: {} initMonitorTimeoutSeconds: 1800 1 [...]", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Rollback\"}}' --type=merge", "oc patch imagebasedupgrades.lca.openshift.io 
upgrade -p='{\"spec\": {\"stage\": \"Idle\"}}' --type=merge -n openshift-lifecycle-agent", "oc adm must-gather --dest-dir=must-gather/tmp --image=USD(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == \"manager\")].image}') --image=quay.io/konveyor/oadp-must-gather:latest \\ 1 --image=quay.io/openshift/origin-must-gather:latest 2", "message: failed to delete all the backup CRs. Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle observedGeneration: 5 reason: AbortFailed status: \"False\" type: Idle", "ostree admin status", "ostree admin undeploy <index_of_deployment>", "stateroot=\"<stateroot_to_delete>\"", "unshare -m /bin/sh -c \"mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/USD{stateroot}\"", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true", "oc describe pod <your_app_name>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 58s (x2 over 66s) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Normal Scheduled 56s default-scheduler Successfully assigned default/db-1234 to sno1.example.lab Warning FailedMount 24s (x7 over 55s) kubelet MountVolume.SetUp failed for volume \"pvc-1234\" : rpc error: code = Unknown desc = VolumeID is not found", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: 1 - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io", "oc get pv,pvc,logicalvolumes.topolvm.io -A", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Retain Bound default/pvc-db lvms-vg1 4h45m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 4h45m NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 4h45m", "oc get pv,pvc,logicalvolumes.topolvm.io -A", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Delete Bound default/pvc-db lvms-vg1 19s NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 19s NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 18s", "apiVersion: velero.io/v1 kind: Restore metadata: name: sample-vote-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" spec: backupName: sample-vote-app restorePVs: true 1 restoreStatus: 2 includedResources: - logicalvolumes", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz", "apiVersion: 
lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: [\"Prep\", \"Upgrade\", \"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 200 6 timeout: 2400 7", "plan: - actions: [\"Prep\", \"Upgrade\", \"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 200 timeout: 60", "plan: - actions: [\"Prep\", \"Upgrade\"] rolloutStrategy: maxConcurrency: 200 timeout: 60 - actions: [\"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 500 timeout: 10", "plan: - actions: [\"Prep\"] rolloutStrategy: maxConcurrency: 200 timeout: 60 - actions: [\"Upgrade\"] rolloutStrategy: maxConcurrency: 200 timeout: 20 - actions: [\"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 500 timeout: 10", "apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.16.0-rc.1 version: 4.16.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: [\"Prep\"] rolloutStrategy: maxConcurrency: 2 timeout: 2400", "oc apply -f <filename>.yaml", "oc get ibgu -o yaml", "status: clusters: - completedActions: - action: Prep name: spoke1 - completedActions: - action: Prep name: spoke4 - failedActions: - action: Prep name: spoke6", "oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"AbortOnFailure\"], \"rolloutStrategy\": {\"maxConcurrency\": 5, \"timeout\": 10}}}]'", "oc get ibgu -o yaml", "oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"Upgrade\"], \"rolloutStrategy\": {\"maxConcurrency\": 2, \"timeout\": 30}}}]'", "oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"AbortOnFailure\"], \"rolloutStrategy\": {\"maxConcurrency\": 5, \"timeout\": 10}}}]'", "oc get ibgu -o yaml", "oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"FinalizeUpgrade\"], \"rolloutStrategy\": {\"maxConcurrency\": 10, \"timeout\": 3}}}]'", "oc get ibgu -o yaml", "status: clusters: - completedActions: - action: Prep - action: AbortOnFailure failedActions: - action: Upgrade name: spoke1 - completedActions: - action: Prep - action: Upgrade - action: FinalizeUpgrade name: spoke4 - completedActions: - action: AbortOnFailure failedActions: - action: Prep name: spoke6", "apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent 
oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: [\"Prep\", \"Upgrade\", \"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 200 6 timeout: 2400 7", "oc apply -f <filename>.yaml", "oc get ibgu -o yaml", "status: clusters: - completedActions: - action: Prep failedActions: - action: Upgrade name: spoke1 - completedActions: - action: Prep - action: Upgrade - action: FinalizeUpgrade name: spoke4 - failedActions: - action: Prep name: spoke6", "apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: - matchExpressions: - key: name operator: In values: - spoke4 ibuSpec: seedImageRef: image: quay.io/seed/image:4.16.0-rc.1 version: 4.16.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: - name: oadp-cm namespace: openshift-adp plan: - actions: [\"Abort\"] rolloutStrategy: maxConcurrency: 5 timeout: 10", "oc apply -f <filename>.yaml", "oc get ibgu -o yaml", "status: clusters: - completedActions: - action: Prep currentActions: - action: Abort name: spoke4", "apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: - matchExpressions: - key: name operator: In values: - spoke4 ibuSpec: seedImageRef: image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: - name: oadp-cm namespace: openshift-adp plan: - actions: [\"Rollback\", \"FinalizeRollback\"] rolloutStrategy: maxConcurrency: 200 timeout: 2400", "oc apply -f <filename>.yaml", "oc get ibgu -o yaml", "status: clusters: - completedActions: - action: Rollback - action: FinalizeRollback name: spoke4", "oc adm must-gather --dest-dir=must-gather/tmp --image=USD(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == \"manager\")].image}') --image=quay.io/konveyor/oadp-must-gather:latest \\ 1 --image=quay.io/openshift/origin-must-gather:latest 2", "message: failed to delete all the backup CRs. Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle observedGeneration: 5 reason: AbortFailed status: \"False\" type: Idle", "ostree admin status", "ostree admin undeploy <index_of_deployment>", "stateroot=\"<stateroot_to_delete>\"", "unshare -m /bin/sh -c \"mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/USD{stateroot}\"", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true", "oc describe pod <your_app_name>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 58s (x2 over 66s) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Normal Scheduled 56s default-scheduler Successfully assigned default/db-1234 to sno1.example.lab Warning FailedMount 24s (x7 over 55s) kubelet MountVolume.SetUp failed for volume \"pvc-1234\" : rpc error: code = Unknown desc = VolumeID is not found", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: 1 - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io", "oc get pv,pvc,logicalvolumes.topolvm.io -A", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Retain Bound default/pvc-db lvms-vg1 4h45m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 4h45m NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 4h45m", "oc get pv,pvc,logicalvolumes.topolvm.io -A", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Delete Bound default/pvc-db lvms-vg1 19s NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 19s NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 18s", "apiVersion: velero.io/v1 kind: Restore metadata: name: sample-vote-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" spec: backupName: sample-vote-app restorePVs: true 1 restoreStatus: 2 includedResources: - logicalvolumes", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/edge_computing/image-based-upgrade-for-single-node-openshift-clusters
Chapter 132. AutoRestart schema reference
Chapter 132. AutoRestart schema reference Used in: KafkaConnectorSpec , KafkaMirrorMaker2ConnectorSpec Full list of AutoRestart schema properties Configures automatic restarts for connectors and tasks that are in a FAILED state. When enabled, a back-off algorithm applies the automatic restart to each failed connector and its tasks. The operator attempts an automatic restart on reconciliation. If the first attempt fails, the operator makes up to six more attempts. The duration between each restart attempt increases from 2 to 30 minutes. After each restart, failed connectors and tasks transition from FAILED to RESTARTING . If the restart fails after the final attempt, there is likely to be a problem with the connector configuration. The connector and tasks remain in a FAILED state and you have to restart them manually. You can do this by annotating the KafkaConnector custom resource with strimzi.io/restart: "true" . For Kafka Connect connectors, use the autoRestart property of the KafkaConnector resource to enable automatic restarts of failed connectors and tasks. Enabling automatic restarts of failed connectors for Kafka Connect apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector spec: autoRestart: enabled: true For MirrorMaker 2, use the autoRestart property of connectors in the KafkaMirrorMaker2 resource to enable automatic restarts of failed connectors and tasks. Enabling automatic restarts of failed connectors for MirrorMaker 2 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: mirrors: - sourceConnector: autoRestart: enabled: true # ... heartbeatConnector: autoRestart: enabled: true # ... checkpointConnector: autoRestart: enabled: true # ... 132.1. AutoRestart schema properties Property Description enabled Whether automatic restart for failed connectors and tasks should be enabled or disabled. boolean
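The manual restart mentioned above can be triggered with a single annotation. The following is a minimal sketch, assuming a KafkaConnector named my-source-connector in a namespace called kafka (both names are placeholders):
# Apply the strimzi.io/restart annotation to restart a connector that remained in a FAILED state
oc annotate kafkaconnector my-source-connector strimzi.io/restart="true" -n kafka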
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector spec: autoRestart: enabled: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: mirrors: - sourceConnector: autoRestart: enabled: true # heartbeatConnector: autoRestart: enabled: true # checkpointConnector: autoRestart: enabled: true #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-autorestart-reference
11.5. Configuring Static Routes in ifcfg files
11.5. Configuring Static Routes in ifcfg files Static routes set using ip commands at the command prompt will be lost if the system is shut down or restarted. To configure static routes to be persistent after a system restart, they must be placed in per-interface configuration files in the /etc/sysconfig/network-scripts/ directory. The file name should be of the format route- ifname . There are two types of commands to use in the configuration files: ip commands as explained in Section 11.5.1, "Static Routes Using the IP Command Arguments Format" and the Network/Netmask format as explained in Section 11.5.2, "Network/Netmask Directives Format" . 11.5.1. Static Routes Using the IP Command Arguments Format If required, define a route to a default gateway on the first line of the per-interface configuration file, for example /etc/sysconfig/network-scripts/route-eth0 . This is only required if the gateway is not set via DHCP and is not set globally in the /etc/sysconfig/network file: default via 192.168.1.1 dev interface where 192.168.1.1 is the IP address of the default gateway. The interface is the interface that is connected to, or can reach, the default gateway. The dev option is optional and can be omitted. Note that this setting takes precedence over a setting in the /etc/sysconfig/network file. If a route to a remote network is required, a static route can be specified as follows. Each line is parsed as an individual route: 10.10.10.0/24 via 192.168.1.1 [ dev interface ] where 10.10.10.0/24 is the network address and prefix length of the remote or destination network. The address 192.168.1.1 is the IP address leading to the remote network. It is preferably the next hop address but the address of the exit interface will work. The " next hop " means the remote end of a link, for example a gateway or router. The dev option can be used to specify the exit interface interface but it is not required. Add as many static routes as required. The following is an example of a route- interface file using the ip command arguments format. The default gateway is 192.168.0.1 , the interface is eth0 , and a leased line or WAN connection is available at 192.168.0.10 . The two static routes are for reaching the 10.10.10.0/24 network and the 172.16.1.10/32 host: In the above example, packets going to the local 192.168.0.0/24 network will be directed out the interface attached to that network. Packets going to the 10.10.10.0/24 network and 172.16.1.10/32 host will be directed to 192.168.0.10 . Packets to unknown, remote networks will use the default gateway; therefore, static routes should only be configured for remote networks or hosts if the default route is not suitable. Remote in this context means any networks or hosts that are not directly attached to the system. Specifying an exit interface is optional. It can be useful if you want to force traffic out of a specific interface. For example, in the case of a VPN, you can force traffic to a remote network to pass through a tun0 interface even when the interface is in a different subnet to the destination network. Important If the default gateway is already assigned from DHCP , the IP command arguments format can cause one of two errors during start-up, or when bringing up an interface from the down state using the ifup command: "RTNETLINK answers: File exists" or 'Error: either "to" is a duplicate, or " X.X.X.X " is a garbage.', where X.X.X.X is the gateway, or a different IP address.
These errors can also occur if you have another route to another network using the default gateway. Both of these errors are safe to ignore.
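After creating or editing a route- interface file, the routes can be verified without a full reboot. The following is a minimal sketch, assuming the interface is eth0 ; note that restarting the interface briefly interrupts connectivity on it:
# Re-run the interface scripts so the new route file is read, then inspect the routing table
ifdown eth0 && ifup eth0
ip route show   # the 10.10.10.0/24 and 172.16.1.10 routes from the example above should be listed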
[ "default via 192.168.0.1 dev eth0 10.10.10.0/24 via 192.168.0.10 dev eth0 172.16.1.10/32 via 192.168.0.10 dev eth0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Configuring_Static_Routes_in_ifcfg_files
Using Go 1.21.0 Toolset
Using Go 1.21.0 Toolset Red Hat Developer Tools 1 Installing and using Go 1.21.0 Toolset Jacob Valdez [email protected] Red Hat Developer Group Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.21.0_toolset/index
Chapter 11. Image [config.openshift.io/v1]
Chapter 11. Image [config.openshift.io/v1] Description Image governs policies related to imagestream imports and runtime configuration for external registries. It allows cluster admins to configure which registries OpenShift is allowed to import images from, extra CA trust bundles for external registries, and policies to block or allow registry hostnames. When exposing OpenShift's image registry to the public, this also lets cluster admins specify the external hostname. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 11.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description additionalTrustedCA object additionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted during imagestream import, pod image pull, build image pull, and imageregistry pullthrough. The namespace for this config map is openshift-config. allowedRegistriesForImport array allowedRegistriesForImport limits the container image registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions. allowedRegistriesForImport[] object RegistryLocation contains a location of the registry specified by the registry domain name. The domain name might include wildcards, like '*' or '??'. externalRegistryHostnames array (string) externalRegistryHostnames provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in 'publicDockerImageRepository' field in ImageStreams. The value must be in "hostname[:port]" format. registrySources object registrySources contains configuration that determines how the container runtime should treat individual registries when accessing images for builds+pods. (e.g. whether or not to allow insecure access). It does not contain configuration for the internal cluster registry. 11.1.2. .spec.additionalTrustedCA Description additionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted during imagestream import, pod image pull, build image pull, and imageregistry pullthrough. 
The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 11.1.3. .spec.allowedRegistriesForImport Description allowedRegistriesForImport limits the container image registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions. Type array 11.1.4. .spec.allowedRegistriesForImport[] Description RegistryLocation contains a location of the registry specified by the registry domain name. The domain name might include wildcards, like '*' or '??'. Type object Property Type Description domainName string domainName specifies a domain name for the registry In case the registry use non-standard (80 or 443) port, the port should be included in the domain name as well. insecure boolean insecure indicates whether the registry is secure (https) or insecure (http) By default (if not specified) the registry is assumed as secure. 11.1.5. .spec.registrySources Description registrySources contains configuration that determines how the container runtime should treat individual registries when accessing images for builds+pods. (e.g. whether or not to allow insecure access). It does not contain configuration for the internal cluster registry. Type object Property Type Description allowedRegistries array (string) allowedRegistries are the only registries permitted for image pull and push actions. All other registries are denied. Only one of BlockedRegistries or AllowedRegistries may be set. blockedRegistries array (string) blockedRegistries cannot be used for image pull and push actions. All other registries are permitted. Only one of BlockedRegistries or AllowedRegistries may be set. containerRuntimeSearchRegistries array (string) containerRuntimeSearchRegistries are registries that will be searched when pulling images that do not have fully qualified domains in their pull specs. Registries will be searched in the order provided in the list. Note: this search list only works with the container runtime, i.e CRI-O. Will NOT work with builds or imagestream imports. insecureRegistries array (string) insecureRegistries are registries which do not have a valid TLS certificates or only support HTTP connections. 11.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description externalRegistryHostnames array (string) externalRegistryHostnames provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in 'publicDockerImageRepository' field in ImageStreams. The value must be in "hostname[:port]" format. internalRegistryHostname string internalRegistryHostname sets the hostname for the default internal image registry. The value must be in "hostname[:port]" format. This value is set by the image registry operator which controls the internal registry hostname. For backward compatibility, users can still use OPENSHIFT_DEFAULT_REGISTRY environment variable but this setting overrides the environment variable. 11.2. 
API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/images DELETE : delete collection of Image GET : list objects of kind Image POST : create an Image /apis/config.openshift.io/v1/images/{name} DELETE : delete an Image GET : read the specified Image PATCH : partially update the specified Image PUT : replace the specified Image /apis/config.openshift.io/v1/images/{name}/status GET : read status of the specified Image PATCH : partially update status of the specified Image PUT : replace status of the specified Image 11.2.1. /apis/config.openshift.io/v1/images Table 11.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Image Table 11.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Image Table 11.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.5. HTTP responses HTTP code Reponse body 200 - OK ImageList schema 401 - Unauthorized Empty HTTP method POST Description create an Image Table 11.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.7. Body parameters Parameter Type Description body Image schema Table 11.8. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 202 - Accepted Image schema 401 - Unauthorized Empty 11.2.2. /apis/config.openshift.io/v1/images/{name} Table 11.9. Global path parameters Parameter Type Description name string name of the Image Table 11.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Image Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 11.12. Body parameters Parameter Type Description body DeleteOptions schema Table 11.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Image Table 11.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.15. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Image Table 11.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.17. Body parameters Parameter Type Description body Patch schema Table 11.18. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Image Table 11.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.20. Body parameters Parameter Type Description body Image schema Table 11.21. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty 11.2.3. /apis/config.openshift.io/v1/images/{name}/status Table 11.22. Global path parameters Parameter Type Description name string name of the Image Table 11.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Image Table 11.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.25. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Image Table 11.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.27. Body parameters Parameter Type Description body Patch schema Table 11.28. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Image Table 11.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.30. Body parameters Parameter Type Description body Image schema Table 11.31. HTTP responses HTTP code Response body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty
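To tie the spec and status fields in Section 11.1 together, the following is a minimal sketch of an Image resource. It assumes the cluster-scoped singleton is named cluster, as is conventional for config.openshift.io resources; the registry names and the additionalTrustedCA config map name are placeholders, not values taken from this reference.

apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  additionalTrustedCA:
    name: registry-ca-bundle            # config map that must live in the openshift-config namespace
  allowedRegistriesForImport:
  - domainName: quay.io                 # normal users may import images from this registry
    insecure: false
  registrySources:
    insecureRegistries:                 # registries reachable only over HTTP or with invalid TLS certificates
    - insecure.example.com:5000
    containerRuntimeSearchRegistries:   # searched by CRI-O for pull specs without a fully qualified domain
    - registry.example.com

Because only one of allowedRegistries or blockedRegistries may be set under registrySources, this sketch sets neither; the status fields (internalRegistryHostname, externalRegistryHostnames) are observed values and may not be overridden by hand.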
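As a quick illustration of the endpoints listed in Section 11.2, the following oc commands exercise the read and patch paths against the same singleton resource. This is a sketch rather than a prescribed procedure, and it assumes cluster-admin access and the resource name cluster.

# GET /apis/config.openshift.io/v1/images/{name}
oc get image.config.openshift.io cluster -o yaml

# PATCH /apis/config.openshift.io/v1/images/{name} with a JSON merge patch
oc patch image.config.openshift.io cluster --type=merge -p '{"spec":{"registrySources":{"blockedRegistries":["untrusted.example.com"]}}}'

# Read a single status field set by the image registry operator
oc get image.config.openshift.io cluster -o jsonpath='{.status.internalRegistryHostname}'

The /status subresource endpoints are normally written only by operators. The dryRun=All query parameter shown in the tables can be added to a create, patch, or replace request to validate a change without persisting it.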
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/config_apis/image-config-openshift-io-v1
Chapter 4. Managing namespace buckets
Chapter 4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enable you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli , to verify that all the operations can be performed on the target bucket. Also, listing the buckets of this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PutObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites OpenShift Container Platform with the OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the previous step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets.
The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.3. 
Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. 
<namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that Openshift Container Platform with OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage Object Storage Namespace Store tab. Click Create namespace store to create a namespacestore resources to be used in the namespace bucket. Enter a namespacestore name. Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created all the desired amount of resources. Navigate to Bucket Class tab and click Create Bucket Class . Choose Namespace BucketClass type radio button. Enter a BucketClass name and click . Choose a Namespace Policy Type for your namespace bucket, and then click . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click . Review your new bucket class details, and then click Create Bucket Class . Navigate to Bucket Class tab and verify that your newly created resource is in the Ready phase. 
Navigate to Object Bucket Claims tab and click Create Object Bucket Claim . Enter ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected. Click Create . The namespace bucket is created along with Object Bucket Claim for your namespace. Navigate to Object Bucket Claims tab and verify that the Object Bucket Claim created is in Bound state. Navigate to Object Buckets tab and verify that the your namespace bucket is present in the list and is in Bound state. 4.4. Sharing legacy application data with cloud native application using S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using the S3 operations. To share data you need to do the following: Export the pre-existing file system datasets, that is, RWX volume such as Ceph FileSystem (CephFS) or create a new file system datasets using the S3 protocol. Access file system datasets from both file system and S3 protocol. Configure S3 accounts and map them to the existing or a new file system unique identifiers (UIDs) and group identifiers (GIDs). 4.4.1. Creating a NamespaceStore to use a file system Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage Object Storage . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If the folder name exists, then that folder is used to create the NamespaceStore or else a folder with that name is created. Click Create . Verify the NamespaceStore is in the Ready state. 4.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . allowed_buckets A comma separated list of bucket names to which the user is allowed to have access and management rights. default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). full_permission Indicates whether the account should be allowed full permission or not. Supported values are true or false . Default value is false . new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes. 
nsfs_account_config A mandatory field that indicates if the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false . Default value is false . If it is set to 'true', it limits you from accessing other types of buckets. uid The user ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem gid The group ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 4.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. Find the values of the subvolumePath and volumeHandle from the volumeAttributes . You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 
2 The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC. Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the step: <YAML_file> Specify the name of the YAML file. For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, /nsfs/legacy-namespace mountpoint: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create a MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as that of the legacy application. You can find it from the output. For example: Create a MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same which results in permission denied or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 4.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 4.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 4.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. A relabel of all the files take place and now the SELinux labels match with the openshift-storage deployment. 4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . 
Create a service account: <service_account_name> Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account. This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the security context to use as the SELinux label in the deployment configuration: Add the following lines: <security_context_value> You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod. For example: Ensure that the security context to be used as the SELinux label in the deployment configuration is specified correctly: For example: The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace.
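Closing out this chapter, Section 4.1 notes that you can use aws-cli to verify that the supported S3 operations work against a namespace bucket. The following is a minimal sketch of such a check; the endpoint URL, bucket name, and credential values are placeholders for your MCG S3 endpoint and the MCG account created for the bucket, not values taken from this document.

# Credentials come from the MCG account (for example, the output of noobaa account create)
export AWS_ACCESS_KEY_ID=<mcg-account-access-key>
export AWS_SECRET_ACCESS_KEY=<mcg-account-secret-key>

# ListBuckets: the namespace bucket should appear in the output
aws s3 ls --endpoint-url https://<mcg-s3-endpoint>

# PutObject and GetObject round trip against the namespace bucket
aws s3 cp ./test.txt s3://<namespace-bucket>/test.txt --endpoint-url https://<mcg-s3-endpoint>
aws s3api get-object --bucket <namespace-bucket> --key test.txt --endpoint-url https://<mcg-s3-endpoint> /tmp/test.txt

If the MCG S3 endpoint uses a self-signed certificate, you might need to add --no-verify-ssl or point the CLI at the cluster CA bundle.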
[ "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM 
COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "noobaa account create <noobaa-account-name> [flags]", "noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore", "NooBaaAccount spec: allow_bucket_creation: true Allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>", "noobaa account list NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE testaccount [*] noobaa-default-backing-store Ready 1m17s", "oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001", "oc get ns <application_namespace> -o yaml | grep scc", "oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000", "oc project <application_namespace>", "oc project testnamespace", "oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s", "oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s", "oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}", "oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]", "oc exec -it <pod_name> -- df <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 
1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "oc get pv | grep <pv_name>", "oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s", "oc get pv <pv_name> -o yaml", "oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound", "cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF", "oc create -f <YAML_file>", "oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created", "oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s", "oc project 
openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".", "noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'", "noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'", "oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace", "noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'", "noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'", "oc exec -it <pod_name> -- mkdir <mount_path> /nsfs", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs", "noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'", "noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'", "oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "noobaa bucket delete <bucket_name>", "noobaa bucket delete legacy-bucket", "noobaa account delete <user_account>", "noobaa account delete leguser", "noobaa namespacestore delete <nsfs_namespacestore>", "noobaa namespacestore delete legacy-namespace", "oc delete pv <cephfs_pv_name>", "oc delete pvc <cephfs_pvc_name>", "oc delete pv cephfs-pv-legacy-openshift-storage", "oc delete pvc cephfs-pvc-legacy", "oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "oc edit ns <appplication_namespace>", "oc edit ns testnamespace", "oc get ns <application_namespace> -o yaml | grep sa.scc.mcs", "oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF", "oc create -f scc.yaml", "oc create serviceaccount <service_account_name>", "oc create serviceaccount testnamespacesa", "oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>", "oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa", "oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'", "oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'", "oc edit dc <pod_name> -n <application_namespace>", "spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>", "oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace", "spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0", "oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext", "oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0" ]
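The sa.scc.mcs checks shown in the commands above lend themselves to a small script. The following is a minimal sketch, not part of the original procedure, that compares the SELinux MCS level of an application namespace with the openshift-storage namespace before sharing legacy CephFS data through an NSFS namespace store; the namespace name testnamespace is reused from the example output and is otherwise a placeholder.
#!/usr/bin/env bash
# Sketch: warn when an application namespace and openshift-storage carry
# different SELinux MCS levels, which blocks shared access to CephFS files.
APP_NS="${1:-testnamespace}"      # placeholder application namespace
STORAGE_NS="openshift-storage"
app_mcs=$(oc get ns "$APP_NS" -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.mcs}')
storage_mcs=$(oc get ns "$STORAGE_NS" -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.mcs}')
echo "$APP_NS: $app_mcs"
echo "$STORAGE_NS: $storage_mcs"
if [ "$app_mcs" = "$storage_mcs" ]; then
    echo "MCS levels match; pods in both namespaces can read the shared files."
else
    echo "MCS levels differ; relabel the namespace or set seLinuxOptions as shown in the commands above."
fi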
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_hybrid_and_multicloud_resources/Managing-namespace-buckets_rhodf
Chapter 1. Cluster Observability Operator release notes
Chapter 1. Cluster Observability Operator release notes The Cluster Observability Operator (COO) is an optional OpenShift Container Platform Operator that enables administrators to create standalone monitoring stacks that are independently configurable for use by different services and users. The COO complements the built-in monitoring capabilities of OpenShift Container Platform. You can deploy it in parallel with the default platform and user workload monitoring stacks managed by the Cluster Monitoring Operator (CMO). These release notes track the development of the Cluster Observability Operator in OpenShift Container Platform. 1.1. Cluster Observability Operator 1.0 1.1.1. New features and enhancements COO is now enabled for OpenShift Container Platform platform monitoring. ( COO-476 ) Implements HTTPS support for COO web server. ( COO-480 ) Implements authn/authz for COO web server. ( COO-481 ) Configures ServiceMonitor resource to collect metrics from COO. ( COO-482 ) Adds operatorframework.io/cluster-monitoring=true annotation to the OLM bundle. ( COO-483 ) Defines the alerting strategy for COO . ( COO-484 ) Configures PrometheusRule for alerting. ( COO-485 ) Support level annotations have been added to the UIPlugin CR when created. The support level is based on the plugin type, with values of DevPreview , TechPreview , or GeneralAvailability . ( COO-318 ) You can now configure the Alertmanager scheme and tlsConfig fields in the Prometheus CR. ( COO-219 ) The extended Technical Preview for the troubleshooting panel adds support for correlating traces with Kubernetes resources and directly with other observable signals including logs, alerts, metrics, and network events. ( COO-450 ) You can select a Tempo instance and tenant when you navigate to the tracing page by clicking Observe Tracing in the web console. The preview troubleshooting panel only works with the openshift-tracing / platform instance and the platform tenant. The troubleshooting panel works best in the Administrator perspective. It has limited functionality in the Developer perspective due to authorization issues with some back ends, most notably Prometheus for metrics and alerts. This will be addressed in a future release. The following table provides information about which features are available depending on the version of Cluster Observability Operator and OpenShift Container Platform: COO Version OCP Versions Distributed Tracing Logging Troubleshooting Panel 1.0+ 4.12 - 4.15 ✔ ✔ ✘ 1.0+ 4.16+ ✔ ✔ ✔ 1.1.2. CVEs CVE-2023-26159 CVE-2024-28849 CVE-2024-45338 1.1.3. Bug fixes Previously, the default namespace for the COO installation was openshift-operators . With this release, the default namespace changes to openshift-cluster-observability-operator . ( COO-32 ) Previously, korrel8r was only able to parse time series selector expressions. With this release, korrel8r can parse any valid PromQL expression to extract the time series selectors that it uses for correlation. ( COO-558 ) Previously, when viewing a Tempo instance from the Distributed Tracing UI plugin, the scatter plot graph showing the traces duration was not rendered correctly. The bubble size was too large and overlapped the x and y axis. With this release, the graph is rendered correctly. ( COO-319 ) 1.2.
Features available on older, Technology Preview releases The following table provides information about which features are available depending on older version of Cluster Observability Operator and OpenShift Container Platform: COO Version OCP Versions Dashboards Distributed Tracing Logging Troubleshooting Panel 0.2.0 4.11 ✔ ✘ ✘ ✘ 0.3.0+, 0.4.0+ 4.11 - 4.15 ✔ ✔ ✔ ✘ 0.3.0+, 0.4.0+ 4.16+ ✔ ✔ ✔ ✔ 1.3. Cluster Observability Operator 0.4.1 The following advisory is available for Cluster Observability Operator 0.4.1: RHEA-2024:8040 Cluster Observability Operator 0.4.1 1.3.1. New features and enhancements You can now configure WebTLS for Prometheus and Alertmanager. 1.3.2. CVEs CVE-2024-6104 CVE-2024-24786 1.3.3. Bug fixes Previously, when you deleted the dashboard UI plugin, the consoles.operator.openshift.io resource still contained console-dashboards-plugin . This release resolves the issue. ( COO-152 ) Previously, the web console did not display the correct icon for Red Hat COO . This release resolves the issue. ( COO-353 ) Previously, when you installed the COO from the web console, the support section contained an invalid link. This release resolves the issue. ( COO-354 ) Previously, the cluster service version (CSV) for COO linked to an unofficial version of the documentation. This release resolves the issue. ( COO-356 ) 1.4. Cluster Observability Operator 0.4.0 The following advisory is available for Cluster Observability Operator 0.4.0: RHEA-2024:6699 Cluster Observability Operator 0.4.0 1.4.1. New features and enhancements 1.4.1.1. Troubleshooting UI plugin The troubleshooting UI panel has been improved so you can now select and focus on a specific starting signal. There is more visibility into Korrel8r queries, with the option of selecting the depth. Users of OpenShift Container Platform version 4.17+ can access the troubleshooting UI panel from the Application Launcher . Alternatively, on versions 4.16+, you can access it in the web console by clicking on Observe Alerting . For more information, see troubleshooting UI plugin . 1.4.1.2. Distributed tracing UI plugin The distributed tracing UI plugin has been enhanced, with a Gantt chart now available for exploring traces. For more information, see distributed tracing UI plugin . 1.4.2. Bug fixes Previously, metrics were not available to normal users when accessed in the Developer perspective of the web console, by clicking on Observe Logs . This release resolves the issue. ( COO-288 ) Previously, the troubleshooting UI plugin used the wrong filter for network observability. This release resolves the issue. ( COO-299 ) Previously, the troubleshooting UI plugin generated an incorrect URL for pod label searches. This release resolves the issue. ( COO-298 ) Previously, there was an authorization vulnerability in the Distributed tracing UI plugin. This release resolves the issue and the Distributed tracing UI plugin has been hardened by using only multi-tenant TempoStack and TempoMonolithic instances going forward. 1.5. Cluster Observability Operator 0.3.2 The following advisory is available for Cluster Observability Operator 0.3.2: RHEA-2024:5985 Cluster Observability Operator 0.3.2 1.5.1. New features and enhancements With this release, you can now use tolerations and node selectors with MonitoringStack components. 1.5.2. Bug fixes Previously, the logging UIPlugin was not in the Available state and the logging pod was not created, when installed on a specific version of OpenShift Container Platform. This release resolves the issue. 
( COO-260 ) 1.6. Cluster Observability Operator 0.3.0 The following advisory is available for Cluster Observability Operator 0.3.0: RHEA-2024:4399 Cluster Observability Operator 0.3.0 1.6.1. New features and enhancements With this release, the Cluster Observability Operator adds backend support for future OpenShift Container Platform observability web console UI plugins and observability components. 1.7. Cluster Observability Operator 0.2.0 The following advisory is available for Cluster Observability Operator 0.2.0: RHEA-2024:2662 Cluster Observability Operator 0.2.0 1.7.1. New features and enhancements With this release, the Cluster Observability Operator supports installing and managing observability-related plugins for the OpenShift Container Platform web console user interface (UI). ( COO-58 ) 1.8. Cluster Observability Operator 0.1.3 The following advisory is available for Cluster Observability Operator 0.1.3: RHEA-2024:1744 Cluster Observability Operator 0.1.3 1.8.1. Bug fixes Previously, if you tried to access the Prometheus web user interface (UI) at http://<prometheus_url>:9090/graph , the following error message would display: Error opening React index.html: open web/ui/static/react/index.html: no such file or directory . This release resolves the issue, and the Prometheus web UI now displays correctly. ( COO-34 ) 1.9. Cluster Observability Operator 0.1.2 The following advisory is available for Cluster Observability Operator 0.1.2: RHEA-2024:1534 Cluster Observability Operator 0.1.2 1.9.1. CVEs CVE-2023-45142 1.9.2. Bug fixes Previously, certain cluster service version (CSV) annotations were not included in the metadata for COO. Because of these missing annotations, certain COO features and capabilities did not appear in the package manifest or in the OperatorHub user interface. This release adds the missing annotations, thereby resolving this issue. ( COO-11 ) Previously, automatic updates of the COO did not work, and a newer version of the Operator did not automatically replace the older version, even though the newer version was available in OperatorHub. This release resolves the issue. ( COO-12 ) Previously, Thanos Querier only listened for network traffic on port 9090 of 127.0.0.1 ( localhost ), which resulted in a 502 Bad Gateway error if you tried to reach the Thanos Querier service. With this release, the Thanos Querier configuration has been updated so that the component now listens on the default port (10902), thereby resolving the issue. As a result of this change, you can also now modify the port via server side apply (SSA) and add a proxy chain, if required. ( COO-14 ) 1.10. Cluster Observability Operator 0.1.1 The following advisory is available for Cluster Observability Operator 0.1.1: 2024:0550 Cluster Observability Operator 0.1.1 1.10.1. New features and enhancements This release updates the Cluster Observability Operator to support installing the Operator in restricted networks or disconnected environments. 1.11. Cluster Observability Operator 0.1 This release makes a Technology Preview version of the Cluster Observability Operator available on OperatorHub.
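Several entries in these release notes refer to the UIPlugin custom resource without showing one. For orientation only, a minimal manifest for the logging UI plugin might look like the sketch below; the apiVersion, the plugin type value, and any omitted fields are assumptions and should be checked against the CRD installed by the Operator.
# Minimal UIPlugin sketch (field values are assumptions; verify against the installed CRD).
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging   # the release notes also mention dashboard, distributed tracing, and troubleshooting panel plugins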
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/cluster_observability_operator/cluster-observability-operator-release-notes
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_api_overview/providing-feedback
3.5. Array Operations in SystemTap
3.5. Array Operations in SystemTap This section enumerates some of the most commonly used array operations in SystemTap. 3.5.1. Assigning an Associated Value Use = to set an associated value to indexed unique pairs, as in: array_name [ index_expression ] = value Example 3.11, "Basic Array Statements" shows a very basic example of how to set an explicit associated value to a unique key. You can also use a handler function as both your index_expression and value . For example, you can use arrays to set a timestamp as the associated value to a process name (which you wish to use as your unique key), as in: Example 3.12. Associating Timestamps to Process Names arr[tid()] = gettimeofday_s() Whenever an event invokes the statement in Example 3.12, "Associating Timestamps to Process Names" , SystemTap returns the appropriate tid() value (that is, the ID of a thread, which is then used as the unique key). At the same time, SystemTap also uses the function gettimeofday_s() to set the corresponding timestamp as the associated value to the unique key defined by the function tid() . This creates an array composed of key pairs containing thread IDs and timestamps. In this same example, if tid() returns a value that is already defined in the array arr , the operator will discard the original associated value to it, and replace it with the current timestamp from gettimeofday_s() . 3.5.2. Reading Values From Arrays You can also read values from an array the same way you would read the value of a variable. To do so, include the array_name [ index_expression ] statement as an element in a mathematical expression. For example: Example 3.13. Using Array Values in Simple Computations delta = gettimeofday_s() - arr[tid()] This example assumes that the array arr was built using the construct in Example 3.12, "Associating Timestamps to Process Names" (from Section 3.5.1, "Assigning an Associated Value" ). This sets a timestamp that will serve as a reference point , to be used in computing for delta . The construct in Example 3.13, "Using Array Values in Simple Computations" computes a value for the variable delta by subtracting the associated value of the key tid() from the current gettimeofday_s() . The construct does this by reading the value associated with tid() from the array. This particular construct is useful for determining the time between two events, such as the start and completion of a read operation. Note If the index_expression cannot find the unique key, it returns a value of 0 (for numerical operations, such as Example 3.13, "Using Array Values in Simple Computations" ) or a null (empty) string value (for string operations) by default. 3.5.3. Incrementing Associated Values Use ++ to increment the associated value of a unique key in an array, as in: array_name [ index_expression ] ++ Again, you can also use a handler function for your index_expression . For example, if you wanted to tally how many times a specific process performed a read to the virtual file system (using the vfs.read event), you can use the following probe: Example 3.14. vfsreads.stp probe vfs.read { reads[execname()] ++ } In Example 3.14, "vfsreads.stp" , the first time that the probe returns the process name gnome-terminal (that is, the first time gnome-terminal performs a VFS read), that process name is set as the unique key gnome-terminal with an associated value of 1. The next time that the probe returns the process name gnome-terminal , SystemTap increments the associated value of gnome-terminal by 1.
SystemTap performs this operation for all process names as the probe returns them. 3.5.4. Processing Multiple Elements in an Array Once you have collected enough information in an array, you will need to retrieve and process all elements in that array to make it useful. Consider Example 3.14, "vfsreads.stp" : the script collects information about how many VFS reads each process performs, but does not specify what to do with it. The simplest method of making Example 3.14, "vfsreads.stp" useful is to print the key pairs in the reads array. The best way to process all key pairs in an array (as an iteration) is to use the foreach statement. Consider the following example: Example 3.15. cumulative-vfsreads.stp global reads probe vfs.read { reads[execname()] ++ } probe timer.s(3) { foreach (count in reads) printf("%s : %d \n", count, reads[count]) } In the second probe of Example 3.15, "cumulative-vfsreads.stp" , the foreach statement uses the count variable to reference each iteration of a unique key in the reads array. The reads[count] array statement in the same probe retrieves the associated value of each unique key. Given what we know about the first probe in Example 3.15, "cumulative-vfsreads.stp" , the script prints VFS-read statistics every 3 seconds, displaying names of processes that performed a VFS-read along with a corresponding VFS-read count. Now, remember that the foreach statement in Example 3.15, "cumulative-vfsreads.stp" prints all iterations of process names in the array, and in no particular order. You can instruct the script to process the iterations in a particular order by using + (ascending) or - (descending). In addition, you can also limit the number of iterations the script needs to process with the limit value option. For example, consider the following replacement probe: probe timer.s(3) { foreach (count in reads- limit 10) printf("%s : %d \n", count, reads[count]) } This foreach statement instructs the script to process the elements in the array reads in descending order (of associated value). The limit 10 option instructs the foreach to only process the first ten iterations (that is, print the first 10, starting with the highest value). 3.5.5. Clearing/Deleting Arrays and Array Elements Sometimes, you may need to clear the associated values in array elements, or reset an entire array for re-use in another probe. Example 3.15, "cumulative-vfsreads.stp" in Section 3.5.4, "Processing Multiple Elements in an Array" allows you to track how the number of VFS reads per process grows over time, but it does not show you the number of VFS reads each process makes per 3-second period. To do that, you will need to clear the values accumulated by the array. You can accomplish this using the delete operator to delete elements in an array, or an entire array. Consider the following example: Example 3.16. noncumulative-vfsreads.stp global reads probe vfs.read { reads[execname()] ++ } probe timer.s(3) { foreach (count in reads) printf("%s : %d \n", count, reads[count]) delete reads } In Example 3.16, "noncumulative-vfsreads.stp" , the second probe prints the number of VFS reads each process made within the probed 3-second period only . The delete reads statement clears the reads array within the probe. Note You can have multiple array operations within the same probe. 
Using the examples from Section 3.5.4, "Processing Multiple Elements in an Array" and Section 3.5.5, "Clearing/Deleting Arrays and Array Elements" , you can track the number of VFS reads each process makes per 3-second period and tally the cumulative VFS reads of those same processes. Consider the following example: global reads, totalreads probe vfs.read { reads[execname()] ++ totalreads[execname()] ++ } probe timer.s(3) { printf("=======\n") foreach (count in reads-) printf("%s : %d \n", count, reads[count]) delete reads } probe end { printf("TOTALS\n") foreach (total in totalreads-) printf("%s : %d \n", total, totalreads[total]) } In this example, the arrays reads and totalreads track the same information, and are printed out in a similar fashion. The only difference here is that reads is cleared every 3-second period, whereas totalreads keeps growing. 3.5.6. Using Arrays in Conditional Statements You can also use associative arrays in if statements. This is useful if you want to execute a subroutine once a value in the array matches a certain condition. Consider the following example: Example 3.17. vfsreads-print-if-1kb.stp global reads probe vfs.read { reads[execname()] ++ } probe timer.s(3) { printf("=======\n") foreach (count in reads-) if (reads[count] >= 1024) printf("%s : %dkB \n", count, reads[count]/1024) else printf("%s : %dB \n", count, reads[count]) } Every three seconds, Example 3.17, "vfsreads-print-if-1kb.stp" prints out a list of all processes, along with how many times each process performed a VFS read. If the associated value of a process name is equal or greater than 1024, the if statement in the script converts and prints it out in kB . Testing for Membership You can also test whether a specific unique key is a member of an array. Further, membership in an array can be used in if statements, as in: if([ index_expression ] in array_name ) statement To illustrate this, consider the following example: Example 3.18. vfsreads-stop-on-stapio2.stp global reads probe vfs.read { reads[execname()] ++ } probe timer.s(3) { printf("=======\n") foreach (count in reads+) printf("%s : %d \n", count, reads[count]) if(["stapio"] in reads) { printf("stapio read detected, exiting\n") exit() } } The if(["stapio"] in reads) statement instructs the script to print stapio read detected, exiting once the unique key stapio is added to the array reads . 3.5.7. Computing for Statistical Aggregates Statistical aggregates are used to collect statistics on numerical values where it is important to accumulate new data quickly and in large volume (storing only aggregated stream statistics). Statistical aggregates can be used in global variables or as elements in an array. To add value to a statistical aggregate, use the operator <<< value . Example 3.19. stat-aggregates.stp global reads probe vfs.read { reads[execname()] <<< count } In Example 3.19, "stat-aggregates.stp" , the operator <<< count stores the amount returned by count to the associated value of the corresponding execname() in the reads array. Remember, these values are stored ; they are not added to the associated values of each unique key, nor are they used to replace the current associated values. In a manner of speaking, think of it as having each unique key ( execname() ) having multiple associated values, accumulating with each probe handler run. Note In the context of Example 3.19, "stat-aggregates.stp" , count returns the amount of data written by the returned execname() to the virtual file system. 
To extract data collected by statistical aggregates, use the syntax format @ extractor ( variable/array index expression ) . extractor can be any of the following integer extractors: count Returns the number of all values stored into the variable/array index expression. Given the sample probe in Example 3.19, "stat-aggregates.stp" , the expression @count(reads[execname()]) will return how many values are stored in each unique key in array reads . sum Returns the sum of all values stored into the variable/array index expression. Again, given the sample probe in Example 3.19, "stat-aggregates.stp" , the expression @sum(reads[execname()]) will return the total of all values stored in each unique key in array reads . min Returns the smallest among all the values stored in the variable/array index expression. max Returns the largest among all the values stored in the variable/array index expression. avg Returns the average of all values stored in the variable/array index expression. When using statistical aggregates, you can also build array constructs that use multiple index expressions (to a maximum of 5). This is helpful in capturing additional contextual information during a probe. For example: Example 3.20. Multiple Array Indexes global reads probe vfs.read { reads[execname(),pid()] <<< 1 } probe timer.s(3) { foreach([var1,var2] in reads) printf("%s (%d) : %d \n", var1, var2, @count(reads[var1,var2])) } In Example 3.20, "Multiple Array Indexes" , the first probe tracks how many times each process performs a VFS read. What makes this different from earlier examples is that this array associates a performed read to both a process name and its corresponding process ID. The second probe in Example 3.20, "Multiple Array Indexes" demonstrates how to process and print the information collected by the array reads . Note how the foreach statement uses the same number of variables ( var1 and var2 ) contained in the first instance of the array reads from the first probe.
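To tie the extractors above back to Example 3.19, "stat-aggregates.stp" , the following short script (a sketch added for illustration, not part of the original guide) prints all five documented integer extractors per process every 3 seconds:
global reads
probe vfs.read { reads[execname()] <<< count }
probe timer.s(3)
{
  # For each process name, report the aggregate statistics collected so far.
  foreach (proc in reads)
    printf("%s: count=%d sum=%d min=%d max=%d avg=%d\n", proc,
           @count(reads[proc]), @sum(reads[proc]), @min(reads[proc]),
           @max(reads[proc]), @avg(reads[proc]))
  delete reads
}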
[ "array_name [ index_expression ] = value", "arr[tid()] = gettimeofday_s()", "delta = gettimeofday_s() - arr[tid()]", "array_name [ index_expression ] ++", "probe vfs.read { reads[execname()] ++ }", "global reads probe vfs.read { reads[execname()] ++ } probe timer.s(3) { foreach (count in reads) printf(\"%s : %d \\n\", count, reads[count]) }", "probe timer.s(3) { foreach (count in reads- limit 10) printf(\"%s : %d \\n\", count, reads[count]) }", "global reads probe vfs.read { reads[execname()] ++ } probe timer.s(3) { foreach (count in reads) printf(\"%s : %d \\n\", count, reads[count]) delete reads }", "global reads, totalreads probe vfs.read { reads[execname()] ++ totalreads[execname()] ++ } probe timer.s(3) { printf(\"=======\\n\") foreach (count in reads-) printf(\"%s : %d \\n\", count, reads[count]) delete reads } probe end { printf(\"TOTALS\\n\") foreach (total in totalreads-) printf(\"%s : %d \\n\", total, totalreads[total]) }", "global reads probe vfs.read { reads[execname()] ++ } probe timer.s(3) { printf(\"=======\\n\") foreach (count in reads-) if (reads[count] >= 1024) printf(\"%s : %dkB \\n\", count, reads[count]/1024) else printf(\"%s : %dB \\n\", count, reads[count]) }", "if([ index_expression ] in array_name ) statement", "global reads probe vfs.read { reads[execname()] ++ } probe timer.s(3) { printf(\"=======\\n\") foreach (count in reads+) printf(\"%s : %d \\n\", count, reads[count]) if([\"stapio\"] in reads) { printf(\"stapio read detected, exiting\\n\") exit() } }", "global reads probe vfs.read { reads[execname()] <<< count }", "global reads probe vfs.read { reads[execname(),pid()] <<< 1 } probe timer.s(3) { foreach([var1,var2] in reads) printf(\"%s (%d) : %d \\n\", var1, var2, @count(reads[var1,var2])) }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/arrayoperators
Chapter 24. Queue Statistics Tapset
Chapter 24. Queue Statistics Tapset This family of functions is used to track performance of queuing systems.
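As a usage sketch only (not part of this chapter), the queue statistics functions are normally paired with probes that mark when work enters a queue, starts being serviced, and completes. The mapping below onto the I/O scheduler probes, the queue name "blkio" , and the reporting interval are assumptions:
# Sketch: track block I/O queueing with the queue_stats tapset functions.
probe begin { qsq_start("blkio") }                            # initialize the tracked queue
probe ioscheduler.elv_add_request { qs_wait("blkio") }        # a request was enqueued
probe ioscheduler.elv_next_request { qs_run("blkio") }        # a request was picked up for service
probe ioscheduler.elv_completed_request { qs_done("blkio") }  # a request completed
probe timer.s(10) { qsq_print("blkio"); qsq_start("blkio") }  # report, then reset the counters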
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/queue_stats-dot-stp
probe::scsi.iocompleted
probe::scsi.iocompleted Name probe::scsi.iocompleted - SCSI mid-layer running the completion processing for block device I/O requests Synopsis Values device_state_str The current state of the device, as a string dev_id The scsi device id channel The channel number data_direction The data_direction specifies whether this command is from/to the device lun The lun number host_no The host number data_direction_str Data direction, as a string device_state The current state of the device req_addr The current struct request pointer, as a number goodbytes The bytes completed
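A brief usage sketch (not part of the reference entry) that prints the values listed above for every completed request; the format string is illustrative only:
probe scsi.iocompleted
{
  # Print device coordinates, data direction, device state, and completed bytes.
  printf("%d:%d:%d:%d dir=%s state=%s goodbytes=%d\n",
         host_no, channel, dev_id, lun,
         data_direction_str, device_state_str, goodbytes)
}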
[ "scsi.iocompleted" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-scsi-iocompleted
Chapter 75. Executing a test scenario using the KIE Server REST API
Chapter 75. Executing a test scenario using the KIE Server REST API Directly interacting with the REST endpoints of KIE Server provides the most separation between the calling code and the decision logic definition. You can use the KIE Server REST API to execute the test scenarios externally. It executes the test scenarios against the deployed project. Note This functionality is disabled by default, use org.kie.scenariosimulation.server.ext.disabled system property to enable it. For more information about the KIE Server REST API, see Interacting with Red Hat Process Automation Manager using KIE APIs . Prerequisites KIE Server is installed and configured, including a known user name and credentials for a user with the kie-server role. For installation options, see Planning a Red Hat Process Automation Manager installation . You have built the project as a KJAR artifact and deployed it to KIE Server. You have the ID of the KIE container. Procedure Determine the base URL for accessing the KIE Server REST API endpoints. This requires knowing the following values (with the default local deployment values as an example): Host ( localhost ) Port ( 8080 ) Root context ( kie-server ) Base REST path ( services/rest/ ) Example base URL in local deployment for the traffic violations project: http://localhost:8080/kie-server/services/rest/server/containers/traffic_1.0.0-SNAPSHOT Determine user authentication requirements. When users are defined directly in the KIE Server configuration, HTTP Basic authentication is used and requires the user name and password. Successful requests require that the user have the kie-server role. The following example demonstrates how to add credentials to a curl request: If KIE Server is configured with Red Hat Single Sign-On, the request must include a bearer token: curl -H "Authorization: bearer USDTOKEN" <request> Specify the format of the request and response. 
The REST API endpoints work with XML format and are set using request headers: XML Execute the test scenario: [POST] server/containers/{containerId}/scesim Example curl request: Example XML request: <ScenarioSimulationModel version="1.8"> <simulation> <scesimModelDescriptor> <factMappings> <FactMapping> <expressionElements/> <expressionIdentifier> <name>Index</name> <type>OTHER</type> </expressionIdentifier> <factIdentifier> <name>#</name> <className>java.lang.Integer</className> </factIdentifier> <className>java.lang.Integer</className> <factAlias>#</factAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>70.0</columnWidth> </FactMapping> <FactMapping> <expressionElements/> <expressionIdentifier> <name>Description</name> <type>OTHER</type> </expressionIdentifier> <factIdentifier> <name>Scenario description</name> <className>java.lang.String</className> </factIdentifier> <className>java.lang.String</className> <factAlias>Scenario description</factAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>300.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Driver</step> </ExpressionElement> <ExpressionElement> <step>Points</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|1</name> <type>GIVEN</type> </expressionIdentifier> <factIdentifier> <name>Driver</name> <className>Driver</className> </factIdentifier> <className>number</className> <factAlias>Driver</factAlias> <expressionAlias>Points</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Violation</step> </ExpressionElement> <ExpressionElement> <step>Type</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|6</name> <type>GIVEN</type> </expressionIdentifier> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <className>Type</className> <factAlias>Violation</factAlias> <expressionAlias>Type</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Violation</step> </ExpressionElement> <ExpressionElement> <step>Speed Limit</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|7</name> <type>GIVEN</type> </expressionIdentifier> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <className>number</className> <factAlias>Violation</factAlias> <expressionAlias>Speed Limit</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Violation</step> </ExpressionElement> <ExpressionElement> <step>Actual Speed</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|8</name> <type>GIVEN</type> </expressionIdentifier> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <className>number</className> <factAlias>Violation</factAlias> <expressionAlias>Actual Speed</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Fine</step> </ExpressionElement> <ExpressionElement> <step>Points</step> </ExpressionElement> 
</expressionElements> <expressionIdentifier> <name>0|11</name> <type>EXPECT</type> </expressionIdentifier> <factIdentifier> <name>Fine</name> <className>Fine</className> </factIdentifier> <className>number</className> <factAlias>Fine</factAlias> <expressionAlias>Points</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Fine</step> </ExpressionElement> <ExpressionElement> <step>Amount</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|12</name> <type>EXPECT</type> </expressionIdentifier> <factIdentifier> <name>Fine</name> <className>Fine</className> </factIdentifier> <className>number</className> <factAlias>Fine</factAlias> <expressionAlias>Amount</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Should the driver be suspended?</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|13</name> <type>EXPECT</type> </expressionIdentifier> <factIdentifier> <name>Should the driver be suspended?</name> <className>Should the driver be suspended?</className> </factIdentifier> <className>string</className> <factAlias>Should the driver be suspended?</factAlias> <expressionAlias>value</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> </factMappings> </scesimModelDescriptor> <scesimData> <Scenario> <factMappingValues> <FactMappingValue> <factIdentifier> <name>Scenario description</name> <className>java.lang.String</className> </factIdentifier> <expressionIdentifier> <name>Description</name> <type>OTHER</type> </expressionIdentifier> <rawValue class="string">Above speed limit: 10km/h and 30 km/h</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Driver</name> <className>Driver</className> </factIdentifier> <expressionIdentifier> <name>0|1</name> <type>GIVEN</type> </expressionIdentifier> <rawValue class="string">10</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <expressionIdentifier> <name>0|6</name> <type>GIVEN</type> </expressionIdentifier> <rawValue class="string">&quot;speed&quot;</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <expressionIdentifier> <name>0|7</name> <type>GIVEN</type> </expressionIdentifier> <rawValue class="string">100</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <expressionIdentifier> <name>0|8</name> <type>GIVEN</type> </expressionIdentifier> <rawValue class="string">120</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Fine</name> <className>Fine</className> </factIdentifier> <expressionIdentifier> <name>0|11</name> <type>EXPECT</type> </expressionIdentifier> <rawValue class="string">3</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Fine</name> <className>Fine</className> </factIdentifier> <expressionIdentifier> <name>0|12</name> <type>EXPECT</type> </expressionIdentifier> <rawValue class="string">500</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Should the driver be suspended?</name> <className>Should the 
driver be suspended?</className> </factIdentifier> <expressionIdentifier> <name>0|13</name> <type>EXPECT</type> </expressionIdentifier> <rawValue class="string">&quot;No&quot;</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>#</name> <className>java.lang.Integer</className> </factIdentifier> <expressionIdentifier> <name>Index</name> <type>OTHER</type> </expressionIdentifier> <rawValue class="string">1</rawValue> </FactMappingValue> </factMappingValues> </Scenario> </scesimData> </simulation> <background> <scesimModelDescriptor> <factMappings> <FactMapping> <expressionElements/> <expressionIdentifier> <name>1|1</name> <type>GIVEN</type> </expressionIdentifier> <factIdentifier> <name>Empty</name> <className>java.lang.Void</className> </factIdentifier> <className>java.lang.Void</className> <factAlias>Instance 1</factAlias> <expressionAlias>PROPERTY 1</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> </factMappings> </scesimModelDescriptor> <scesimData> <BackgroundData> <factMappingValues> <FactMappingValue> <factIdentifier> <name>Empty</name> <className>java.lang.Void</className> </factIdentifier> <expressionIdentifier> <name>1|1</name> <type>GIVEN</type> </expressionIdentifier> </FactMappingValue> </factMappingValues> </BackgroundData> </scesimData> </background> <settings> <dmnFilePath>src/main/resources/org/kie/example/traffic/traffic_violation/Traffic Violation.dmn</dmnFilePath> <type>DMN</type> <fileName></fileName> <dmnNamespace>https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF</dmnNamespace> <dmnName>Traffic Violation</dmnName> <skipFromBuild>false</skipFromBuild> <stateless>false</stateless> </settings> <imports> <imports/> </imports> </ScenarioSimulationModel> Example XML response: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <response type="SUCCESS" msg="Test Scenario successfully executed"> <scenario-simulation-result> <run-count>5</run-count> <ignore-count>0</ignore-count> <run-time>31</run-time> </scenario-simulation-result> </response>
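The note at the beginning of this chapter states that the scenario-execution endpoint is disabled by default and controlled by the org.kie.scenariosimulation.server.ext.disabled system property. As a hedged example, enabling it when starting KIE Server on Red Hat JBoss EAP might look like the following; the property value of false, the flag placement, and the <EAP_HOME> placeholder are assumptions to verify for your deployment:
# Assumption: setting the "disabled" property to false enables the extension.
<EAP_HOME>/bin/standalone.sh -Dorg.kie.scenariosimulation.server.ext.disabled=false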
[ "curl -u username:password <request>", "curl -H \"Authorization: bearer USDTOKEN\" <request>", "curl -H \"accept: application/xml\" -H \"content-type: application/xml\"", "curl -X POST \"http://localhost:8080/kie-server/services/rest/server/containers/traffic_1.0.0-SNAPSHOT/scesim\"\\ -u 'wbadmin:wbadmin;' \\ -H \"accept: application/xml\" -H \"content-type: application/xml\"\\ -d @Violation.scesim", "<ScenarioSimulationModel version=\"1.8\"> <simulation> <scesimModelDescriptor> <factMappings> <FactMapping> <expressionElements/> <expressionIdentifier> <name>Index</name> <type>OTHER</type> </expressionIdentifier> <factIdentifier> <name>#</name> <className>java.lang.Integer</className> </factIdentifier> <className>java.lang.Integer</className> <factAlias>#</factAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>70.0</columnWidth> </FactMapping> <FactMapping> <expressionElements/> <expressionIdentifier> <name>Description</name> <type>OTHER</type> </expressionIdentifier> <factIdentifier> <name>Scenario description</name> <className>java.lang.String</className> </factIdentifier> <className>java.lang.String</className> <factAlias>Scenario description</factAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>300.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Driver</step> </ExpressionElement> <ExpressionElement> <step>Points</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|1</name> <type>GIVEN</type> </expressionIdentifier> <factIdentifier> <name>Driver</name> <className>Driver</className> </factIdentifier> <className>number</className> <factAlias>Driver</factAlias> <expressionAlias>Points</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Violation</step> </ExpressionElement> <ExpressionElement> <step>Type</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|6</name> <type>GIVEN</type> </expressionIdentifier> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <className>Type</className> <factAlias>Violation</factAlias> <expressionAlias>Type</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Violation</step> </ExpressionElement> <ExpressionElement> <step>Speed Limit</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|7</name> <type>GIVEN</type> </expressionIdentifier> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <className>number</className> <factAlias>Violation</factAlias> <expressionAlias>Speed Limit</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Violation</step> </ExpressionElement> <ExpressionElement> <step>Actual Speed</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|8</name> <type>GIVEN</type> </expressionIdentifier> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <className>number</className> <factAlias>Violation</factAlias> <expressionAlias>Actual Speed</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> 
<columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Fine</step> </ExpressionElement> <ExpressionElement> <step>Points</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|11</name> <type>EXPECT</type> </expressionIdentifier> <factIdentifier> <name>Fine</name> <className>Fine</className> </factIdentifier> <className>number</className> <factAlias>Fine</factAlias> <expressionAlias>Points</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Fine</step> </ExpressionElement> <ExpressionElement> <step>Amount</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|12</name> <type>EXPECT</type> </expressionIdentifier> <factIdentifier> <name>Fine</name> <className>Fine</className> </factIdentifier> <className>number</className> <factAlias>Fine</factAlias> <expressionAlias>Amount</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> <FactMapping> <expressionElements> <ExpressionElement> <step>Should the driver be suspended?</step> </ExpressionElement> </expressionElements> <expressionIdentifier> <name>0|13</name> <type>EXPECT</type> </expressionIdentifier> <factIdentifier> <name>Should the driver be suspended?</name> <className>Should the driver be suspended?</className> </factIdentifier> <className>string</className> <factAlias>Should the driver be suspended?</factAlias> <expressionAlias>value</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> </factMappings> </scesimModelDescriptor> <scesimData> <Scenario> <factMappingValues> <FactMappingValue> <factIdentifier> <name>Scenario description</name> <className>java.lang.String</className> </factIdentifier> <expressionIdentifier> <name>Description</name> <type>OTHER</type> </expressionIdentifier> <rawValue class=\"string\">Above speed limit: 10km/h and 30 km/h</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Driver</name> <className>Driver</className> </factIdentifier> <expressionIdentifier> <name>0|1</name> <type>GIVEN</type> </expressionIdentifier> <rawValue class=\"string\">10</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <expressionIdentifier> <name>0|6</name> <type>GIVEN</type> </expressionIdentifier> <rawValue class=\"string\">&quot;speed&quot;</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <expressionIdentifier> <name>0|7</name> <type>GIVEN</type> </expressionIdentifier> <rawValue class=\"string\">100</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Violation</name> <className>Violation</className> </factIdentifier> <expressionIdentifier> <name>0|8</name> <type>GIVEN</type> </expressionIdentifier> <rawValue class=\"string\">120</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Fine</name> <className>Fine</className> </factIdentifier> <expressionIdentifier> <name>0|11</name> <type>EXPECT</type> </expressionIdentifier> <rawValue class=\"string\">3</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Fine</name> <className>Fine</className> </factIdentifier> <expressionIdentifier> 
<name>0|12</name> <type>EXPECT</type> </expressionIdentifier> <rawValue class=\"string\">500</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>Should the driver be suspended?</name> <className>Should the driver be suspended?</className> </factIdentifier> <expressionIdentifier> <name>0|13</name> <type>EXPECT</type> </expressionIdentifier> <rawValue class=\"string\">&quot;No&quot;</rawValue> </FactMappingValue> <FactMappingValue> <factIdentifier> <name>#</name> <className>java.lang.Integer</className> </factIdentifier> <expressionIdentifier> <name>Index</name> <type>OTHER</type> </expressionIdentifier> <rawValue class=\"string\">1</rawValue> </FactMappingValue> </factMappingValues> </Scenario> </scesimData> </simulation> <background> <scesimModelDescriptor> <factMappings> <FactMapping> <expressionElements/> <expressionIdentifier> <name>1|1</name> <type>GIVEN</type> </expressionIdentifier> <factIdentifier> <name>Empty</name> <className>java.lang.Void</className> </factIdentifier> <className>java.lang.Void</className> <factAlias>Instance 1</factAlias> <expressionAlias>PROPERTY 1</expressionAlias> <factMappingValueType>NOT_EXPRESSION</factMappingValueType> <columnWidth>114.0</columnWidth> </FactMapping> </factMappings> </scesimModelDescriptor> <scesimData> <BackgroundData> <factMappingValues> <FactMappingValue> <factIdentifier> <name>Empty</name> <className>java.lang.Void</className> </factIdentifier> <expressionIdentifier> <name>1|1</name> <type>GIVEN</type> </expressionIdentifier> </FactMappingValue> </factMappingValues> </BackgroundData> </scesimData> </background> <settings> <dmnFilePath>src/main/resources/org/kie/example/traffic/traffic_violation/Traffic Violation.dmn</dmnFilePath> <type>DMN</type> <fileName></fileName> <dmnNamespace>https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF</dmnNamespace> <dmnName>Traffic Violation</dmnName> <skipFromBuild>false</skipFromBuild> <stateless>false</stateless> </settings> <imports> <imports/> </imports> </ScenarioSimulationModel>", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <response type=\"SUCCESS\" msg=\"Test Scenario successfully executed\"> <scenario-simulation-result> <run-count>5</run-count> <ignore-count>0</ignore-count> <run-time>31</run-time> </scenario-simulation-result> </response>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/test-scenarios-execution-rest-api-proc
Appendix G. System Accounts
Appendix G. System Accounts G.1. System Accounts G.1.1. Red Hat Virtualization Manager User Accounts A number of system user accounts are created to support Red Hat Virtualization when the rhevm package is installed. Each system user has a default user identifier (UID). The system user accounts created are: The vdsm user (UID 36 ). Required for support tools that mount and access NFS storage domains. The ovirt user (UID 108 ). Owner of the ovirt-engine Red Hat JBoss Enterprise Application Platform instance. The ovirt-vmconsole user (UID 498 ). Required for the guest serial console. G.1.2. Red Hat Virtualization Manager Groups A number of system user groups are created to support Red Hat Virtualization when the rhevm package is installed. Each system user group has a default group identifier (GID). The system user groups created are: The kvm group (GID 36 ). Group members include: The vdsm user. The ovirt group (GID 108 ). Group members include: The ovirt user. The ovirt-vmconsole group (GID 498 ). Group members include: The ovirt-vmconsole user. G.1.3. Virtualization Host User Accounts A number of system user accounts are created on the virtualization host when the vdsm and qemu-kvm-rhev packages are installed. Each system user has a default user identifier (UID). The system user accounts created are: The vdsm user (UID 36 ). The qemu user (UID 107 ). The sanlock user (UID 179 ). The ovirt-vmconsole user (UID 498 ). Important The user identifiers (UIDs) and group identifiers (GIDs) allocated may vary between systems. The vdsm user is fixed to a UID of 36 and the kvm group is fixed to a GID of 36 . If UID 36 or GID 36 is already used by another account on the system a conflict will arise during installation of the vdsm and qemu-kvm-rhev packages. G.1.4. Virtualization Host Groups A number of system user groups are created on the virtualization host when the vdsm and qemu-kvm-rhev packages are installed. Each system user group has a default group identifier (GID). The system user groups created are: The kvm group (GID 36 ). Group members include: The qemu user. The sanlock user. The qemu group (GID 107 ). Group members include: The vdsm user. The sanlock user. The ovirt-vmconsole group (GID 498 ). Group members include: The ovirt-vmconsole user. Important The user identifiers (UIDs) and group identifiers (GIDs) allocated may vary between systems. The vdsm user is fixed to a UID of 36 and the kvm group is fixed to a GID of 36 . If UID 36 or GID 36 is already used by another account on the system a conflict will arise during installation of the vdsm and qemu-kvm-rhev packages.
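Because the vdsm user and kvm group are fixed to UID 36 and GID 36, a quick pre-installation check (a sketch, not part of this appendix) can confirm whether those identifiers are free or already owned by the expected accounts:
# Report which account, if any, currently owns UID 36 and GID 36.
getent passwd 36 || echo "UID 36 is unused"
getent group 36 || echo "GID 36 is unused"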
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/appe-system_accounts
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/release_notes_for_red_hat_build_of_quarkus_3.15/making-open-source-more-inclusive
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.5/providing-direct-documentation-feedback_openjdk
Chapter 2. Understanding build configurations
Chapter 2. Understanding build configurations The following sections define the concept of a build, build configuration, and outline the primary build strategies available. 2.1. BuildConfigs A build configuration describes a single build definition and a set of triggers for when a new build is created. Build configurations are defined by a BuildConfig , which is a REST object that can be used in a POST to the API server to create a new instance. A build configuration, or BuildConfig , is characterized by a build strategy and one or more sources. The strategy determines the process, while the sources provide its input. Depending on how you choose to create your application using OpenShift Container Platform, a BuildConfig is typically generated automatically for you if you use the web console or CLI, and it can be edited at any time. Understanding the parts that make up a BuildConfig and their available options can help if you choose to manually change your configuration later. The following example BuildConfig results in a new build every time a container image tag or the source code changes: BuildConfig object definition kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: "ruby-sample-build" 1 spec: runPolicy: "Serial" 2 triggers: 3 - type: "GitHub" github: secret: "secret101" - type: "Generic" generic: secret: "secret101" - type: "ImageChange" source: 4 git: uri: "https://github.com/openshift/ruby-hello-world" strategy: 5 sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" output: 6 to: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" postCommit: 7 script: "bundle exec rake test" 1 This specification creates a new BuildConfig named ruby-sample-build . 2 The runPolicy field controls whether builds created from this build configuration can be run simultaneously. The default value is Serial , which means new builds run sequentially, not simultaneously. 3 You can specify a list of triggers, which cause a new build to be created. 4 The source section defines the source of the build. The source type determines the primary source of input, and can be either Git , to point to a code repository location, Dockerfile , to build from an inline Dockerfile, or Binary , to accept binary payloads. It is possible to have multiple sources at once. See the documentation for each source type for details. 5 The strategy section describes the build strategy used to execute the build. You can specify a Source , Docker , or Custom strategy here. This example uses the ruby-20-centos7 container image that Source-to-image (S2I) uses for the application build. 6 After the container image is successfully built, it is pushed into the repository described in the output section. 7 The postCommit section defines an optional build hook.
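As a usage sketch (assuming the example above is saved as ruby-sample-build.yaml, a hypothetical file name, and that the oc CLI is logged in to the cluster), a BuildConfig is typically created and exercised like this:
# Create the BuildConfig, trigger a build manually, and watch the result.
oc create -f ruby-sample-build.yaml
oc start-build ruby-sample-build --follow
oc get builds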
[ "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/builds/understanding-buildconfigs
probe::stap.pass6.end
probe::stap.pass6.end Name probe::stap.pass6.end - Finished stap pass6 (cleanup) Synopsis stap.pass6.end Values session the systemtap_session variable s Description pass6.end fires just before main's return.
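As a rough sketch of how this probe point could be exercised (assuming SystemTap is installed and that a second stap invocation runs on the same host while the script is active, because the stap.* probes instrument the stap translator process itself):
stap -e 'probe stap.pass6.end { println("a stap run just finished pass 6 (cleanup)") }'
# in another terminal, run any other stap command, for example:
stap -e 'probe begin { println("hello"); exit() }'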
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-stap-pass6-end
Chapter 3. Installing the Red Hat Virtualization Manager
Chapter 3. Installing the Red Hat Virtualization Manager Installing the Red Hat Virtualization Manager involves the following steps: Preparing the Red Hat Virtualization Manager Machine Enabling the Red Hat Virtualization Manager Repositories Installing and Configuring the Red Hat Virtualization Manager Connecting to the Administration Portal 3.1. Preparing the Red Hat Virtualization Manager Machine The Red Hat Virtualization Manager must run on Red Hat Enterprise Linux 7. For detailed instructions on installing Red Hat Enterprise Linux, see the Red Hat Enterprise Linux 7 Installation Guide . This machine must meet the minimum Manager hardware requirements . To install the Red Hat Virtualization Manager on a system that does not have access to the Content Delivery Network, see Appendix A, Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation before configuring the Manager. By default, the Red Hat Virtualization Manager's configuration script, engine-setup , creates and configures the Manager database and Data Warehouse database automatically on the Manager machine. To set up either database, or both, manually, see Appendix B, Preparing a Local Manually Configured PostgreSQL Database before configuring the Manager. 3.2. Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: 3.3. Installing and Configuring the Red Hat Virtualization Manager Install the package and dependencies for the Red Hat Virtualization Manager, and configure it using the engine-setup command. The script asks you a series of questions and, after you provide the required values for all questions, applies that configuration and starts the ovirt-engine service. Important The engine-setup command guides you through several distinct configuration stages, each comprising several steps that require user input. Suggested configuration defaults are provided in square brackets; if the suggested value is acceptable for a given step, press Enter to accept that value. You can run engine-setup --accept-defaults to automatically accept all questions that have default answers. This option should be used with caution and only if you are familiar with engine-setup . Procedure Ensure all packages are up to date: Note Reboot the machine if any kernel-related packages were updated. Install the rhvm package and dependencies. Run the engine-setup command to begin configuring the Red Hat Virtualization Manager: Press Enter to configure the Manager on this machine: Optionally install Open Virtual Network (OVN). Selecting Yes will install an OVN central server on the Manager machine, and add it to Red Hat Virtualization as an external network provider. The default cluster will use OVN as its default network provider, and hosts added to the default cluster will automatically be configured to communicate with OVN. 
For more information on using OVN networks in Red Hat Virtualization, see Adding Open Virtual Network (OVN) as an External Network Provider in the Administration Guide . Optionally allow engine-setup to configure the Image I/O Proxy ( ovirt-imageio-proxy ) to allow the Manager to upload virtual disks into storage domains. Optionally allow engine-setup to configure a websocket proxy server for allowing users to connect to virtual machines through the noVNC console: Important The websocket proxy and noVNC are Technology Preview features only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information see Red Hat Technology Preview Features Support Scope . Choose whether to configure Data Warehouse on this machine. Optionally allow access to a virtual machines's serial console from the command line. Additional configuration is required on the client machine to use this feature. See Opening a Serial Console to a Virtual Machine in the Virtual Machine Management Guide . Press Enter to accept the automatically detected host name, or enter an alternative host name and press Enter . Note that the automatically detected host name may be incorrect if you are using virtual hosts. The engine-setup command checks your firewall configuration and offers to open the ports used by the Manager for external communication, such as ports 80 and 443. If you do not allow engine-setup to modify your firewall configuration, you must manually open the ports used by the Manager. firewalld is configured as the firewall manager; iptables is deprecated. If you choose to automatically configure the firewall, and no firewall managers are active, you are prompted to select your chosen firewall manager from a list of supported options. Type the name of the firewall manager and press Enter . This applies even in cases where only one option is listed. Specify whether to configure the Data Warehouse database on this machine, or on another machine: If you select Local , the engine-setup script can configure your database automatically (including adding a user and a database), or it can connect to a preconfigured local database: If you select Automatic by pressing Enter , no further action is required here. If you select Manual , input the following values for the manually configured local database: Note engine-setup requests these values after the Manager database is configured in the step. If you select Remote (for example, if you are installing the Data Warehouse service on the Manager machine, but have configured a remote Data Warehouse database), input the following values for the remote database server: Note engine-setup requests these values after the Manager database is configured in the step. Specify whether to configure the Manager database on this machine, or on another machine: If you select Local , the engine-setup command can configure your database automatically (including adding a user and a database), or it can connect to a preconfigured local database: If you select Automatic by pressing Enter , no further action is required here. 
If you select Manual , input the following values for the manually configured local database: Set a password for the automatically created administrative user of the Red Hat Virtualization Manager: Select Gluster , Virt , or Both : Both offers the greatest flexibility. In most cases, select Both . Virt allows you to run virtual machines in the environment; Gluster only allows you to manage GlusterFS from the Administration Portal. If you installed the OVN provider, you can choose to use the default credentials, or specify an alternative. Set the default value for the wipe_after_delete flag, which wipes the blocks of a virtual disk when the disk is deleted. The Manager uses certificates to communicate securely with its hosts. This certificate can also optionally be used to secure HTTPS communications with the Manager. Provide the organization name for the certificate: Optionally allow engine-setup to make the landing page of the Manager the default page presented by the Apache web server: By default, external SSL (HTTPS) communication with the Manager is secured with the self-signed certificate created earlier in the configuration to securely communicate with hosts. Alternatively, choose another certificate for external HTTPS connections; this does not affect how the Manager communicates with hosts: Choose how long Data Warehouse will retain collected data: Full uses the default values for the data storage settings listed in the Data Warehouse Guide (recommended when Data Warehouse is installed on a remote server). Basic reduces the values of DWH_TABLES_KEEP_HOURLY to 720 and DWH_TABLES_KEEP_DAILY to 0 , easing the load on the Manager machine. Use Basic when the Manager and Data Warehouse are installed on the same machine. Review the installation settings, and press Enter to accept the values and proceed with the installation: When your environment has been configured, engine-setup displays details about how to access your environment. If you chose to manually configure the firewall, engine-setup provides a custom list of ports that need to be opened, based on the options selected during setup. engine-setup also saves your answers to a file that can be used to reconfigure the Manager using the same values, and outputs the location of the log file for the Red Hat Virtualization Manager configuration process. If you intend to link your Red Hat Virtualization environment with a directory server, configure the date and time to synchronize with the system clock used by the directory server to avoid unexpected account expiry issues. See Synchronizing the System Clock with a Remote Server in the Red Hat Enterprise Linux System Administrator's Guide for more information. Install the certificate authority according to the instructions provided by your browser. You can get the certificate authority's certificate by navigating to http:// manager-fqdn /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA , replacing manager-fqdn with the FQDN that you provided during the installation. Log in to the Administration Portal, where you can add hosts and storage to the environment: 3.4. Connecting to the Administration Portal Access the Administration Portal using a web browser. In a web browser, navigate to https:// manager-fqdn /ovirt-engine , replacing manager-fqdn with the FQDN that you provided during installation. Note You can access the Administration Portal using alternate host names or IP addresses. 
To do so, you need to add a configuration file under /etc/ovirt-engine/engine.conf.d/ . For example: The list of alternate host names needs to be separated by spaces. You can also add the IP address of the Manager to the list, but using IP addresses instead of DNS-resolvable host names is not recommended. Click Administration Portal . An SSO login page displays. SSO login enables you to log in to the Administration and VM Portal at the same time. Enter your User Name and Password . If you are logging in for the first time, use the user name admin along with the password that you specified during installation. Select the Domain to authenticate against. If you are logging in using the internal admin user name, select the internal domain. Click Log In . You can view the Administration Portal in multiple languages. The default selection is chosen based on the locale settings of your web browser. If you want to view the Administration Portal in a language other than the default, select your preferred language from the drop-down list on the welcome page. To log out of the Red Hat Virtualization Administration Portal, click your user name in the header bar and click Sign Out . You are logged out of all portals and the Manager welcome screen displays.
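As a hedged example of the alternate host name configuration described above (the alias names are illustrative), the configuration file might be created as follows; the ovirt-engine service typically needs a restart before the new aliases are accepted:
cat > /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf <<'EOF'
SSO_ALTERNATE_ENGINE_FQDNS="alias1.example.com alias2.example.com"
EOF
systemctl restart ovirt-engine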
[ "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "yum repolist", "subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms", "yum update", "yum install rhvm", "engine-setup", "Configure Engine on this host (Yes, No) [Yes]:", "Configure ovirt-provider-ovn (Yes, No) [Yes]:", "Configure Image I/O Proxy on this host? (Yes, No) [Yes]:", "Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:", "Please note: Data Warehouse is required for the engine. If you choose to not configure it on this host, you have to configure it on a remote host, and then configure the engine on this host so that it can access the database of the remote Data Warehouse host. Configure Data Warehouse on this host (Yes, No) [Yes]:", "Configure VM Console Proxy on this host (Yes, No) [Yes]:", "Host fully qualified DNS name of this server [ autodetected host name ]:", "Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. NOTICE: iptables is deprecated and will be removed in future releases Do you want Setup to configure the firewall? (Yes, No) [Yes]:", "Where is the DWH database located? (Local, Remote) [Local]:", "Setup can configure the local postgresql server automatically for the DWH to run. This may conflict with existing applications. Would you like Setup to automatically configure postgresql and create DWH database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:", "DWH database secured connection (Yes, No) [No]: DWH database name [ovirt_engine_history]: DWH database user [ovirt_engine_history]: DWH database password:", "DWH database host [localhost]: DWH database port [5432]: DWH database secured connection (Yes, No) [No]: DWH database name [ovirt_engine_history]: DWH database user [ovirt_engine_history]: DWH database password:", "Where is the Engine database located? (Local, Remote) [Local]:", "Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications. Would you like Setup to automatically configure postgresql and create Engine database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:", "Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password:", "Engine admin password: Confirm engine admin password:", "Application mode (Both, Virt, Gluster) [Both]:", "Use default credentials (admin@internal) for ovirt-provider-ovn (Yes, No) [Yes]: oVirt OVN provider user[admin@internal]: oVirt OVN provider password:", "Default SAN wipe after delete (Yes, No) [No]:", "Organization name for certificate [ autodetected domain-based name ]:", "Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications. Do you wish to set the application as the default web page of the server? (Yes, No) [Yes]:", "Setup can configure apache to use SSL using a certificate issued from the internal CA. Do you wish Setup to configure that, or prefer to perform that manually? 
(Automatic, Manual) [Automatic]:", "Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]:", "Please confirm installation settings (OK, Cancel) [OK]:", "vi /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf SSO_ALTERNATE_ENGINE_FQDNS=\" alias1.example.com alias2.example.com \"" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/Installing_the_Red_Hat_Virtualization_Manager_SM_localDB_deploy
E.3. How to Identify and Assign IOMMU Groups
E.3. How to Identify and Assign IOMMU Groups This example demonstrates how to identify and assign the PCI devices that are present on the target system. For additional examples and information, see Section 16.7, "Assigning GPU Devices" . Procedure E.1. IOMMU groups List the devices Identify the devices in your system by running the virsh nodedev-list device-type command. This example demonstrates how to locate the PCI devices. The output has been truncated for brevity. Locate the IOMMU grouping of a device For each device listed, further information about the device, including the IOMMU grouping, can be found using the virsh nodedev-dumpxml name-of-device command. For example, to find the IOMMU grouping for the PCI device named pci_0000_04_00_0 (PCI address 0000:04:00.0), use the following command: This command generates an XML dump similar to the one shown. <device> <name>pci_0000_04_00_0</name> <path>/sys/devices/pci0000:00/0000:00:1c.0/0000:04:00.0</path> <parent>pci_0000_00_1c_0</parent> <capability type='pci'> <domain>0</domain> <bus>4</bus> <slot>0</slot> <function>0</function> <product id='0x10d3'>82574L Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='8'> <!--This is the element block you will need to use--> <address domain='0x0000' bus='0x00' slot='0x1c' function='0x0'/> <address domain='0x0000' bus='0x00' slot='0x1c' function='0x4'/> <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </iommuGroup> <pci-express> <link validity='cap' port='0' speed='2.5' width='1'/> <link validity='sta' speed='2.5' width='1'/> </pci-express> </capability> </device> Figure E.1. IOMMU Group XML View the PCI data In the output collected above, there is one IOMMU group with 4 devices. This is an example of a multi-function PCIe root port without ACS support. The two functions in slot 0x1c are PCIe root ports, which can be identified by running the lspci command (from the pciutils package): Repeat this step for the two PCIe devices on buses 0x04 and 0x05, which are endpoint devices. Assign the endpoints to the guest virtual machine To assign either one of the endpoints to a virtual machine, the endpoint that you are not assigning at the moment must be bound to a VFIO-compatible driver so that the IOMMU group is not split between user and host drivers. For example, using the output collected above, if you were to configure a virtual machine with only 04:00.0, the virtual machine would fail to start unless 05:00.0 is detached from host drivers. To detach 05:00.0, run the virsh nodedev-detach command as root: Assigning both endpoints to the virtual machine is another option for resolving this issue. Note that libvirt automatically performs this operation for the attached devices when the managed attribute within the <hostdev> element is set to yes . For example: <hostdev mode='subsystem' type='pci' managed='yes'> . See Note for more information. Note libvirt has two ways to handle PCI devices. They can be either managed or unmanaged. This is determined by the value given to the managed attribute within the <hostdev> element. When the device is managed, libvirt automatically detaches the device from the existing driver and then assigns it to the virtual machine by binding it to vfio-pci on boot (for the virtual machine).
When the virtual machine is shut down or deleted, or the PCI device is detached from the virtual machine, libvirt unbinds the device from vfio-pci and rebinds it to the original driver. If the device is unmanaged, libvirt does not automate this process: you must detach the device from its host driver and bind it to vfio-pci before assigning it to a virtual machine, and rebind it to its original driver after the virtual machine no longer uses it. Failing to perform these steps for an unmanaged device causes the virtual machine to fail to start. Therefore, it is often easier to let libvirt manage the device.
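A minimal sketch of assigning the example endpoint with a managed hostdev definition follows (the guest name rhel7-guest is hypothetical; the PCI address matches the 04:00.0 endpoint from the example above):
cat > hostdev-04-00-0.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
virsh attach-device rhel7-guest hostdev-04-00-0.xml --config   # persistent attachment; applied the next time the guest starts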
[ "virsh nodedev-list pci pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_03_0 pci_0000_00_07_0 [...] pci_0000_00_1c_0 pci_0000_00_1c_4 [...] pci_0000_01_00_0 pci_0000_01_00_1 [...] pci_0000_03_00_0 pci_0000_03_00_1 pci_0000_04_00_0 pci_0000_05_00_0 pci_0000_06_0d_0", "virsh nodedev-dumpxml pci_0000_04_00_0", "<device> <name>pci_0000_04_00_0</name> <path>/sys/devices/pci0000:00/0000:00:1c.0/0000:04:00.0</path> <parent>pci_0000_00_1c_0</parent> <capability type='pci'> <domain>0</domain> <bus>4</bus> <slot>0</slot> <function>0</function> <product id='0x10d3'>82574L Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='8'> <!--This is the element block you will need to use--> <address domain='0x0000' bus='0x00' slot='0x1c' function='0x0'/> <address domain='0x0000' bus='0x00' slot='0x1c' function='0x4'/> <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </iommuGroup> <pci-express> <link validity='cap' port='0' speed='2.5' width='1'/> <link validity='sta' speed='2.5' width='1'/> </pci-express> </capability> </device>", "lspci -s 1c 00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1 00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 5", "lspci -s 4 04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection This is used in the next step and is called 04:00.0 lspci -s 5 This is used in the next step and is called 05:00.0 05:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5755 Gigabit Ethernet PCI Express (rev 02)", "virsh nodedev-detach pci_0000_05_00_0 Device pci_0000_05_00_0 detached" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/iommu-example
Preface
Preface As an OpenShift AI administrator, you can manage the following resources: Custom workbench images Cluster PVC size Cluster storage classes OpenShift AI admin and user groups Jupyter notebook servers You can also specify whether to allow Red Hat to collect data about OpenShift AI usage in your cluster.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_resources/pr01
Chapter 5. Downgrading Red Hat Quay
Chapter 5. Downgrading Red Hat Quay Red Hat Quay only supports rolling back, or downgrading, to z-stream versions, for example, from 3.7.2 to 3.7.1. Rolling back to y-stream versions (for example, from 3.7.0 to 3.6.0) is not supported. This is because Red Hat Quay updates might contain database schema upgrades that are applied when upgrading to a new version of Red Hat Quay. Database schema upgrades are not considered backwards compatible. Important Downgrading to z-streams is neither recommended nor supported by either Operator-based deployments or virtual machine-based deployments. Downgrading should only be done in extreme circumstances. The decision to roll back your Red Hat Quay deployment must be made in conjunction with the Red Hat Quay support and development teams. For more information, contact Red Hat Quay support.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/upgrade_red_hat_quay/downgrade-quay-deployment
Chapter 1. Knative Serving CLI commands
Chapter 1. Knative Serving CLI commands 1.1. kn service commands You can use the following commands to create and manage Knative services. 1.1.1. Creating serverless applications by using the Knative CLI Using the Knative ( kn ) CLI to create serverless applications provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service create command to create a basic serverless application. Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Knative service: USD kn service create <service-name> --image <image> --tag <tag-value> Where: --image is the URI of the image for the application. --tag is an optional flag that can be used to add a tag to the initial revision that is created with the service. Example command USD kn service create showcase \ --image quay.io/openshift-knative/showcase Example output Creating service 'showcase' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration "showcase" is waiting for a Revision to become ready. 3.857s ... 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'showcase' created with latest revision 'showcase-00001' and URL: http://showcase-default.apps-crc.testing 1.1.2. Updating serverless applications by using the Knative CLI You can use the kn service update command for interactive sessions on the command line as you build up a service incrementally. In contrast to the kn service apply command, when using the kn service update command you only have to specify the changes that you want to update, rather than the full configuration for the Knative service. Example commands Update a service by adding a new environment variable: USD kn service update <service_name> --env <key>=<value> Update a service by adding a new port: USD kn service update <service_name> --port 80 Update a service by adding new request and limit parameters: USD kn service update <service_name> --request cpu=500m --limit memory=1024Mi --limit cpu=1000m Assign the latest tag to a revision: USD kn service update <service_name> --tag <revision_name>=latest Update a tag from testing to staging for the latest READY revision of a service: USD kn service update <service_name> --untag testing --tag @latest=staging Add the test tag to a revision that receives 10% of traffic, and send the rest of the traffic to the latest READY revision of a service: USD kn service update <service_name> --tag <revision_name>=test --traffic test=10,@latest=90 1.1.3. Applying service declarations You can declaratively configure a Knative service by using the kn service apply command. If the service does not exist it is created, otherwise the existing service is updated with the options that have been changed. The kn service apply command is especially useful for shell scripts or in a continuous integration pipeline, where users typically want to fully specify the state of the service in a single command to declare the target state. When using kn service apply you must provide the full configuration for the Knative service. This is different from the kn service update command, which only requires you to specify in the command the options that you want to update. 
Example commands Create a service: USD kn service apply <service_name> --image <image> Add an environment variable to a service: USD kn service apply <service_name> --image <image> --env <key>=<value> Read the service declaration from a JSON or YAML file: USD kn service apply <service_name> -f <filename> 1.1.4. Describing serverless applications by using the Knative CLI You can describe a Knative service by using the kn service describe command. Example commands Describe a service: USD kn service describe --verbose <service_name> The --verbose flag is optional but can be included to provide a more detailed description. The difference between a regular and verbose output is shown in the following examples: Example output without --verbose flag Name: showcase Namespace: default Age: 2m URL: http://showcase-default.apps.ocp.example.com Revisions: 100% @latest (showcase-00001) [1] (2m) Image: quay.io/openshift-knative/showcase (pinned to aaea76) Conditions: OK TYPE AGE REASON ++ Ready 1m ++ ConfigurationsReady 1m ++ RoutesReady 1m Example output with --verbose flag Name: showcase Namespace: default Annotations: serving.knative.dev/creator=system:admin serving.knative.dev/lastModifier=system:admin Age: 3m URL: http://showcase-default.apps.ocp.example.com Cluster: http://showcase.default.svc.cluster.local Revisions: 100% @latest (showcase-00001) [1] (3m) Image: quay.io/openshift-knative/showcase (pinned to aaea76) Env: GREET=Bonjour Conditions: OK TYPE AGE REASON ++ Ready 3m ++ ConfigurationsReady 3m ++ RoutesReady 3m Describe a service in YAML format: USD kn service describe <service_name> -o yaml Describe a service in JSON format: USD kn service describe <service_name> -o json Print the service URL only: USD kn service describe <service_name> -o url 1.2. kn service commands in offline mode 1.2.1. About the Knative CLI offline mode When you execute kn service commands, the changes immediately propagate to the cluster. However, as an alternative, you can execute kn service commands in offline mode. When you create a service in offline mode, no changes happen on the cluster, and instead the service descriptor file is created on your local machine. Important The offline mode of the Knative CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After the descriptor file is created, you can manually modify it and track it in a version control system. You can also propagate changes to the cluster by using the kn service create -f , kn service apply -f , or oc apply -f commands on the descriptor files. The offline mode has several uses: You can manually modify the descriptor file before using it to make changes on the cluster. You can locally track the descriptor file of a service in a version control system. This enables you to reuse the descriptor file in places other than the target cluster, for example in continuous integration (CI) pipelines, development environments, or demos. You can examine the created descriptor files to learn about Knative services. 
In particular, you can see how the resulting service is influenced by the different arguments passed to the kn command. The offline mode has its advantages: it is fast, and does not require a connection to the cluster. However, offline mode lacks server-side validation. Consequently, you cannot, for example, verify that the service name is unique or that the specified image can be pulled. 1.2.2. Creating a service using offline mode You can execute kn service commands in offline mode, so that no changes happen on the cluster, and instead the service descriptor file is created on your local machine. After the descriptor file is created, you can modify the file before propagating changes to the cluster. Important The offline mode of the Knative CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. Procedure In offline mode, create a local Knative service descriptor file: USD kn service create showcase \ --image quay.io/openshift-knative/showcase \ --target ./ \ --namespace test Example output Service 'showcase' created in namespace 'test'. The --target ./ flag enables offline mode and specifies ./ as the directory for storing the new directory tree. If you do not specify an existing directory, but use a filename, such as --target my-service.yaml , then no directory tree is created. Instead, only the service descriptor file my-service.yaml is created in the current directory. The filename can have the .yaml , .yml , or .json extension. Choosing .json creates the service descriptor file in the JSON format. The --namespace test option places the new service in the test namespace. If you do not use --namespace , and you are logged in to an OpenShift Container Platform cluster, the descriptor file is created in the current namespace. Otherwise, the descriptor file is created in the default namespace. Examine the created directory structure: USD tree ./ Example output ./ └── test └── ksvc └── showcase.yaml 2 directories, 1 file The current ./ directory specified with --target contains the new test/ directory that is named after the specified namespace. The test/ directory contains the ksvc directory, named after the resource type. The ksvc directory contains the descriptor file showcase.yaml , named according to the specified service name. 
Examine the generated service descriptor file: USD cat test/ksvc/showcase.yaml Example output apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: showcase namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/showcase creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/showcase name: "" resources: {} status: {} List information about the new service: USD kn service describe showcase --target ./ --namespace test Example output Name: showcase Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON The --target ./ option specifies the root directory for the directory structure containing namespace subdirectories. Alternatively, you can directly specify a YAML or JSON filename with the --target option. The accepted file extensions are .yaml , .yml , and .json . The --namespace option specifies the namespace, which communicates to kn the subdirectory that contains the necessary service descriptor file. If you do not use --namespace , and you are logged in to an OpenShift Container Platform cluster, kn searches for the service in the subdirectory that is named after the current namespace. Otherwise, kn searches in the default/ subdirectory. Use the service descriptor file to create the service on the cluster: USD kn service create -f test/ksvc/showcase.yaml Example output Creating service 'showcase' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s ... 0.168s Configuration "showcase" is waiting for a Revision to become ready. 23.377s ... 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. Service 'showcase' created to latest revision 'showcase-00001' is available at URL: http://showcase-test.apps.example.com 1.3. kn container commands You can use the following commands to create and manage multiple containers in a Knative service spec. 1.3.1. Knative client multi-container support You can use the kn container add command to print YAML container spec to standard output. This command is useful for multi-container use cases because it can be used along with other standard kn flags to create definitions. The kn container add command accepts all container-related flags that are supported for use with the kn service create command. The kn container add command can also be chained by using UNIX pipes ( | ) to create multiple container definitions at once. Example commands Add a container from an image and print it to standard output: USD kn container add <container_name> --image <image_uri> Example command USD kn container add sidecar --image docker.io/example/sidecar Example output containers: - image: docker.io/example/sidecar name: sidecar resources: {} Chain two kn container add commands together, and then pass them to a kn service create command to create a Knative service with two containers: USD kn container add <first_container_name> --image <image_uri> | \ kn container add <second_container_name> --image <image_uri> | \ kn service create <service_name> --image <image_uri> --extra-containers - --extra-containers - specifies a special case where kn reads the pipe input instead of a YAML file. 
Example command USD kn container add sidecar --image docker.io/example/sidecar:first | \ kn container add second --image docker.io/example/sidecar:second | \ kn service create my-service --image docker.io/example/my-app:latest --extra-containers - The --extra-containers flag can also accept a path to a YAML file: USD kn service create <service_name> --image <image_uri> --extra-containers <filename> Example command USD kn service create my-service --image docker.io/example/my-app:latest --extra-containers my-extra-containers.yaml 1.4. kn domain commands You can use the following commands to create and manage domain mappings. 1.4.1. Creating a custom domain mapping by using the Knative CLI Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have created a Knative service or route, and control a custom domain that you want to map to that CR. Note Your custom domain must point to the DNS of the OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Map a domain to a CR in the current namespace: USD kn domain create <domain_mapping_name> --ref <target_name> Example command USD kn domain create example.com --ref showcase The --ref flag specifies an Addressable target CR for domain mapping. If a prefix is not provided when using the --ref flag, it is assumed that the target is a Knative service in the current namespace. Map a domain to a Knative service in a specified namespace: USD kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace> Example command USD kn domain create example.com --ref ksvc:showcase:example-namespace Map a domain to a Knative route: USD kn domain create <domain_mapping_name> --ref <kroute:route_name> Example command USD kn domain create example.com --ref kroute:example-route 1.4.2. Managing custom domain mappings by using the Knative CLI After you have created a DomainMapping custom resource (CR), you can list existing CRs, view information about an existing CR, update CRs, or delete CRs by using the Knative ( kn ) CLI. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have created at least one DomainMapping CR. You have installed the Knative ( kn ) CLI tool. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure List existing DomainMapping CRs: USD kn domain list -n <domain_mapping_namespace> View details of an existing DomainMapping CR: USD kn domain describe <domain_mapping_name> Update a DomainMapping CR to point to a new target: USD kn domain update --ref <target> Delete a DomainMapping CR: USD kn domain delete <domain_mapping_name>
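As a short end-to-end sketch tying several of the commands above together (the service name showcase and revision name showcase-00001 follow the examples in this chapter; adjust them to your environment):
kn service create showcase --image quay.io/openshift-knative/showcase
kn service update showcase --env GREET=Bonjour
kn service update showcase --tag showcase-00001=stable --traffic stable=50,@latest=50
kn revision list -s showcase      # confirm how traffic is split across revisions
kn route describe showcase        # show the tags and URLs that route to each revision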
[ "kn service create <service-name> --image <image> --tag <tag-value>", "kn service create showcase --image quay.io/openshift-knative/showcase", "Creating service 'showcase' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration \"showcase\" is waiting for a Revision to become ready. 3.857s 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'showcase' created with latest revision 'showcase-00001' and URL: http://showcase-default.apps-crc.testing", "kn service update <service_name> --env <key>=<value>", "kn service update <service_name> --port 80", "kn service update <service_name> --request cpu=500m --limit memory=1024Mi --limit cpu=1000m", "kn service update <service_name> --tag <revision_name>=latest", "kn service update <service_name> --untag testing --tag @latest=staging", "kn service update <service_name> --tag <revision_name>=test --traffic test=10,@latest=90", "kn service apply <service_name> --image <image>", "kn service apply <service_name> --image <image> --env <key>=<value>", "kn service apply <service_name> -f <filename>", "kn service describe --verbose <service_name>", "Name: showcase Namespace: default Age: 2m URL: http://showcase-default.apps.ocp.example.com Revisions: 100% @latest (showcase-00001) [1] (2m) Image: quay.io/openshift-knative/showcase (pinned to aaea76) Conditions: OK TYPE AGE REASON ++ Ready 1m ++ ConfigurationsReady 1m ++ RoutesReady 1m", "Name: showcase Namespace: default Annotations: serving.knative.dev/creator=system:admin serving.knative.dev/lastModifier=system:admin Age: 3m URL: http://showcase-default.apps.ocp.example.com Cluster: http://showcase.default.svc.cluster.local Revisions: 100% @latest (showcase-00001) [1] (3m) Image: quay.io/openshift-knative/showcase (pinned to aaea76) Env: GREET=Bonjour Conditions: OK TYPE AGE REASON ++ Ready 3m ++ ConfigurationsReady 3m ++ RoutesReady 3m", "kn service describe <service_name> -o yaml", "kn service describe <service_name> -o json", "kn service describe <service_name> -o url", "kn service create showcase --image quay.io/openshift-knative/showcase --target ./ --namespace test", "Service 'showcase' created in namespace 'test'.", "tree ./", "./ └── test └── ksvc └── showcase.yaml 2 directories, 1 file", "cat test/ksvc/showcase.yaml", "apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: showcase namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/showcase creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/showcase name: \"\" resources: {} status: {}", "kn service describe showcase --target ./ --namespace test", "Name: showcase Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON", "kn service create -f test/ksvc/showcase.yaml", "Creating service 'showcase' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s 0.168s Configuration \"showcase\" is waiting for a Revision to become ready. 23.377s 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. 
Service 'showcase' created to latest revision 'showcase-00001' is available at URL: http://showcase-test.apps.example.com", "kn container add <container_name> --image <image_uri>", "kn container add sidecar --image docker.io/example/sidecar", "containers: - image: docker.io/example/sidecar name: sidecar resources: {}", "kn container add <first_container_name> --image <image_uri> | kn container add <second_container_name> --image <image_uri> | kn service create <service_name> --image <image_uri> --extra-containers -", "kn container add sidecar --image docker.io/example/sidecar:first | kn container add second --image docker.io/example/sidecar:second | kn service create my-service --image docker.io/example/my-app:latest --extra-containers -", "kn service create <service_name> --image <image_uri> --extra-containers <filename>", "kn service create my-service --image docker.io/example/my-app:latest --extra-containers my-extra-containers.yaml", "kn domain create <domain_mapping_name> --ref <target_name>", "kn domain create example.com --ref showcase", "kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>", "kn domain create example.com --ref ksvc:showcase:example-namespace", "kn domain create <domain_mapping_name> --ref <kroute:route_name>", "kn domain create example.com --ref kroute:example-route", "kn domain list -n <domain_mapping_namespace>", "kn domain describe <domain_mapping_name>", "kn domain update --ref <target>", "kn domain delete <domain_mapping_name>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/knative_cli/knative-serving-cli-commands
Chapter 4. Deploying the overcloud for RHOSP dynamic routing
Chapter 4. Deploying the overcloud for RHOSP dynamic routing Use Red Hat OpenStack Platform (RHOSP) director to install and configure RHOSP dynamic routing in the overcloud. The high-level steps are: Define the overcloud networks for each leaf . Create a composable role- including the Free Range Routing (FRR) role- for each leaf and attach the composable network to each respective role . Create a unique NIC configuration for each role . Change the bridge mappings so that each leaf routes traffic through the specific bridge or VLAN on that leaf . Define virtual IPs (VIPs), if applicable, for your overcloud endpoints, and identify the subnet for each VIP . Provision your overcloud networks and overcloud VIPs . Register the bare metal nodes in your overcloud . Note Skip steps 7, 8, and 9 if you are using pre-provisioned bare metal nodes. Introspect the bare metal nodes in your overcloud . Provision bare metal nodes . Deploy Ceph in your dynamic routing environment . Deploy your overcloud using the configuration you set in the earlier steps . 4.1. Defining the leaf networks The Red Hat OpenStack Platform (RHOSP) director creates the overcloud leaf networks from a YAML-formatted, custom network definition file that you construct. This custom network definition file lists each composable network and its attributes and also defines the subnets needed for each leaf. Complete the following steps to create a YAML-formatted, custom network definition file that contains the specifications for your spine-leaf network on the overcloud. Later, the provisioning process creates a heat environment file from your network definition file that you include when you deploy your RHOSP overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Create a templates directory under /home/stack : Copy the default network definition template, routed-networks.yaml , to your custom templates directory: Example Edit your copy of the network definition template to define each base network and each of the associated leaf subnets as a composable network item. Tip For information, see Network definition file configuration options in the Installing and managing Red Hat OpenStack Platform with director guide. Example The following example demonstrates how to define the Internal API network and its leaf networks: Note You do not define the Control Plane networks in your custom network definition template, because the undercloud has already created these networks. However, you must set the parameters manually so that the overcloud can configure the NICs accordingly. For more information, see Deploying the undercloud for RHOSP dynamic routing . Note RHOSP does not perform automatic validation of the network subnet and allocation_pools values. Ensure that you define these values consistently and that they do not conflict with existing networks. Note Add the vip parameter and set the value to true for the networks that host the Controller-based services. In this example, the InternalApi network contains these services. steps Note the path and file name of the custom network definition file that you have created. You need this information later when you provision your networks for the RHOSP overcloud. Proceed to the step Defining leaf roles and attaching networks . 
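Before moving on, here is a rough sketch of how an Internal API entry with a leaf subnet might be appended to the custom network definition file described above; the subnet names, VLAN IDs, and CIDRs are illustrative only and must match your fabric, and the file path follows the example used later in this chapter:
cat >> /home/stack/templates/spine-leaf-networks-data.yaml <<'EOF'
- name: InternalApi
  name_lower: internal_api
  vip: true
  subnets:
    internal_api_subnet:
      vlan: 10
      ip_subnet: 172.17.0.0/24
      allocation_pools:
        - start: 172.17.0.10
          end: 172.17.0.250
    internal_api_leaf1:
      vlan: 11
      ip_subnet: 172.17.1.0/24
      allocation_pools:
        - start: 172.17.1.10
          end: 172.17.1.250
EOF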
Additional resources Network definition file configuration options in the Installing and managing Red Hat OpenStack Platform with director guide 4.2. Defining leaf roles and attaching networks The Red Hat OpenStack Platform (RHOSP) director creates a composable role for each leaf and attaches the composable network to each respective role from a roles template that you construct. Start by copying the default Controller, Compute, and Ceph Storage roles from the director core templates, and modifying these to meet your environment's needs. After you have created all of the individual roles, you run the openstack overcloud roles generate command to concatenate them into one large custom roles data file. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Copy the default roles for Controller, Compute, and Ceph Storage roles that ship with RHOSP to the home directory of the stack user. Rename the files to indicate that they are leaf 0: Copy the leaf 0 files to create your leaf 1 and leaf 2 files: Edit the parameters in each file to align with their respective leaf parameters. Tip For information about the various parameters in a roles data template, see Examining role parameters in the Customizing your Red Hat OpenStack Platform deployment guide. Example - ComputeLeaf0 Example - CephStorageLeaf0 Edit the network parameter in the leaf 1 and leaf 2 files so that they align with the respective leaf network parameters. Example - ComputeLeaf1 Example - CephStorageLeaf1 Note This applies only to leaf 1 and leaf 2. The network parameter for leaf 0 retains the base subnet values, which are the lowercase names of each subnet combined with a _subnet suffix. For example, the Internal API for leaf 0 is internal_api_subnet . In each Controller, Compute, and (if present) Networker role file, add the OVN BGP agent to the list of services under the ServicesDefault parameter: Example When your role configuration is complete, run the overcloud roles generate command to generate the full roles data file. Example This creates one custom roles data file that includes all of the custom roles for each respective leaf network. steps Note the path and file name of the custom roles data file created by the overcloud roles generate command. You use this path later when you deploy your overcloud. Proceed to the step Creating a custom NIC configuration for leaf roles . Additional resources Examining role parameters in the Customizing your Red Hat OpenStack Platform deployment guide 4.3. Creating a custom NIC configuration for leaf roles Each role that the Red Hat OpenStack Platform (RHOSP) director creates requires a unique NIC configuration. Complete the following steps to create a custom set of NIC templates and a custom environment file that maps the custom templates to the respective role. Prerequisites Access to the undercloud host and credentials for the stack user. You have a custom network definition file. You have a custom roles data file. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Copy the content from one of the default NIC templates to create a custom template for your NIC configuration. 
Example In this example, the single-nic-vlans NIC template is copied to use for a custom template for your NIC configuration: In each of the NIC templates that you created in the earlier step, change the NIC configuration to match the specifics for your spine-leaf topology. Example Tip For more information, see Defining custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. Create a custom environment file, such as spine-leaf-nic-roles-map.yaml , that contains a parameter_defaults section that maps the custom NIC templates to each custom role. Example steps Note the path and file name of your custom NIC templates and the custom environment file that maps the custom NIC templates to each custom role. You use this path later when you deploy your overcloud. Proceed to the step Configuring the leaf networks . Additional resources Defining custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide 4.4. Configuring the leaf networks In a spine leaf architecture, each leaf routes traffic through the specific bridge or VLAN on that leaf, which is often the case with edge computing scenarios. So, you must change the default mappings where the Red Hat OpenStack Platform (RHOSP) Controller and Compute network configurations use an OVS provider bridge ( br-ex ). The RHOSP director creates the control plane network during undercloud creation. However, the overcloud requires access to the control plane for each leaf. To enable this access, you must define additional parameters in your deployment. You must set some basic FRRouting and OVN BGP agent configurations. Complete the following steps to create a custom network environment file that contains the separate network mappings and sets access to the control plane networks for the overcloud. Prerequisites You must be the stack user with access to the RHOSP undercloud. The undercloud is installed. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: In a new custom environment file, such as spine-leaf-ctlplane.yaml , create a parameter_defaults section and set the NeutronBridgeMappings parameter for each leaf that uses the default OVS provider bridge ( br-ex ). Important The name of the custom environment file that you create to contain your network definition must end in either .yaml or .template . Example Tip For more information, see Chapter 17. Networking (neutron) Parameters in the Overcloud parameters guide. For VLAN network mappings, add vlan to NeutronNetworkType , and by using NeutronNetworkVLANRanges , map VLANs for the leaf networks: Example Note You can use both flat networks and VLANs in your spine-leaf topology. Add the control plane subnet mapping for each spine-leaf network by using the <role>ControlPlaneSubnet parameter: Example Set the OVN BGP agent, FRRouting, and CMS options for each leaf. Note The FRR service runs on all the RHOSP nodes to provide connectivity between the control plane and services running on different nodes across the data plane. However, you must run the OVN BGP agent only on all Compute nodes and on nodes configured with enable-chassis-as-gw . For RHOSP nodes where no data plane routes are exposed, disable the OVN BGP agent for these roles by setting the tripleo_frr_ovn_bgp_agent_enable parameter to false . The default is true . Example Example Tip For more information, see Chapter 17. 
Networking (neutron) Parameters in the Overcloud parameters guide. OVNCMSOptions The CMS options to configure in OVSDB. FrrOvnBgpAgentReconcileInterval Defines how frequently to check the status, to ensure that only the correct IPs are exposed on the correct locations. Default: 120. FrrOvnBgpAgentOvsdbConnection The connection string for the native OVSDB backend. Use tcp:<IP_address>:<port> for TCP connection. Default: tcp:127.0.0.1:6640 . FrrOvnBgpAgentExposeTenantNetworks Exposes VM IPs on tenant networks via MP-BGP IPv4 and IPv6 unicast. Requires the BGP driver (see THT parameter FrrOvnBgpAgentDriver). Default: false. FrrOvnBgpAgentDriver Configures how VM IPs are advertised via BGP. EVPN driver exposes VM IPs on provider networks and FIPs associated to VMs on tenant networks via MP-BGP IPv4 and IPv6 unicast. BGP driver exposes VM IPs on the tenant networks via MP-BGP EVPN VXLAN. Default: ovn_evpn_driver . FrrOvnBgpAgentAsn Autonomous system number (ASN) to be used by the agent when running in BGP mode. Default: 64999. FrrOvnBgpAgentAsn can be set to a different value for each role that is used. FrrLogLevel Log level. Default: informational . FrrBgpAsn Default ASN to be used within FRR. Default: 65000. FrrBgpAsn can be set to a different value for each role that is used. Enable the graceful restart option for the BGP and zebra daemons by adding the following values: Replace <ROLENAME> with each role using FRR that you want to modify, for example, ControllerRack1 , ComputeLeaf1 , and so on. For more information, see Graceful Restart and Invoking zebra . To use dynamic routing for the different OpenStack services, add the following configuration: steps Note the path and file name of the custom network environment file that you have created. You need this path later when you deploy your overcloud. Proceed to the step Setting the subnet for virtual IP addresses . Additional resources Chapter 17. Networking (neutron) Parameters in the Overcloud parameters guide 4.5. Setting the subnet for virtual IP addresses By default, the Red Hat Openstack Platform (RHOSP) Controller role hosts virtual IP (VIP) addresses for each network. The RHOSP overcloud takes the VIPs from the base subnet of each network except for the control plane. The control plane uses ctlplane-subnet , which is the default subnet name created during a standard undercloud installation. In the spine-leaf examples used in this document, the default base provisioning network is leaf0 instead of ctlplane-subnet . This means that you must add the value pair subnet: leaf0 to the network:ctlplane parameter to map the subnet to leaf0 . Complete the following steps to create a YAML-formatted, custom network VIP definition file that contains the overrides for your VIPs on the overcloud. Later, the provisioning process creates a heat environment file from your network VIP definition file that you include when you deploy your RHOSP overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: In a new custom network VIP definition template, such as spine-leaf-vip-data.yaml , list the virtual IP addresses that need to be created on the specific subnet used by controller nodes. Example You can use the following parameters in your spine-leaf-vip-data.yaml file: network Sets the neutron network name. This is the only required parameter. ip_address Sets the IP address of the VIP. 
subnet Sets the neutron subnet name. Use this parameter to specify the subnet when creating the virtual IP neutron port. This parameter is required when your deployment uses routed networks. dns_name Sets the FQDN (Fully Qualified Domain Name). name Sets the virtual IP name. Tip For more information, see Adding a composable network in the Installing and managing Red Hat OpenStack Platform with director guide. steps Note the path and file name of the custom network VIP definition template that you have created. You use this path later when you provision your network VIPs for the RHOSP overcloud. Proceed to the step Provisioning networks and VIPs for the overcloud . 4.6. Provisioning networks and VIPs for the overcloud The Red Hat OpenStack Platform (RHOSP) provisioning process uses your network definition file to create a new heat environment file that contains your network specifications. If your deployment uses VIPs, RHOSP creates a new heat environment file from your VIP definition file. After you provision your networks and VIPs, you have two heat environment files that you use later to deploy your overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. You have a network configuration template. If you are using VIPs, you have a VIP definition template. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Provision your overcloud networks. Use the overcloud network provision command, and provide the path to the network definition file that you created earlier. Tip For more information, see Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide. Example In this example, the path is /home/stack/templates/spine-leaf-networks-data.yaml . Use the --output argument to name the file created by the command. Important The name of the output file that you specify must end in either .yaml or .template . Provision your overcloud VIPs. Use the overcloud network vip provision command with the --stack argument to name the overcloud stack, and provide the path to the VIP definition file that you created earlier. Use the --output argument to name the file created by the command. Tip For more information, see Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Important The name of the output file that you specify must end in either .yaml or .template . Note the path and file names of the generated output files. You use this information later when you deploy your overcloud. Verification You can use the following commands to confirm that the command created your overcloud networks and subnets: Replace <network>, <subnet>, and <port> with the name or UUID of the network, subnet, and port that you want to check. steps If you are using pre-provisioned nodes, skip to Running the overcloud deployment command . Otherwise, proceed to the step Registering bare metal nodes on the overcloud . Additional resources Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide overcloud network provision in the Command line interface reference overcloud network vip provision in the Command line interface reference 4.7.
Registering bare metal nodes on the overcloud Red Hat OpenStack Platform (RHOSP) director requires a custom node definition template that specifies the hardware and power management details of your physical machines. You can create this template in JSON or YAML format. After you register your physical machines as bare metal nodes, you introspect them, and then you finally provision them. Note If you are using pre-provisioned bare metal nodes, then you can skip registering, introspecting, and provisioning bare metal nodes, and go to Deploying a spine-leaf enabled overcloud . Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Create a new node definition template, such as baremetal-nodes.yaml . Add a list of your physical machines that includes their hardware and power management details. Example Tip For more information about template parameter values and for a JSON example, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Verify the template formatting and syntax. Example Correct any errors and save your node definition template. Import your node definition template to RHOSP director to register each node from your template into director: Example Verification When the node registration and configuration are complete, confirm that director has successfully registered the nodes: The baremetal node list command should include the imported nodes and the status should be manageable . steps Proceed to the step, Introspecting bare metal nodes on the overcloud . Additional resources Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. overcloud node import in the Command line interface reference 4.8. Introspecting bare metal nodes on the overcloud After you register a physical machine as a bare metal node, you can use Red Hat OpenStack Platform (RHOSP) director introspection to automatically add the node's hardware details and create ports for each of its Ethernet MAC addresses. After you perform introspection on your bare metal nodes, the final step is to provision them. Note If you are using pre-provisioned bare metal nodes, then you can skip introspecting bare metal nodes and go to Deploying a spine-leaf enabled overcloud . Prerequisites Access to the undercloud host and credentials for the stack user. You have registered your bare metal nodes for your overcloud with RHOSP. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the pre-introspection validation group to check the introspection requirements: Review the results of the validation report. (Optional) Review detailed output from a specific validation: Replace <UUID> with the UUID of the specific validation from the report that you want to review. Important A FAILED validation does not prevent you from deploying or running RHOSP. However, a FAILED validation can indicate a potential issue with a production environment. Inspect the hardware attributes of all nodes: Tip For more information, see Using director introspection to collect bare metal node hardware information in the Installing and managing Red Hat OpenStack Platform with director guide. Monitor the introspection progress logs in a separate terminal window: Verification After the introspection completes, all nodes change to an available state.
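For example, you can confirm the state transition from the undercloud (a minimal sketch; the exact columns in the output depend on your RHOSP version):

# After introspection finishes, every imported node should report
# "available" in the Provisioning State column.
openstack baremetal node list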
steps Proceed to the step, Provisioning bare metal nodes for the overcloud . Additional resources Using director introspection to collect bare metal node hardware information in the Installing and managing Red Hat OpenStack Platform with director guide overcloud node introspect in the Command line interface reference 4.9. Provisioning bare metal nodes for the overcloud To provision your bare metal nodes for Red Hat OpenStack Platform (RHOSP), you define the quantity and attributes of the bare metal nodes that you want to deploy and assign overcloud roles to these nodes. You also define the network layout of the nodes. You add all of this information to a node definition file in YAML format. The provisioning process creates a heat environment file from your node definition file. This heat environment file contains the node specifications you configured in your node definition file, including node count, predictive node placement, custom images, and custom NICs. When you deploy your overcloud, include this heat environment file in the deployment command. The provisioning process also provisions the port resources for all networks defined for each node or role in the node definition file. Note If you are using pre-provisioned bare metal nodes, then you can skip provisioning bare metal nodes and go to Deploying a spine-leaf enabled overcloud . Prerequisites Access to the undercloud host and credentials for the stack user. The bare metal nodes are registered, introspected, and available for provisioning and deployment. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Create a bare metal node definition file, such as spine-leaf-baremetal-nodes.yaml , and define the node count for each role that you want to provision. Example Tip For more information about the properties that you can set in the bare metal node definition file, see Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Provision the overcloud bare metal nodes by using the overcloud node provision command. Example Important The name of the output file that you specify must end in either .yaml or .template . Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active : Use the metalsmith tool to obtain a unified view of your nodes, including allocations and ports: Note the path and file name of the generated output file. You need this path later when you deploy your overcloud. Verification Confirm the association of nodes to hostnames: steps Proceed to the step Deploying Ceph in your dynamic routing environment . Additional resources Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide 4.10. Deploying Ceph in your dynamic routing environment With some adjustments to your normal configuration, you can deploy Red Hat Ceph Storage in a Red Hat OpenStack Platform (RHOSP) environment that uses dynamic routing. Note When you install Red Hat Ceph Storage in a Red Hat OpenStack Platform environment that uses dynamic routing, you must install Ceph before you deploy the overcloud. In the following example configuration, you use the provisioning network to deploy Ceph. For optimal performance, we recommend that you use dedicated NICs and network hardware when you deploy Ceph Storage in a RHOSP dynamic routing environment.
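Before you begin the following procedure, it can help to gather the subnet CIDRs that you add to the Ceph configuration. This is a minimal sketch only, assuming that undercloud.conf is in the default location for the stack user and that each routed subnet section uses the standard cidr key; the section names in your file depend on your environment:

# List the provisioning subnet sections and their CIDRs defined in undercloud.conf.
# Only the CIDRs that correspond to overcloud nodes are added to the Ceph
# public_network and cluster_network parameters in the procedure that follows.
grep -E '^\[|^cidr' /home/stack/undercloud.conf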
Procedure Follow the instructions for how to install Red Hat Ceph Storage in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . At the step where you configure the Ceph Storage cluster, follow these additional steps: Create the Ceph configuration file and add a section named [global] . Example In this example, the Ceph configuration file is named initial-ceph.conf : In the [global] section, include the public_network and cluster_network parameters, and add to each the subnet CIDRs that are listed in the undercloud.conf file. Only include the subnet CIDRs that correspond to the overcloud nodes. Tip The subnet CIDRs to add are the ones that are described in Installing and configuring the undercloud for RHOSP dynamic routing . Example In this example, the subnets in the undercloud.conf file that correspond to the overcloud nodes are added to the public_network and the cluster_network parameters: When you are ready to deploy Ceph, ensure that you include initial-ceph.conf in the overcloud ceph deploy command. Example Important Dynamic routing is not yet available at this stage of the deployment. Therefore, if Ceph nodes require routing to reach NTP servers, then NTP configuration for Ceph nodes might be delayed. If your site uses NTP servers, add --skip-ntp to the openstack overcloud ceph deploy command. Do not put the Ceph cluster into production until the BGP routes are established so that Ceph can reach the NTP servers that it requires. Until the overcloud is deployed and NTP is configured, a Ceph cluster without NTP can experience a number of anomalies, such as daemons ignoring received messages, outdated timestamps, and timeouts triggered too soon or too late when a message is not received in time. Note the path and file name of the generated output file from running the overcloud ceph deploy command. In this example, the path is /home/stack/templates/overcloud-baremetal-deployed.yaml . You need this information later when you deploy your overcloud. steps Proceed to the step Deploying a spine-leaf enabled overcloud . Additional resources Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director 4.11. Deploying a spine-leaf enabled overcloud The last step in deploying your Red Hat OpenStack Platform (RHOSP) overcloud is to run the overcloud deploy command. Inputs to the command include all of the various overcloud templates and environment files that you constructed. RHOSP director uses these templates and files as a plan for how to install and configure your overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. You have performed all of the steps listed in the earlier procedures in this section and have assembled all of the various heat templates and environment files to use as inputs for the overcloud deploy command. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Collate the custom environment files and custom templates that you need for your overcloud environment. This list includes the unedited heat template files provided with your director installation and the custom files you created. Ensure that you have the paths to the following files: Your custom network definition file that contains the specifications for your spine-leaf network on the overcloud, for example, spine-leaf-networks-data.yaml . For more information, see Defining the leaf networks . Your custom roles data file that defines a role for each leaf. Example: spine-leaf-roles-data.yaml .
For more information, see Defining leaf roles and attaching networks . Your custom environment file that contains the roles and the custom NIC template mappings for each role. Example: spine-leaf-nic-roles-map.yaml . For more information, see Creating a custom NIC configuration for leaf roles . Your custom network environment file that contains the separate network mappings and sets access to the control plane networks for the overcloud. Example: spine-leaf-ctlplane.yaml . For more information, see Configuring the leaf networks . The output file from provisioning your overcloud networks. Example: spine-leaf-networks-provisioned.yaml . For more information, see Provisioning networks and VIPs for the overcloud . The output file from provisioning your overcloud VIPs. Example: spine-leaf-vips-provisioned.yaml . For more information, see Provisioning networks and VIPs for the overcloud . If you are not using pre-provisioned nodes, the output file from provisioning bare-metal nodes. Example: spine-leaf-baremetal-nodes-provisioned.yaml . For more information, see Provisioning bare metal nodes for the overcloud . The output file from the overcloud ceph deploy command. Example: overcloud-baremetal-deployed.yaml . For more information, see Deploying Ceph in your dynamic routing environment . Any other custom environment files. Enter the overcloud deploy command by carefully ordering the custom environment files and custom templates that are inputs to the command. The general rule is to specify any unedited heat template files first, followed by your custom environment files and custom templates that contain custom configurations, such as overrides to the default properties. Follow this order for listing the inputs to the overcloud deploy command: Include your custom environment file that contains your custom NIC templates mapped to each role, for example, spine-leaf-nic-roles-map.yaml , after network-environment.yaml . The network-environment.yaml file provides the default network configuration for composable network parameters, which your mapping file overrides. Note that the director renders this file from the network-environment.j2.yaml Jinja2 template. If you created any other spine-leaf network environment files, include these environment files after the roles-NIC templates mapping file. Add any additional environment files. For example, an environment file with your container image locations or Ceph cluster configuration. Example This excerpt from a sample overcloud deploy command demonstrates the proper ordering of the command's inputs: Tip For more information, see Creating your overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Run the overcloud deploy command. When the overcloud creation is finished, the RHOSP director provides details to help you access your overcloud. Verification Perform the steps in Validating your overcloud deployment in the Installing and managing Red Hat OpenStack Platform with director guide. Additional resources Creating your overcloud in the Installing and managing Red Hat OpenStack Platform with director guide overcloud deploy in the Command line interface reference
[ "source ~/stackrc", "mkdir /home/stack/templates", "cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/ routed-networks.yaml /home/stack/templates/spine-leaf-networks-data.yaml", "- name: InternalApi name_lower: internal_api vip: true mtu: 1500 subnets: internal_api_subnet: ip_subnet: 172.16.32.0/24 gateway_ip: 172.16.32.1 allocation_pools: - start: 172.16.32.4 end: 172.16.32.250 vlan: 20 internal_api_leaf1_subnet: ip_subnet: 172.16.33.0/24 gateway_ip: 172.16.33.1 allocation_pools: - start: 172.16.33.4 end: 172.16.33.250 vlan: 30 internal_api_leaf2_subnet: ip_subnet: 172.16.34.0/24 gateway_ip: 172.16.34.1 allocation_pools: - start: 172.16.34.4 end: 172.16.34.250 vlan: 40", "source ~/stackrc", "cp /usr/share/openstack-tripleo-heat-templates/roles/Controller.yaml ~/roles/Controller0.yaml cp /usr/share/openstack-tripleo-heat-templates/roles/Compute.yaml ~/roles/Compute0.yaml cp /usr/share/openstack-tripleo-heat-templates/roles/CephStorage.yaml ~/roles/CephStorage0.yaml", "cp ~/roles/Controller0.yaml ~/roles/Controller1.yaml cp ~/roles/Controller0.yaml ~/roles/Controller2.yaml cp ~/roles/Compute0.yaml ~/roles/Compute1.yaml cp ~/roles/Compute0.yaml ~/roles/Compute2.yaml cp ~/roles/CephStorage0.yaml ~/roles/CephStorage1.yaml cp ~/roles/CephStorage0.yaml ~/roles/CephStorage2.yaml", "- name: ComputeLeaf0 HostnameFormatDefault: '%stackname%-compute-leaf0-%index%'", "- name: CephStorageLeaf0 HostnameFormatDefault: '%stackname%-cephstorage-leaf0-%index%'", "- name: ComputeLeaf1 networks: InternalApi: subnet: internal_api_leaf1 Tenant: subnet: tenant_leaf1 Storage: subnet: storage_leaf1", "- name: CephStorageLeaf1 networks: Storage: subnet: storage_leaf1 StorageMgmt: subnet: storage_mgmt_leaf1", "- name: ControllerRack1 ServicesDefault: - OS::TripleO::Services::Frr - OS::TripleO::Services::OVNBgpAgent", "openstack overcloud roles generate --roles-path ~/roles -o spine-leaf-roles-data.yaml Controller Compute Compute1 Compute2 CephStorage CephStorage1 CephStorage2", "source ~/stackrc", "cp -r /usr/share/ansible/roles/tripleo_network_config/ templates/single-nic-vlans/* /home/stack/templates/spine-leaf-nics/.", "{% set mtu_list = [ctlplane_mtu] %} {% for network in role_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in role_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %}", "parameter_defaults: %%ROLE%%NetworkConfigTemplate: <path_to_ansible_jinja2_nic_config_file>", "parameter_defaults: Controller0NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' Controller1NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' Controller2NetworkConfigTemplate: 
'/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' ComputeLeaf0NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' ComputeLeaf1NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' ComputeLeaf2NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' CephStorage0NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' CephStorage1NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' CephStorage2NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2'", "source ~/stackrc", "parameter_defaults: NeutronFlatNetworks: provider1 ControllerRack1Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ControllerRack2Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ControllerRack3Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ComputeRack1Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ComputeRack2Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ComputeRack3Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"]", "parameter_defaults: NeutronNetworkType: 'geneve,vlan' NeutronNetworkVLANRanges: 'provider2:1:1000' ControllerRack1Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ControllerRack2Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ControllerRack3Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ComputeRack1Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ComputeRack2Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ComputeRack3Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"]", "parameter_defaults: NeutronNetworkType: 'geneve,vlan' NeutronNetworkVLANRanges: 'provider2:1:1000' ControllerRack1Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ControlPlaneSubnet: r1 ControllerRack2Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ControlPlaneSubnet: r2 ControllerRack3Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ControlPlaneSubnet: r3 ComputeRack1Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ComputeRack2Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ComputeRack3Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"]", "parameter_defaults: DatabaseRack1ExtraGroupVars: tripleo_frr_ovn_bgp_agent_enable: false", "parameter_defaults: NeutronNetworkType: 'geneve,vlan' NeutronNetworkVLANRanges: 'provider2:1:1000' ControllerRack1Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ControlPlaneSubnet: r1 FrrOvnBgpAgentDriver: 'ovn_bgp_driver' FrrOvnBgpAgentExposeTenantNetworks: True OVNCMSOptions: \"enable-chassis-as-gw\" ControllerRack2Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ControlPlaneSubnet: r2 FrrOvnBgpAgentDriver: 'ovn_bgp_driver' FrrOvnBgpAgentExposeTenantNetworks: True OVNCMSOptions: \"enable-chassis-as-gw\" ControllerRack3Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] ControlPlaneSubnet: r3 FrrOvnBgpAgentDriver: 'ovn_bgp_driver' FrrOvnBgpAgentExposeTenantNetworks: True OVNCMSOptions: 
\"enable-chassis-as-gw\" ComputeRack1Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] FrrOvnBgpAgentDriver: 'ovn_bgp_driver' ComputeRack2Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] FrrOvnBgpAgentDriver: 'ovn_bgp_driver' ComputeRack3Parameters: NeutronBridgeMappings: [\"provider1:br-ex\", \"provider2:br-vlan\"] FrrOvnBgpAgentDriver: 'ovn_bgp_driver'", "<ROLENAME>ExtraGroupVars: tripleo_frr_conf_custom_router_bgp: | bgp graceful-restart bgp graceful-restart notification bgp graceful-restart restart-time 60 bgp graceful-restart preserve-fw-state tripleo_frr_zebra_graceful_restart_time: 30", "parameter_defaults: ServiceNetMap: ApacheNetwork: bgp_network NeutronTenantNetwork: bgp_network AodhApiNetwork: bgp_network PankoApiNetwork: bgp_network BarbicanApiNetwork: bgp_network GnocchiApiNetwork: bgp_network MongodbNetwork: bgp_network CinderApiNetwork: bgp_network CinderIscsiNetwork: bgp_network GlanceApiNetwork: bgp_network GlanceApiEdgeNetwork: bgp_network GlanceApiInternalNetwork: bgp_network IronicApiNetwork: bgp_network IronicNetwork: bgp_network IronicInspectorNetwork: bgp_network KeystoneAdminApiNetwork: bgp_network KeystonePublicApiNetwork: bgp_network ManilaApiNetwork: bgp_network NeutronApiNetwork: bgp_network OctaviaApiNetwork: bgp_network HeatApiNetwork: bgp_network HeatApiCfnNetwork: bgp_network HeatApiCloudwatchNetwork: bgp_network NovaApiNetwork: bgp_network PlacementNetwork: bgp_network NovaMetadataNetwork: bgp_network NovaVncProxyNetwork: bgp_network NovaLibvirtNetwork: bgp_network NovajoinNetwork: bgp_network SwiftStorageNetwork: bgp_network SwiftProxyNetwork: bgp_network HorizonNetwork: bgp_network MemcachedNetwork: bgp_network OsloMessagingRpcNetwork: bgp_network OsloMessagingNotifyNetwork: bgp_network RabbitmqNetwork: bgp_network QdrNetwork: bgp_network RedisNetwork: bgp_network GaneshaNetwork: bgp_network MysqlNetwork: bgp_network SnmpdNetwork: bgp_network CephClusterNetwork: bgp_network CephDashboardNetwork: bgp_network CephGrafanaNetwork: bgp_network CephMonNetwork: bgp_network CephRgwNetwork: bgp_network PublicNetwork: bgp_network OpendaylightApiNetwork: bgp_network OvnDbsNetwork: bgp_network DockerRegistryNetwork: ctlplane PacemakerNetwork: bgp_network PacemakerRemoteNetwork: bgp_network DesignateApiNetwork: bgp_network BINDNetwork: bgp_network EtcdNetwork: bgp_network HaproxyNetwork: bgp_network", "source ~/stackrc", "- network: storage_mgmt subnet: storage_mgmt_subnet_leaf1 - network: internal_api subnet: internal_api_subnet_leaf1 - network: storage subnet: storage_subnet_leaf1 - network: external subnet: external_subnet_leaf1 ip_address: 172.20.11.50 - network: ctlplane subnet: leaf0 - network: oc_provisioning subnet: oc_provisioning_subnet_leaf1 - network: storage_nfs subnet: storage_nfs_subnet_leaf1", "source ~/stackrc", "openstack overcloud network provision --output spine-leaf-networks-provisioned.yaml /home/stack/templates/spine-leaf-networks-data.yaml", "openstack overcloud network vip provision --stack spine-leaf-overcloud --output spine-leaf-vips-provisioned.yaml /home/stack/templates/spine-leaf-vip-data.yaml", "openstack network list openstack subnet list openstack network show <network> openstack subnet show <subnet> openstack port list openstack port show <port>", "source ~/stackrc", "nodes: - name: \"node01\" ports: - address: \"aa:aa:aa:aa:aa:aa\" physical_network: ctlplane local_link_connection: switch_id: \"52:54:00:00:00:00\" port_id: p0 cpu: 4 memory: 6144 disk: 40 arch: \"x86_64\" 
pm_type: \"ipmi\" pm_user: \"admin\" pm_password: \"p@55w0rd!\" pm_addr: \"192.168.24.205\" - name: \"node02\" ports: - address: \"bb:bb:bb:bb:bb:bb\" physical_network: ctlplane local_link_connection: switch_id: \"52:54:00:00:00:00\" port_id: p0 cpu: 4 memory: 6144 disk: 40 arch: \"x86_64\" pm_type: \"ipmi\" pm_user: \"admin\" pm_password: \"p@55w0rd!\" pm_addr: \"192.168.24.206\"", "openstack overcloud node import --validate-only ~/templates/ baremetal-nodes.yaml", "openstack overcloud node import ~/baremetal-nodes.yaml", "openstack baremetal node list", "source ~/stackrc", "validation run --group pre-introspection", "validation history get --full <UUID>", "openstack overcloud node introspect --all-manageable --provide", "sudo tail -f /var/log/containers/ironic-inspector/ironic-inspector.log", "source ~/stackrc", "- name: ControllerRack1 count: 1 hostname_format: ctrl-1-%index% defaults: network_config: default_route_network: - ctlplane template: /home/stack/tht/nics_r1.yaml networks: - network: ctlplane vif: true - network: left_network - network: right_network1 - network: main_network - network: main_network_ipv6 instances: - hostname: ctrl-1-0 name: ctrl-1-0 capabilities: node: ctrl-1-0 networks: - network: ctlplane vif: true - network: left_network fixed_ip: 100.65.1.2 subnet: left_network_r1 - network: right_network1 fixed_ip: 100.64.0.2 subnet: right_network1_sub - network: main_network fixed_ip: 172.30.1.1 subnet: main_network_r1 - network: main_network_ipv6 fixed_ip: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0001 subnet: main_network_ipv6_r1 - name: ComputeRack1 count: 2 hostname_format: cmp-1-%index% defaults: network_config: default_route_network: - ctlplane template: /home/stack/tht/nics_r1.yaml networks: - network: ctlplane vif: true - network: left_network - network: right_network1 - network: main_network - network: main_network_ipv6 instances: - hostname: cmp-1-0 name: cmp-1-0 capabilities: node: cmp-1-0 networks: - network: ctlplane vif: true - network: left_network fixed_ip: 100.65.1.6 subnet: left_network_r1 - network: right_network1 fixed_ip: 100.64.0.6 subnet: right_network1_sub - network: main_network fixed_ip: 172.30.1.2 subnet: main_network_r1 - network: main_network_ipv6 fixed_ip: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0004 subnet: main_network_ipv6_r1 - hostname: cmp-1-1 name: cmp-1-1 capabilities: node: cmp-1-1 networks: - network: ctlplane vif: true - network: left_network fixed_ip: 100.65.1.10 subnet: left_network_r1 - network: right_network1 fixed_ip: 100.64.0.10 subnet: right_network1_sub - network: main_network fixed_ip: 172.30.1.3 subnet: main_network_r1 - network: main_network_ipv6 fixed_ip: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0005 subnet: main_network_ipv6_r1 - name: ControllerRack2 count: 1 hostname_format: ctrl-2-%index% defaults: network_config: default_route_network: - ctlplane template: /home/stack/tht/nics_r2.yaml networks: - network: ctlplane vif: true - network: left_network - network: right_network2 - network: main_network - network: main_network_ipv6 instances: - hostname: ctrl-2-0 name: ctrl-2-0 capabilities: node: ctrl-2-0 networks: - network: ctlplane vif: true - network: left_network fixed_ip: 100.65.2.2 subnet: left_network_r2 - network: right_network2 fixed_ip: 100.64.0.2 subnet: right_network2_sub - network: main_network fixed_ip: 172.30.2.1 subnet: main_network_r2 - network: main_network_ipv6 fixed_ip: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0002 subnet: main_network_ipv6_r1 - name: ComputeRack2 count: 2 hostname_format: cmp-2-%index% defaults: 
network_config: default_route_network: - ctlplane template: /home/stack/tht/nics_r2.yaml networks: - network: ctlplane vif: true - network: left_network - network: right_network2 - network: main_network - network: main_network_ipv6 instances: - hostname: cmp-2-0 name: cmp-2-0 capabilities: node: cmp-2-0 networks: - network: ctlplane vif: true - network: left_network fixed_ip: 100.65.2.6 subnet: left_network_r2 - network: right_network2 fixed_ip: 100.64.0.6 subnet: right_network2_sub - network: main_network fixed_ip: 172.30.2.2 subnet: main_network_r2 - network: main_network_ipv6 fixed_ip: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0006 subnet: main_network_ipv6_r1 - hostname: cmp-2-1 name: cmp-2-1 capabilities: node: cmp-2-1 networks: - network: ctlplane vif: true - network: left_network fixed_ip: 100.65.2.10 subnet: left_network_r2 - network: right_network2 fixed_ip: 100.64.0.10 subnet: right_network2_sub - network: main_network fixed_ip: 172.30.2.3 subnet: main_network_r2 - network: main_network_ipv6 fixed_ip: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0007 subnet: main_network_ipv6_r1 - name: ControllerRack3 count: 1 hostname_format: ctrl-3-%index% defaults: network_config: default_route_network: - ctlplane template: /home/stack/tht/nics_r3.yaml networks: - network: ctlplane vif: true - network: left_network - network: right_network3 - network: main_network - network: main_network_ipv6 instances: - hostname: ctrl-3-0 name: ctrl-3-0 capabilities: node: ctrl-3-0 networks: - network: ctlplane vif: true - network: left_network fixed_ip: 100.65.3.2 subnet: left_network_r3 - network: right_network3 fixed_ip: 100.64.0.2 subnet: right_network3_sub - network: main_network fixed_ip: 172.30.3.1 subnet: main_network_r3 - network: main_network_ipv6 fixed_ip: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0003 subnet: main_network_ipv6_r1 - name: ComputeRack3 count: 2 hostname_format: cmp-3-%index% defaults: network_config: default_route_network: - ctlplane template: /home/stack/tht/nics_r3.yaml networks: - network: ctlplane vif: true - network: left_network - network: right_network3 - network: main_network - network: main_network_ipv6 instances: - hostname: cmp-3-0 name: cmp-3-0 capabilities: node: cmp-3-0 networks: - network: ctlplane vif: true - network: left_network fixed_ip: 100.65.3.6 subnet: left_network_r3 - network: right_network3 fixed_ip: 100.64.0.6 subnet: right_network3_sub - network: main_network fixed_ip: 172.30.3.2 subnet: main_network_r3 - network: main_network_ipv6 fixed_ip: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0008 subnet: main_network_ipv6_r1 - hostname: cmp-3-1 name: cmp-3-1 capabilities: node: cmp-3-1 networks: - network: ctlplane vif: true - network: left_network fixed_ip: 100.65.3.10 subnet: left_networ10_r3 - network: right_network3 fixed_ip: 100.64.0.10 subnet: right_network3_sub - network: main_network fixed_ip: 172.30.3.3 subnet: main_network_r3 - network: main_network_ipv6 fixed_ip: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0009 subnet: main_network_ipv6_r1", "openstack overcloud node provision --stack spine_leaf_overcloud --network-config --output spine-leaf-baremetal-nodes-provisioned.yaml /home/stack/templates/spine-leaf-baremetal-nodes.yaml", "watch openstack baremetal node list", "metalsmith list", "openstack baremetal allocation list", "echo \"[global]\" > initial-ceph.conf", "[global] public_network=\"192.168.1.0/24,192.168.2.0/24,192.168.3.0/24\" cluster_network=\"192.168.1.0/24,192.168.2.0/24,192.168.3.0/24\"", "openstack overcloud ceph deploy --config initial-ceph.conf <other_arguments> 
/home/stack/templates/overcloud-baremetal-deployed.yaml", "source ~/stackrc", "openstack overcloud deploy --templates -n /home/stack/templates/spine-leaf-networks-data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/frr.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ovn-bgp-agent.yaml -e /home/stack/templates/spine-leaf-nic-roles-map.yaml -e /home/stack/templates/spine-leaf-ctlplane.yaml -e /home/stack/templates/spine-leaf-baremetal-provisioned.yaml -e /home/stack/templates/spine-leaf-networks-provisioned.yaml -e /home/stack/templates/spine-leaf-vips-provisioned.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/containers-prepare-parameter.yaml -e /home/stack/inject-trust-anchor-hiera.yaml -r /home/stack/templates/spine-leaf-roles-data.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_dynamic_routing_in_red_hat_openstack_platform/deploy-overcloud-rhosp-dynamic-routing_rhosp-bgp
20.34. Listing Volume Information
20.34. Listing Volume Information The virsh vol-info vol command lists basic information about the given storage volume. You must supply either the storage volume name, key, or path. The command also accepts the option --pool , where you can specify the storage pool that is associated with the storage volume. You can supply either the pool name or the UUID. Example 20.94. How to view information about a storage volume The following example retrieves information about the storage volume named vol-new . When you run this command, change the name of the storage volume to the name of your storage volume: The virsh vol-list pool command lists all of the volumes that are associated with a given storage pool. This command requires a name or UUID of the storage pool. The --details option instructs virsh to additionally display volume type and capacity-related information where available. Example 20.95. How to display the storage volumes that are associated with a storage pool The following example lists all storage volumes that are associated with the storage pool vdisk :
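The output shown below is an illustrative sketch only; the volume name, path, sizes, and column spacing are hypothetical and will differ on your system:

# virsh vol-info vol-new
Name:           vol-new
Type:           file
Capacity:       20.00 GiB
Allocation:     2.54 GiB

# virsh vol-list vdisk --details
 Name      Path                                    Type   Capacity    Allocation
----------------------------------------------------------------------------------
 vol-new   /var/lib/libvirt/images/vol-new.qcow2   file   20.00 GiB   2.54 GiB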
[ "virsh vol-info vol-new", "virsh vol-list vdisk" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Storage_Volume_Commands-Listing_volume_information
C.9. Glock Statistics
C.9. Glock Statistics GFS2 maintains statistics that can help track what is going on within the file system. This allows you to spot performance issues. GFS2 maintains two counters: dcount , which counts the number of DLM operations requested. This shows how much data has gone into the mean/variance calculations. qcount , which counts the number of syscall level operations requested. Generally qcount will be equal to or greater than dcount . In addition, GFS2 maintains three mean/variance pairs. The mean/variance pairs are smoothed exponential estimates and the algorithm used is the one used to calculate round trip times in network code. The mean and variance pairs maintained in GFS2 are not scaled, but are in units of integer nanoseconds. srtt/srttvar: Smoothed round trip time for non-blocking operations srttb/srttvarb: Smoothed round trip time for blocking operations irtt/irttvar: Inter-request time (for example, time between DLM requests) A non-blocking request is one which will complete right away, whatever the state of the DLM lock in question. That currently means any requests when (a) the current state of the lock is exclusive (b) the requested state is either null or unlocked or (c) the "try lock" flag is set. A blocking request covers all the other lock requests. Larger times are better for IRTTs, whereas smaller times are better for the RTTs. Statistics are kept in two sysfs files: The glstats file. This file is similar to the glocks file, except that it contains statistics, with one glock per line. The data is initialized from "per cpu" data for that glock type for which the glock is created (aside from counters, which are zeroed). This file may be very large. The lkstats file. This contains "per cpu" stats for each glock type. It contains one statistic per line, in which each column is a cpu core. There are eight lines per glock type, with types following on from each other.
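As a minimal sketch of how to read these statistics, assume a GFS2 file system named mycluster:mygfs2 and that the statistics files are exposed under /sys/kernel/debug/gfs2/ (the debugfs location used on most kernels); both the file system name and the mount point are assumptions that you must adjust for your system:

# Mount debugfs if it is not already available
mount -t debugfs none /sys/kernel/debug

# Per-glock statistics, one glock per line (this file can be very large)
cat /sys/kernel/debug/gfs2/mycluster:mygfs2/glstats

# Per-CPU statistics aggregated by glock type, eight lines per type
cat /sys/kernel/debug/gfs2/mycluster:mygfs2/lkstats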
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/ap-glockstats-gfs2
2.2.4.3. Beware of Syntax Errors
2.2.4.3. Beware of Syntax Errors The NFS server determines which file systems to export and which hosts to export these directories to by consulting the /etc/exports file. Be careful not to add extraneous spaces when editing this file. For instance, the following line in the /etc/exports file shares the directory /tmp/nfs/ to the host bob.example.com with read/write permissions. The following line in the /etc/exports file, on the other hand, shares the same directory to the host bob.example.com with read-only permissions and shares it to the world with read/write permissions due to a single space character after the hostname. It is good practice to check any configured NFS shares by using the showmount command to verify what is being shared: showmount -e <hostname>
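For example, a hypothetical check against a server named nfs.example.com might return output along these lines; with the misconfigured entry above, the export list shows that the directory is exported to all hosts rather than only to bob.example.com (the exact output format varies between versions):

# Query the export list; an entry shown as (everyone) or * indicates that
# the directory is exported to all hosts, not just bob.example.com.
showmount -e nfs.example.com
Export list for nfs.example.com:
/tmp/nfs (everyone)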
[ "/tmp/nfs/ bob.example.com(rw)", "/tmp/nfs/ bob.example.com (rw)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-securing_nfs-beware_of_syntax_errors
Chapter 7. PrometheusRule [monitoring.coreos.com/v1]
Chapter 7. PrometheusRule [monitoring.coreos.com/v1] Description PrometheusRule defines recording and alerting rules for a Prometheus instance Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of desired alerting rule definitions for Prometheus. 7.1.1. .spec Description Specification of desired alerting rule definitions for Prometheus. Type object Property Type Description groups array Content of Prometheus rule file groups[] object RuleGroup is a list of sequentially evaluated recording and alerting rules. Note: PartialResponseStrategy is only used by ThanosRuler and will be ignored by Prometheus instances. Valid values for this field are 'warn' or 'abort'. More info: https://github.com/thanos-io/thanos/blob/main/docs/components/rule.md#partial-response 7.1.2. .spec.groups Description Content of Prometheus rule file Type array 7.1.3. .spec.groups[] Description RuleGroup is a list of sequentially evaluated recording and alerting rules. Note: PartialResponseStrategy is only used by ThanosRuler and will be ignored by Prometheus instances. Valid values for this field are 'warn' or 'abort'. More info: https://github.com/thanos-io/thanos/blob/main/docs/components/rule.md#partial-response Type object Required name rules Property Type Description interval string name string partial_response_strategy string rules array rules[] object Rule describes an alerting or recording rule See Prometheus documentation: [alerting]( https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ ) or [recording]( https://www.prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules ) rule 7.1.4. .spec.groups[].rules Description Type array 7.1.5. .spec.groups[].rules[] Description Rule describes an alerting or recording rule See Prometheus documentation: [alerting]( https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ ) or [recording]( https://www.prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules ) rule Type object Required expr Property Type Description alert string annotations object (string) expr integer-or-string for string labels object (string) record string 7.2. 
API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/prometheusrules GET : list objects of kind PrometheusRule /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules DELETE : delete collection of PrometheusRule GET : list objects of kind PrometheusRule POST : create a PrometheusRule /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules/{name} DELETE : delete a PrometheusRule GET : read the specified PrometheusRule PATCH : partially update the specified PrometheusRule PUT : replace the specified PrometheusRule 7.2.1. /apis/monitoring.coreos.com/v1/prometheusrules Table 7.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind PrometheusRule Table 7.2. HTTP responses HTTP code Reponse body 200 - OK PrometheusRuleList schema 401 - Unauthorized Empty 7.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules Table 7.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PrometheusRule Table 7.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PrometheusRule Table 7.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.8. HTTP responses HTTP code Reponse body 200 - OK PrometheusRuleList schema 401 - Unauthorized Empty HTTP method POST Description create a PrometheusRule Table 7.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.10. Body parameters Parameter Type Description body PrometheusRule schema Table 7.11. HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 201 - Created PrometheusRule schema 202 - Accepted PrometheusRule schema 401 - Unauthorized Empty 7.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules/{name} Table 7.12. Global path parameters Parameter Type Description name string name of the PrometheusRule namespace string object name and auth scope, such as for teams and projects Table 7.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PrometheusRule Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. 
zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.15. Body parameters Parameter Type Description body DeleteOptions schema Table 7.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PrometheusRule Table 7.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.18. HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PrometheusRule Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body Patch schema Table 7.21. 
HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PrometheusRule Table 7.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.23. Body parameters Parameter Type Description body PrometheusRule schema Table 7.24. HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 201 - Created PrometheusRule schema 401 - Unauthorized Empty
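The list and create operations documented in the tables above can be exercised directly against the /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules endpoint. The following is an illustrative sketch only: the openshift-monitoring namespace, the chunk size of 5, and the rule contents are assumptions chosen for the example rather than values required by the API, and oc create --dry-run=server simply exercises the dryRun behavior described above without persisting the object.
# Paginated list: request at most 5 items, then follow the token returned in metadata.continue
oc get --raw "/apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/prometheusrules?limit=5"
oc get --raw "/apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/prometheusrules?limit=5&continue=<token-from-previous-response>"
# Server-side dry run of a create (POST): the object is validated but not persisted
cat << EOF | oc create --dry-run=server -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rule
  namespace: openshift-monitoring
spec:
  groups:
  - name: example.rules
    rules:
    - alert: ExampleAlwaysFiring
      expr: vector(1)
EOF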
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring_apis/prometheusrule-monitoring-coreos-com-v1
Chapter 6. Attaching your Red Hat Ansible Automation Platform subscription
Chapter 6. Attaching your Red Hat Ansible Automation Platform subscription You must have valid subscriptions attached to all nodes before installing Red Hat Ansible Automation Platform. Attaching your Ansible Automation Platform subscription allows you to access subscription-only resources necessary to proceed with the installation. Note Attaching a subscription is unnecessary if you have enabled Simple Content Access Mode on your Red Hat account. Once enabled, you will need to register your systems to either Red Hat Subscription Management (RHSM) or Satellite before installing the Ansible Automation Platform. See Simple Content Access Mode for more information. Procedure Obtain the pool_id for your Red Hat Ansible Automation Platform subscription: # subscription-manager list --available --all | grep "Ansible Automation Platform" -B 3 -A 6 Note Do not use MCT4022 as a pool_id for your subscription because it can cause Ansible Automation Platform subscription attachment to fail. Example An example output of the subscription-manager list command. Obtain the pool_id as seen in the Pool ID: section: Subscription Name: Red Hat Ansible Automation, Premium (5000 Managed Nodes) Provides: Red Hat Ansible Engine Red Hat Ansible Automation Platform SKU: MCT3695 Contract: ```` Pool ID: <pool_id> Provides Management: No Available: 4999 Suggested: 1 Attach the subscription: # subscription-manager attach --pool=<pool_id> You have now attached your Red Hat Ansible Automation Platform subscriptions to all nodes. Verification Verify the subscription was successfully attached: # subscription-manager list --consumed Troubleshooting If you are unable to locate certain packages that came bundled with the Ansible Automation Platform installer, or if you are seeing a Repositories disabled by configuration message, try enabling the repository using the command: Red Hat Ansible Automation Platform 2.3 for RHEL 8 subscription-manager repos --enable ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms Red Hat Ansible Automation Platform 2.3 for RHEL 9 subscription-manager repos --enable ansible-automation-platform-2.3-for-rhel-9-x86_64-rpms
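The individual commands above can also be chained into a single snippet. The following is a non-authoritative sketch, run as root; the grep and awk extraction of the Pool ID assumes the output format shown in the example above and may need adjusting if your subscription names differ.
# Capture the pool ID for the Ansible Automation Platform subscription, attach it, then confirm
POOL_ID=$(subscription-manager list --available --all | grep -A 6 "Ansible Automation Platform" | awk '/Pool ID:/ {print $3}' | head -n 1)
subscription-manager attach --pool="${POOL_ID}"
subscription-manager list --consumed | grep "Ansible Automation Platform" -B 3 -A 6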
[ "subscription-manager list --available --all | grep \"Ansible Automation Platform\" -B 3 -A 6", "Subscription Name: Red Hat Ansible Automation, Premium (5000 Managed Nodes) Provides: Red Hat Ansible Engine Red Hat Ansible Automation Platform SKU: MCT3695 Contract: ```` Pool ID: <pool_id> Provides Management: No Available: 4999 Suggested: 1", "subscription-manager attach --pool=<pool_id>", "subscription-manager list --consumed", "subscription-manager repos --enable ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable ansible-automation-platform-2.3-for-rhel-9-x86_64-rpms" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_planning_guide/proc-attaching-subscriptions_planning
Chapter 30. Load balancing with MetalLB
Chapter 30. Load balancing with MetalLB 30.1. About MetalLB and the MetalLB Operator As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add an external IP address for the service. The external IP address is added to the host network for your cluster. 30.1.1. When to use MetalLB Using MetalLB is valuable when you have a bare-metal cluster, or an infrastructure that is like bare metal, and you want fault-tolerant access to an application through an external IP address. You must configure your networking infrastructure to ensure that network traffic for the external IP address is routed from clients to the host network for the cluster. After deploying MetalLB with the MetalLB Operator, when you add a service of type LoadBalancer , MetalLB provides a platform-native load balancer. MetalLB operating in layer2 mode provides support for failover by utilizing a mechanism similar to IP failover. However, instead of relying on the virtual router redundancy protocol (VRRP) and keepalived, MetalLB leverages a gossip-based protocol to identify instances of node failure. When a failover is detected, another node assumes the role of the leader node, and a gratuitous ARP message is dispatched to broadcast this change. MetalLB operating in layer3 or border gateway protocol (BGP) mode delegates failure detection to the network. The BGP router or routers that the OpenShift Container Platform nodes have established a connection with will identify any node failure and terminate the routes to that node. Using MetalLB instead of IP failover is preferable for ensuring high availability of pods and services. 30.1.2. MetalLB Operator custom resources The MetalLB Operator monitors its own namespace for the following custom resources: MetalLB When you add a MetalLB custom resource to the cluster, the MetalLB Operator deploys MetalLB on the cluster. The Operator only supports a single instance of the custom resource. If the instance is deleted, the Operator removes MetalLB from the cluster. IPAddressPool MetalLB requires one or more pools of IP addresses that it can assign to a service when you add a service of type LoadBalancer . An IPAddressPool includes a list of IP addresses. The list can be a single IP address that is set using a range, such as 1.1.1.1-1.1.1.1, a range specified in CIDR notation, a range specified as a starting and ending address separated by a hyphen, or a combination of the three. An IPAddressPool requires a name. The documentation uses names like doc-example , doc-example-reserved , and doc-example-ipv6 . An IPAddressPool assigns IP addresses from the pool. L2Advertisement and BGPAdvertisement custom resources enable the advertisement of a given IP from a given pool. Note A single IPAddressPool can be referenced by a L2 advertisement and a BGP advertisement. BGPPeer The BGP peer custom resource identifies the BGP router for MetalLB to communicate with, the AS number of the router, the AS number for MetalLB, and customizations for route advertisement. MetalLB advertises the routes for service load-balancer IP addresses to one or more BGP peers. BFDProfile The BFD profile custom resource configures Bidirectional Forwarding Detection (BFD) for a BGP peer. BFD provides faster path failure detection than BGP alone provides. L2Advertisement The L2Advertisement custom resource advertises an IP coming from an IPAddressPool using the L2 protocol. 
BGPAdvertisement The BGPAdvertisement custom resource advertises an IP coming from an IPAddressPool using the BGP protocol. After you add the MetalLB custom resource to the cluster and the Operator deploys MetalLB, the controller and speaker MetalLB software components begin running. MetalLB validates all relevant custom resources. 30.1.3. MetalLB software components When you install the MetalLB Operator, the metallb-operator-controller-manager deployment starts a pod. The pod is the implementation of the Operator. The pod monitors for changes to all the relevant resources. When the Operator starts an instance of MetalLB, it starts a controller deployment and a speaker daemon set. controller The Operator starts the deployment and a single pod. When you add a service of type LoadBalancer , Kubernetes uses the controller to allocate an IP address from an address pool. In case of a service failure, verify you have the following entry in your controller pod logs: Example output "event":"ipAllocated","ip":"172.22.0.201","msg":"IP address assigned by controller speaker The Operator starts a daemon set for speaker pods. By default, a pod is started on each node in your cluster. You can limit the pods to specific nodes by specifying a node selector in the MetalLB custom resource when you start MetalLB. If the controller allocated the IP address to the service and service is still unavailable, read the speaker pod logs. If the speaker pod is unavailable, run the oc describe pod -n command. For layer 2 mode, after the controller allocates an IP address for the service, the speaker pods use an algorithm to determine which speaker pod on which node will announce the load balancer IP address. The algorithm involves hashing the node name and the load balancer IP address. For more information, see "MetalLB and external traffic policy". The speaker uses Address Resolution Protocol (ARP) to announce IPv4 addresses and Neighbor Discovery Protocol (NDP) to announce IPv6 addresses. For Border Gateway Protocol (BGP) mode, after the controller allocates an IP address for the service, each speaker pod advertises the load balancer IP address with its BGP peers. You can configure which nodes start BGP sessions with BGP peers. Requests for the load balancer IP address are routed to the node with the speaker that announces the IP address. After the node receives the packets, the service proxy routes the packets to an endpoint for the service. The endpoint can be on the same node in the optimal case, or it can be on another node. The service proxy chooses an endpoint each time a connection is established. 30.1.4. MetalLB and external traffic policy With layer 2 mode, one node in your cluster receives all the traffic for the service IP address. With BGP mode, a router on the host network opens a connection to one of the nodes in the cluster for a new client connection. How your cluster handles the traffic after it enters the node is affected by the external traffic policy. cluster This is the default value for spec.externalTrafficPolicy . With the cluster traffic policy, after the node receives the traffic, the service proxy distributes the traffic to all the pods in your service. This policy provides uniform traffic distribution across the pods, but it obscures the client IP address and it can appear to the application in your pods that the traffic originates from the node rather than the client. 
local With the local traffic policy, after the node receives the traffic, the service proxy only sends traffic to the pods on the same node. For example, if the speaker pod on node A announces the external service IP, then all traffic is sent to node A. After the traffic enters node A, the service proxy only sends traffic to pods for the service that are also on node A. Pods for the service that are on additional nodes do not receive any traffic from node A. Pods for the service on additional nodes act as replicas in case failover is needed. This policy does not affect the client IP address. Application pods can determine the client IP address from the incoming connections. Note The following information is important when configuring the external traffic policy in BGP mode. Although MetalLB advertises the load balancer IP address from all the eligible nodes, the number of nodes loadbalancing the service can be limited by the capacity of the router to establish equal-cost multipath (ECMP) routes. If the number of nodes advertising the IP is greater than the ECMP group limit of the router, the router will use less nodes than the ones advertising the IP. For example, if the external traffic policy is set to local and the router has an ECMP group limit set to 16 and the pods implementing a LoadBalancer service are deployed on 30 nodes, this would result in pods deployed on 14 nodes not receiving any traffic. In this situation, it would be preferable to set the external traffic policy for the service to cluster . 30.1.5. MetalLB concepts for layer 2 mode In layer 2 mode, the speaker pod on one node announces the external IP address for a service to the host network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface. Note In layer 2 mode, MetalLB relies on ARP and NDP. These protocols implement local address resolution within a specific subnet. In this context, the client must be able to reach the VIP assigned by MetalLB that exists on the same subnet as the nodes announcing the service in order for MetalLB to work. The speaker pod responds to ARP requests for IPv4 services and NDP requests for IPv6. In layer 2 mode, all traffic for a service IP address is routed through one node. After traffic enters the node, the service proxy for the CNI network provider distributes the traffic to all the pods for the service. Because all traffic for a service enters through a single node in layer 2 mode, in a strict sense, MetalLB does not implement a load balancer for layer 2. Rather, MetalLB implements a failover mechanism for layer 2 so that when a speaker pod becomes unavailable, a speaker pod on a different node can announce the service IP address. When a node becomes unavailable, failover is automatic. The speaker pods on the other nodes detect that a node is unavailable and a new speaker pod and node take ownership of the service IP address from the failed node. The preceding graphic shows the following concepts related to MetalLB: An application is available through a service that has a cluster IP on the 172.130.0.0/16 subnet. That IP address is accessible from inside the cluster. The service also has an external IP address that MetalLB assigned to the service, 192.168.100.200 . Nodes 1 and 3 have a pod for the application. The speaker daemon set runs a pod on each node. The MetalLB Operator starts these pods. Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network. 
The speaker pod on node 1 uses ARP to announce the external IP address for the service, 192.168.100.200 . The speaker pod that announces the external IP address must be on the same node as an endpoint for the service and the endpoint must be in the Ready condition. Client traffic is routed to the host network and connects to the 192.168.100.200 IP address. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service. If the external traffic policy for the service is set to cluster , the node that advertises the 192.168.100.200 load balancer IP address is selected from the nodes where a speaker pod is running. Only that node can receive traffic for the service. If the external traffic policy for the service is set to local , the node that advertises the 192.168.100.200 load balancer IP address is selected from the nodes where a speaker pod is running and at least an endpoint of the service. Only that node can receive traffic for the service. In the preceding graphic, either node 1 or 3 would advertise 192.168.100.200 . If node 1 becomes unavailable, the external IP address fails over to another node. On another node that has an instance of the application pod and service endpoint, the speaker pod begins to announce the external IP address, 192.168.100.200 and the new node receives the client traffic. In the diagram, the only candidate is node 3. 30.1.6. MetalLB concepts for BGP mode In BGP mode, by default each speaker pod advertises the load balancer IP address for a service to each BGP peer. It is also possible to advertise the IPs coming from a given pool to a specific set of peers by adding an optional list of BGP peers. BGP peers are commonly network routers that are configured to use the BGP protocol. When a router receives traffic for the load balancer IP address, the router picks one of the nodes with a speaker pod that advertised the IP address. The router sends the traffic to that node. After traffic enters the node, the service proxy for the CNI network provider distributes the traffic to all the pods for the service. The directly-connected router on the same layer 2 network segment as the cluster nodes can be configured as a BGP peer. If the directly-connected router is not configured as a BGP peer, you need to configure your network so that packets for load balancer IP addresses are routed between the BGP peers and the cluster nodes that run the speaker pods. Each time a router receives new traffic for the load balancer IP address, it creates a new connection to a node. Each router manufacturer has an implementation-specific algorithm for choosing which node to initiate the connection with. However, the algorithms commonly are designed to distribute traffic across the available nodes for the purpose of balancing the network load. If a node becomes unavailable, the router initiates a new connection with another node that has a speaker pod that advertises the load balancer IP address. Figure 30.1. MetalLB topology diagram for BGP mode The preceding graphic shows the following concepts related to MetalLB: An application is available through a service that has an IPv4 cluster IP on the 172.130.0.0/16 subnet. That IP address is accessible from inside the cluster. The service also has an external IP address that MetalLB assigned to the service, 203.0.113.200 . Nodes 2 and 3 have a pod for the application. The speaker daemon set runs a pod on each node. 
The MetalLB Operator starts these pods. You can configure MetalLB to specify which nodes run the speaker pods. Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network. Each speaker pod starts a BGP session with all BGP peers and advertises the load balancer IP addresses or aggregated routes to the BGP peers. The speaker pods advertise that they are part of Autonomous System 65010. The diagram shows a router, R1, as a BGP peer within the same Autonomous System. However, you can configure MetalLB to start BGP sessions with peers that belong to other Autonomous Systems. All the nodes with a speaker pod that advertises the load balancer IP address can receive traffic for the service. If the external traffic policy for the service is set to cluster , all the nodes where a speaker pod is running advertise the 203.0.113.200 load balancer IP address and all the nodes with a speaker pod can receive traffic for the service. The host prefix is advertised to the router peer only if the external traffic policy is set to cluster. If the external traffic policy for the service is set to local , then all the nodes where a speaker pod is running and at least an endpoint of the service is running can advertise the 203.0.113.200 load balancer IP address. Only those nodes can receive traffic for the service. In the preceding graphic, nodes 2 and 3 would advertise 203.0.113.200 . You can configure MetalLB to control which speaker pods start BGP sessions with specific BGP peers by specifying a node selector when you add a BGP peer custom resource. Any routers, such as R1, that are configured to use BGP can be set as BGP peers. Client traffic is routed to one of the nodes on the host network. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service. If a node becomes unavailable, the router detects the failure and initiates a new connection with another node. You can configure MetalLB to use a Bidirectional Forwarding Detection (BFD) profile for BGP peers. BFD provides faster link failure detection so that routers can initiate new connections earlier than without BFD. 30.1.7. Limitations and restrictions 30.1.7.1. Infrastructure considerations for MetalLB MetalLB is primarily useful for on-premise, bare metal installations because these installations do not include a native load-balancer capability. In addition to bare metal installations, installations of OpenShift Container Platform on some infrastructures might not include a native load-balancer capability. For example, the following infrastructures can benefit from adding the MetalLB Operator: Bare metal VMware vSphere MetalLB Operator and MetalLB are supported with the OpenShift SDN and OVN-Kubernetes network providers. 30.1.7.2. Limitations for layer 2 mode 30.1.7.2.1. Single-node bottleneck MetalLB routes all traffic for a service through a single node, the node can become a bottleneck and limit performance. Layer 2 mode limits the ingress bandwidth for your service to the bandwidth of a single node. This is a fundamental limitation of using ARP and NDP to direct traffic. 30.1.7.2.2. Slow failover performance Failover between nodes depends on cooperation from the clients. When a failover occurs, MetalLB sends gratuitous ARP packets to notify clients that the MAC address associated with the service IP has changed. 
Most client operating systems handle gratuitous ARP packets correctly and update their neighbor caches promptly. When clients update their caches quickly, failover completes within a few seconds. Clients typically fail over to a new node within 10 seconds. However, some client operating systems either do not handle gratuitous ARP packets at all or have outdated implementations that delay the cache update. Recent versions of common operating systems such as Windows, macOS, and Linux implement layer 2 failover correctly. Issues with slow failover are not expected except for older and less common client operating systems. To minimize the impact from a planned failover on outdated clients, keep the old node running for a few minutes after flipping leadership. The old node can continue to forward traffic for outdated clients until their caches refresh. During an unplanned failover, the service IPs are unreachable until the outdated clients refresh their cache entries. 30.1.7.2.3. Additional Network and MetalLB cannot use the same network Using the same VLAN for both MetalLB and an additional network interface set up on a source pod might result in a connection failure. This occurs when both the MetalLB IP and the source pod reside on the same node. To avoid connection failures, place the MetalLB IP in a different subnet from the one where the source pod resides. This configuration ensures that traffic from the source pod will take the default gateway. Consequently, the traffic can effectively reach its destination by using the OVN overlay network, ensuring that the connection functions as intended. 30.1.7.3. Limitations for BGP mode 30.1.7.3.1. Node failure can break all active connections MetalLB shares a limitation that is common to BGP-based load balancing. When a BGP session terminates, such as when a node fails or when a speaker pod restarts, the session termination might result in resetting all active connections. End users can experience a Connection reset by peer message. The consequence of a terminated BGP session is implementation-specific for each router manufacturer. However, you can anticipate that a change in the number of speaker pods affects the number of BGP sessions and that active connections with BGP peers will break. To avoid or reduce the likelihood of a service interruption, you can specify a node selector when you add a BGP peer. By limiting the number of nodes that start BGP sessions, a fault on a node that does not have a BGP session has no effect on connections to the service. 30.1.7.3.2. Support for a single ASN and a single router ID only When you add a BGP peer custom resource, you specify the spec.myASN field to identify the Autonomous System Number (ASN) that MetalLB belongs to. OpenShift Container Platform uses an implementation of BGP with MetalLB that requires MetalLB to belong to a single ASN. If you attempt to add a BGP peer and specify a different value for spec.myASN than an existing BGP peer custom resource, you receive an error. Similarly, when you add a BGP peer custom resource, the spec.routerID field is optional. If you specify a value for this field, you must specify the same value for all other BGP peer custom resources that you add. The limitation to support a single ASN and single router ID is a difference from the community-supported implementation of MetalLB. 30.1.8. Additional resources Comparison: Fault tolerant access to external IP addresses Removing IP failover 30.2.
Installing the MetalLB Operator As a cluster administrator, you can add the MetallB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster. MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to remove IP failover before you install the Operator. 30.2.1. Installing the MetalLB Operator from the OperatorHub using the web console As a cluster administrator, you can install the MetalLB Operator by using the OpenShift Container Platform web console. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Type a keyword into the Filter by keyword box or scroll to find the Operator you want. For example, type metallb to find the MetalLB Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. On the Install Operator page, accept the defaults and click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-operators namespace and that its status is Succeeded . If the Operator is not installed successfully, check the status of the Operator and review the logs: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-operators project that are reporting issues. 30.2.2. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. You can use the OpenShift CLI ( oc ) to install the MetalLB Operator. It is recommended that when using the CLI you install the Operator in the metallb-system namespace. Prerequisites A cluster installed on bare-metal hardware. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the MetalLB Operator by entering the following command: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF Create an Operator group custom resource (CR) in the namespace: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system EOF Confirm the Operator group is installed in the namespace: USD oc get operatorgroup -n metallb-system Example output NAME AGE metallb-operator 14m Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, metallb-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators 1 sourceNamespace: openshift-marketplace 1 You must specify the redhat-operators value. To create the Subscription CR, run the following command: USD oc create -f metallb-sub.yaml Optional: To ensure BGP and BFD metrics appear in Prometheus, you can label the namespace as in the following command: USD oc label ns metallb-system "openshift.io/cluster-monitoring=true" Verification The verification steps assume the MetalLB Operator is installed in the metallb-system namespace. 
Confirm the install plan is in the namespace: USD oc get installplan -n metallb-system Example output NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.11.0-nnnnnnnnnnnn Automatic true Note Installation of the Operator might take a few seconds. To verify that the Operator is installed, enter the following command: USD oc get clusterserviceversion -n metallb-system \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase metallb-operator.4.11.0-nnnnnnnnnnnn Succeeded 30.2.3. Starting MetalLB on your cluster After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the MetalLB Operator. Procedure This procedure assumes the MetalLB Operator is installed in the metallb-system namespace. If you installed using the web console substitute openshift-operators for the namespace. Create a single instance of a MetalLB custom resource: USD cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF Verification Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running. Verify that the deployment for the controller is running: USD oc get deployment -n metallb-system controller Example output NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m Verify that the daemon set for the speaker is running: USD oc get daemonset -n metallb-system speaker Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster. 30.2.3.1. Limit speaker pods to specific nodes By default, when you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. Only the nodes with a speaker pod can advertise a load balancer IP address. You can configure the MetalLB custom resource with a node selector to specify which nodes run the speaker pods. The most common reason to limit the speaker pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses. Only the nodes with a running speaker pod are advertised as destinations of the load balancer IP address. If you limit the speaker pods to specific nodes and specify local for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes. Example configuration to limit speaker pods to worker nodes apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: <.> node-role.kubernetes.io/worker: "" speakerTolerations: <.> - key: "Example" operator: "Exists" effect: "NoExecute" <.> The example configuration specifies to assign the speaker pods to worker nodes, but you can specify labels that you assigned to nodes or any valid node selector. <.> In this example configuration, the pod that this toleration is attached to tolerates any taint that matches the key value and effect value using the operator . 
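A hedged convenience sketch follows, showing one way to apply a configuration like the example above and confirm where the speaker pods land; the file name metallb-nodeselector.yaml is illustrative, and the verification commands simply restate the checks described in the next paragraph.
# Apply the MetalLB custom resource that carries the node selector
oc apply -f metallb-nodeselector.yaml
# The speaker daemon set should report one pod per matching node
oc get daemonset -n metallb-system speaker
# Compare the nodes that match the label with the nodes actually running speaker pods
oc get nodes -l node-role.kubernetes.io/worker=
oc get pods -n metallb-system -o wide | grep speaker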
After you apply a manifest with the spec.nodeSelector field, you can check the number of pods that the Operator deployed with the oc get daemonset -n metallb-system speaker command. Similarly, you can display the nodes that match your labels with a command like oc get nodes -l node-role.kubernetes.io/worker= . You can optionally control which nodes the speaker pods should, or should not, be scheduled on by using affinity rules. You can also limit these pods by applying a list of tolerations. For more information about affinity rules, taints, and tolerations, see the additional resources. 30.2.4. Additional resources Placing pods on specific nodes using node selectors . Understanding taints and tolerations . 30.2.5. Next steps Configuring MetalLB address pools 30.3. Upgrading the MetalLB Operator The automatic upgrade procedure does not work as expected from OpenShift Container Platform 4.10 and earlier. A summary of the upgrade procedure is as follows: Delete the previously installed Operator version, for example 4.10. Ensure that the namespace and the metallb custom resource are not removed. Install the 4.11 version of the Operator using the CLI. Install the 4.11 version of the Operator in the same namespace that the previously installed Operator version was installed to. Note This procedure does not apply to automatic z-stream updates of the MetalLB Operator, which follow the standard straightforward method. For detailed steps to upgrade the MetalLB Operator from 4.10 and earlier, see the guidance that follows. As a cluster administrator, start the upgrade process by deleting the MetalLB Operator by using the OpenShift CLI ( oc ) or the web console. 30.3.1. Deleting the MetalLB Operator from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Search for the MetalLB Operator. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 30.3.2. Deleting MetalLB Operator from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. oc command installed on your workstation.
Procedure Check the current version of the subscribed MetalLB Operator in the currentCSV field: USD oc get subscription metallb-operator -n metallb-system -o yaml | grep currentCSV Example output currentCSV: metallb-operator.4.10.0-202207051316 Delete the subscription: USD oc delete subscription metallb-operator -n metallb-system Example output subscription.operators.coreos.com "metallb-operator" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step: USD oc delete clusterserviceversion metallb-operator.4.10.0-202207051316 -n metallb-system Example output clusterserviceversion.operators.coreos.com "metallb-operator.4.10.0-202207051316" deleted 30.3.3. Editing the MetalLB Operator Operator group When upgrading from any MetalLB Operator version up to and including 4.10 to 4.11 and later, remove spec.targetNamespaces from the Operator group custom resource (CR). You must remove the spec regardless of whether you used the web console or the CLI to delete the MetalLB Operator. Note The MetalLB Operator version 4.11 or later only supports the AllNamespaces install mode, whereas 4.10 or earlier versions support OwnNamespace or SingleNamespace modes. Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure List the Operator groups in the metallb-system namespace by running the following command: USD oc get operatorgroup -n metallb-system Example output NAME AGE metallb-system-7jc66 85m Verify that the spec.targetNamespaces is present in the Operator group CR associated with the metallb-system namespace by running the following command: USD oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: "" creationTimestamp: "2023-10-25T09:42:49Z" generateName: metallb-system- generation: 1 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: "25027" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: targetNamespaces: - metallb-system upgradeStrategy: Default status: lastUpdated: "2023-10-25T09:42:49Z" namespaces: - metallb-system Edit the Operator group and remove the targetNamespaces entry, including metallb-system, from under the spec section by running the following command: USD oc edit operatorgroup metallb-system-7jc66 -n metallb-system Example output operatorgroup.operators.coreos.com/metallb-system-7jc66 edited Verify that spec.targetNamespaces is removed from the Operator group custom resource associated with the metallb-system namespace by running the following command: USD oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: "" creationTimestamp: "2023-10-25T09:42:49Z" generateName: metallb-system- generation: 2 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: "61658" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: upgradeStrategy: Default status: lastUpdated: "2023-10-25T14:31:30Z" namespaces: - "" 30.3.4. Upgrading the MetalLB Operator Prerequisites Access the cluster as a user with the cluster-admin role.
Procedure Verify that the metallb-system namespace still exists: USD oc get namespaces | grep metallb-system Example output metallb-system Active 31m Verify the metallb custom resource still exists: USD oc get metallb -n metallb-system Example output NAME AGE metallb 33m Follow the guidance in "Installing from OperatorHub using the CLI" to install the latest 4.11 version of the MetalLB Operator. Note When installing the latest 4.11 version of the MetalLB Operator, you must install the Operator to the same namespace it was previously installed to. Verify the upgraded version of the Operator is now the 4.11 version. USD oc get csv -n metallb-system Example output NAME DISPLAY VERSION REPLACES PHASE metallb-operator.{product-version}.0-202207051316 MetalLB Operator {product-version}.0-202207051316 Succeeded 30.3.5. Additional resources Deleting Operators from a cluster Installing the MetalLB Operator 30.4. Configuring MetalLB address pools As a cluster administrator, you can add, modify, and delete address pools. The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services. The namespace used in the examples assume the namespace is metallb-system . 30.4.1. About the IPAddressPool custom resource Note The address pool custom resource definition (CRD) and API documented in "Load balancing with MetalLB" in OpenShift Container Platform 4.10 can still be used in 4.11. However, the enhanced functionality associated with advertising the IPAddressPools with layer 2 or the BGP protocol is not supported when using the address pool CRD. The fields for the IPAddressPool custom resource are described in the following table. Table 30.1. MetalLB IPAddressPool pool custom resource Field Type Description metadata.name string Specifies the name for the address pool. When you add a service, you can specify this pool name in the metallb.universe.tf/address-pool annotation to select an IP address from a specific pool. The names doc-example , silver , and gold are used throughout the documentation. metadata.namespace string Specifies the namespace for the address pool. Specify the same namespace that the MetalLB Operator uses. metadata.label string Optional: Specifies the key value pair assigned to the IPAddressPool . This can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement and L2Advertisement CRD to associate the IPAddressPool with the advertisement spec.addresses string Specifies a list of IP addresses for MetalLB Operator to assign to services. You can specify multiple ranges in a single pool; they will all share the same settings. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen. spec.autoAssign boolean Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. Specify false if you want explicitly request an IP address from this pool with the metallb.universe.tf/address-pool annotation. The default value is true . 30.4.2. Configuring an address pool As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Verification View the address pool: USD oc describe -n metallb-system IPAddressPool doc-example Example output Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: ... Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none> Confirm that the address pool name, such as doc-example , and the IP address ranges appear in the output. 30.4.3. Example address pool configurations 30.4.3.1. Example: IPv4 and CIDR ranges You can specify a range of IP addresses in CIDR notation. You can combine CIDR notation with the notation that uses a hyphen to separate lower and upper bounds. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5 30.4.3.2. Example: Reserve IP addresses You can set the autoAssign field to false to prevent MetalLB from automatically assigning the IP addresses from the pool. When you add a service, you can request a specific IP address from the pool or you can specify the pool name in an annotation to request any IP address from the pool. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false 30.4.3.3. Example: IPv4 and IPv6 addresses You can add address pools that use IPv4 and IPv6. You can specify multiple ranges in the addresses list, just like several IPv4 examples. Whether the service is assigned a single IPv4 address, a single IPv6 address, or both is determined by how you add the service. The spec.ipFamilies and spec.ipFamilyPolicy fields control how IP addresses are assigned to the service. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100 30.4.4. Additional resources Configuring MetalLB with an L2 advertisement and label . 30.4.5. steps For BGP mode, see Configuring MetalLB BGP peers . Configuring services to use MetalLB . 30.5. About advertising for the IP address pools You can configure MetalLB so that the IP address is advertised with layer 2 protocols, the BGP protocol, or both. With layer 2, MetalLB provides a fault-tolerant external IP address. With BGP, MetalLB provides fault-tolerance for the external IP address and load balancing. MetalLB supports advertising using L2 and BGP for the same set of IP addresses. MetalLB provides the flexibility to assign address pools to specific BGP peers effectively to a subset of nodes on the network. This allows for more complex configurations, for example facilitating the isolation of nodes or the segmentation of the network. 30.5.1. About the BGPAdvertisement custom resource The fields for the BGPAdvertisements object are defined in the following table: Table 30.2. 
BGPAdvertisements configuration Field Type Description metadata.name string Specifies the name for the BGP advertisement. metadata.namespace string Specifies the namespace for the BGP advertisement. Specify the same namespace that the MetalLB Operator uses. spec.aggregationLength integer Optional: Specifies the number of bits to include in a 32-bit CIDR mask. To aggregate the routes that the speaker advertises to BGP peers, the mask is applied to the routes for several service IP addresses and the speaker advertises the aggregated route. For example, with an aggregation length of 24 , the speaker can aggregate several 10.0.1.x/32 service IP addresses and advertise a single 10.0.1.0/24 route. spec.aggregationLengthV6 integer Optional: Specifies the number of bits to include in a 128-bit CIDR mask. For example, with an aggregation length of 124 , the speaker can aggregate several fc00:f853:0ccd:e799::x/128 service IP addresses and advertise a single fc00:f853:0ccd:e799::0/124 route. spec.communities string Optional: Specifies one or more BGP communities. Each community is specified as two 16-bit values separated by the colon character. Well-known communities must be specified as 16-bit values: NO_EXPORT : 65535:65281 NO_ADVERTISE : 65535:65282 NO_EXPORT_SUBCONFED : 65535:65283 Note You can also use community objects that are created along with the strings. spec.localPref integer Optional: Specifies the local preference for this advertisement. This BGP attribute applies to BGP sessions within the Autonomous System. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors allows to limit the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. Note The functionality this supports is Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. spec.peers string Optional: Peers limits the BGP peer to advertise the IPs of the selected pools to. When empty, the load balancer IP is announced to all the BGP peers configured. 30.5.2. Configuring MetalLB with a BGP advertisement and a basic use case Configure MetalLB as follows so that the peer BGP routers receive one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. Because the localPref and communities fields are not specified, the routes are advertised with localPref set to zero and no BGP communities. 30.5.2.1. Example: Advertise a basic address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. 
Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic Apply the configuration: USD oc apply -f bgpadvertisement.yaml 30.5.3. Configuring MetalLB with a BGP advertisement and an advanced use case Configure MetalLB as follows so that MetalLB assigns IP addresses to load-balancer services in the ranges between 203.0.113.200 and 203.0.113.203 and between fc00:f853:ccd:e799::0 and fc00:f853:ccd:e799::f . To explain the two BGP advertisements, consider an instance when MetalLB assigns the IP address of 203.0.113.200 to a service. With that IP address as an example, the speaker advertises two routes to BGP peers: 203.0.113.200/32 , with localPref set to 100 and the community set to the numeric value of the NO_ADVERTISE community. This specification indicates to the peer routers that they can use this route but they should not propagate information about this route to BGP peers. 203.0.113.200/30 , aggregates the load-balancer IP addresses assigned by MetalLB into a single route. MetalLB advertises the aggregated route to BGP peers with the community attribute set to 8000:800 . BGP peers propagate the 203.0.113.200/30 route to other BGP peers. When traffic is routed to a node with a speaker, the 203.0.113.200/32 route is used to forward the traffic into the cluster and to a pod that is associated with the service. As you add more services and MetalLB assigns more load-balancer IP addresses from the pool, peer routers receive one local route, 203.0.113.20x/32 , for each service, as well as the 203.0.113.200/30 aggregate route. Each service that you add generates the /30 route, but MetalLB deduplicates the routes to one BGP advertisement before communicating with peer routers. 30.5.3.1. Example: Advertise an advanced address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. 
Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 30.5.4. About the L2Advertisement custom resource The fields for the l2Advertisements object are defined in the following table: Table 30.3. L2 advertisements configuration Field Type Description metadata.name string Specifies the name for the L2 advertisement. metadata.namespace string Specifies the namespace for the L2 advertisement. Specify the same namespace that the MetalLB Operator uses. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors limits the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. Important Limiting the nodes to announce as hops is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 30.5.5. Configuring MetalLB with an L2 advertisement Configure MetalLB as follows so that the IPAddressPool is advertised with the L2 protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement. Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 Apply the configuration: USD oc apply -f l2advertisement.yaml 30.5.6. 
Configuring MetalLB with a L2 advertisement and label The ipAddressPoolSelectors field in the BGPAdvertisement and L2Advertisement custom resource definitions is used to associate the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. This example shows how to configure MetalLB so that the IPAddressPool is advertised with the L2 protocol by configuring the ipAddressPoolSelectors field. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement advertising the IP using ipAddressPoolSelectors . Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east Apply the configuration: USD oc apply -f l2advertisement.yaml 30.5.7. Additional resources Configuring a community alias . 30.6. Configuring MetalLB BGP peers As a cluster administrator, you can add, modify, and delete Border Gateway Protocol (BGP) peers. The MetalLB Operator uses the BGP peer custom resources to identify which peers that MetalLB speaker pods contact to start BGP sessions. The peers receive the route advertisements for the load-balancer IP addresses that MetalLB assigns to services. 30.6.1. About the BGP peer custom resource The fields for the BGP peer custom resource are described in the following table. Table 30.4. MetalLB BGP peer custom resource Field Type Description metadata.name string Specifies the name for the BGP peer custom resource. metadata.namespace string Specifies the namespace for the BGP peer custom resource. spec.myASN integer Specifies the Autonomous System number for the local end of the BGP session. Specify the same value in all BGP peer custom resources that you add. The range is 0 to 65535 . spec.peerASN integer Specifies the Autonomous System number for the remote end of the BGP session. The range is 0 to 65535 . spec.peerAddress string Specifies the IP address of the peer to contact for establishing the BGP session. spec.sourceAddress string Optional: Specifies the IP address to use when establishing the BGP session. The value must be an IPv4 address. spec.peerPort integer Optional: Specifies the network port of the peer to contact for establishing the BGP session. The range is 0 to 16384 . spec.holdTime string Optional: Specifies the duration for the hold time to propose to the BGP peer. The minimum value is 3 seconds ( 3s ). The common units are seconds and minutes, such as 3s , 1m , and 5m30s . To detect path failures more quickly, also configure BFD. spec.keepaliveTime string Optional: Specifies the maximum interval between sending keep-alive messages to the BGP peer. If you specify this field, you must also specify a value for the holdTime field. The specified value must be less than the value for the holdTime field. spec.routerID string Optional: Specifies the router ID to advertise to the BGP peer. 
If you specify this field, you must specify the same value in every BGP peer custom resource that you add. spec.password string Optional: Specifies the MD5 password to send to the peer for routers that enforce TCP MD5 authenticated BGP sessions. spec.passwordSecret string Optional: Specifies name of the authentication secret for the BGP Peer. The secret must live in the metallb namespace and be of type basic-auth. spec.bfdProfile string Optional: Specifies the name of a BFD profile. spec.nodeSelectors object[] Optional: Specifies a selector, using match expressions and match labels, to control which nodes can connect to the BGP peer. spec.ebgpMultiHop boolean Optional: Specifies that the BGP peer is multiple network hops away. If the BGP peer is not directly connected to the same network, the speaker cannot establish a BGP session unless this field is set to true . This field applies to external BGP . External BGP is the term that is used to describe when a BGP peer belongs to a different Autonomous System. Note The passwordSecret field is mutually exclusive with the password field, and contains a reference to a secret containing the password to use. Setting both fields results in a failure of the parsing. 30.6.2. Configuring a BGP peer As a cluster administrator, you can add a BGP peer custom resource to exchange routing information with network routers and advertise the IP addresses for services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Configure MetalLB with a BGP advertisement. Procedure Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml 30.6.3. Configure a specific set of BGP peers for a given address pool This procedure illustrates how to: Configure a set of address pools ( pool1 and pool2 ). Configure a set of BGP peers ( peer1 and peer2 ). Configure BGP advertisement to assign pool1 to peer1 and pool2 to peer2 . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create address pool pool1 . Create a file, such as ipaddresspool1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Apply the configuration for the IP address pool pool1 : USD oc apply -f ipaddresspool1.yaml Create address pool pool2 . Create a file, such as ipaddresspool2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400 Apply the configuration for the IP address pool pool2 : USD oc apply -f ipaddresspool2.yaml Create BGP peer1 . Create a file, such as bgppeer1.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer1.yaml Create BGP peer2 . 
Create a file, such as bgppeer2.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer2: USD oc apply -f bgppeer2.yaml Create BGP advertisement 1. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create BGP advertisement 2. Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 30.6.4. Example BGP peer configurations 30.6.4.1. Example: Limit which nodes connect to a BGP peer You can specify the node selectors field to control which nodes can connect to a BGP peer. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com] 30.6.4.2. Example: Specify a BFD profile for a BGP peer You can specify a BFD profile to associate with BGP peers. BFD compliments BGP by providing more rapid detection of communication failures between peers than BGP alone. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: "10s" bfdProfile: doc-example-bfd-profile-full Note Deleting the bidirectional forwarding detection (BFD) profile and removing the bfdProfile added to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. For more information, see BZ#2050824 . 30.6.4.3. Example: Specify BGP peers for dual-stack networking To support dual-stack networking, add one BGP peer custom resource for IPv4 and one BGP peer custom resource for IPv6. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500 30.6.5. steps Configuring services to use MetalLB 30.7. Configuring community alias As a cluster administrator, you can configure a community alias and use it across different advertisements. 30.7.1. About the community custom resource The community custom resource is a collection of aliases for communities. Users can define named aliases to be used when advertising ipAddressPools using the BGPAdvertisement . The fields for the community custom resource are described in the following table. 
Note The community CRD applies only to BGPAdvertisement. Table 30.5. MetalLB community custom resource Field Type Description metadata.name string Specifies the name for the community . metadata.namespace string Specifies the namespace for the community . Specify the same namespace that the MetalLB Operator uses. spec.communities string Specifies a list of community aliases that BGP advertisements can reference. Each alias pairs a name with a BGP community value, as described in the CommunityAlias table that follows. Table 30.6. CommunityAlias Field Type Description name string The name of the alias for the community . value string The BGP community value corresponding to the given name. 30.7.2. Configuring MetalLB with a BGP advertisement and community alias Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol and the community alias set to the numeric value of the NO_ADVERTISE community. In the following example, the peer BGP router doc-example-peer-community receives one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. A community alias is configured with the NO_ADVERTISE community. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a community alias named community1 . apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE - value: '65535:65282' Create a BGP peer named doc-example-bgp-peer . Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml Create a BGP advertisement with the community alias. Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - community1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer Apply the configuration: USD oc apply -f bgpadvertisement.yaml 30.8. Configuring MetalLB BFD profiles As a cluster administrator, you can add, modify, and delete Bidirectional Forwarding Detection (BFD) profiles. The MetalLB Operator uses the BFD profile custom resources to identify which BGP sessions use BFD to provide faster path failure detection than BGP alone provides. 30.8.1. About the BFD profile custom resource The fields for the BFD profile custom resource are described in the following table. Table 30.7. BFD profile custom resource Field Type Description metadata.name string Specifies the name for the BFD profile custom resource. metadata.namespace string Specifies the namespace for the BFD profile custom resource.
spec.detectMultiplier integer Specifies the detection multiplier to determine packet loss. The remote transmission interval is multiplied by this value to determine the connection loss detection timer. For example, when the local system has the detect multiplier set to 3 and the remote system has the transmission interval set to 300 , the local system detects failures only after 900 ms without receiving packets. The range is 2 to 255 . The default value is 3 . spec.echoMode boolean Specifies the echo transmission mode. If you are not using distributed BFD, echo transmission mode works only when the peer is also FRR. The default value is false and echo transmission mode is disabled. When echo transmission mode is enabled, consider increasing the transmission interval of control packets to reduce bandwidth usage. For example, consider increasing the transmit interval to 2000 ms. spec.echoInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send and receive echo packets. The range is 10 to 60000 . The default value is 50 ms. spec.minimumTtl integer Specifies the minimum expected TTL for an incoming control packet. This field applies to multi-hop sessions only. The purpose of setting a minimum TTL is to make the packet validation requirements more stringent and avoid receiving control packets from other sessions. The default value is 254 and indicates that the system expects only one hop between this system and the peer. spec.passiveMode boolean Specifies whether a session is marked as active or passive. A passive session does not attempt to start the connection. Instead, a passive session waits for control packets from a peer before it begins to reply. Marking a session as passive is useful when you have a router that acts as the central node of a star network and you want to avoid sending control packets that you do not need the system to send. The default value is false and marks the session as active. spec.receiveInterval integer Specifies the minimum interval that this system is capable of receiving control packets. The range is 10 to 60000 . The default value is 300 ms. spec.transmitInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send control packets. The range is 10 to 60000 . The default value is 300 ms. 30.8.2. Configuring a BFD profile As a cluster administrator, you can add a BFD profile and configure a BGP peer to use the profile. BFD provides faster path failure detection than BGP alone. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file, such as bfdprofile.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254 Apply the configuration for the BFD profile: USD oc apply -f bfdprofile.yaml 30.8.3. steps Configure a BGP peer to use the BFD profile. 30.9. Configuring services to use MetalLB As a cluster administrator, when you add a service of type LoadBalancer , you can control how MetalLB assigns an IP address. 30.9.1. Request a specific IP address Like some other load-balancer implementations, MetalLB accepts the spec.loadBalancerIP field in the service specification. If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address. 
If the requested IP address is not within any range, MetalLB reports a warning. Example service YAML for a specific IP address apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address> If MetalLB cannot assign the requested IP address, the EXTERNAL-IP for the service reports <pending> and running oc describe service <service_name> includes an event like the following example. Example event when MetalLB cannot assign a requested IP address ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config 30.9.2. Request an IP address from a specific pool If you want an IP address from a specific range but are not concerned with which specific IP address is assigned, you can use the metallb.universe.tf/address-pool annotation to request an IP address from the specified address pool. Example service YAML for an IP address from a specific pool apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer If the address pool that you specify for <address_pool_name> does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment. 30.9.3. Accept any IP address By default, address pools are configured to permit automatic assignment. MetalLB assigns an IP address from these address pools. To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required. Example service YAML for accepting any IP address apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer 30.9.4. Share a specific IP address By default, services do not share IP addresses. However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the metallb.universe.tf/allow-shared-ip annotation to the services. apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" 5 spec: ports: - name: https port: 443 6 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 7 type: LoadBalancer loadBalancerIP: 172.31.249.7 8 1 5 Specify the same value for the metallb.universe.tf/allow-shared-ip annotation. This value is referred to as the sharing key . 2 6 Specify different port numbers for the services. 3 7 Specify identical pod selectors if you must specify externalTrafficPolicy: local so the services send traffic to the same set of pods. If you use the cluster external traffic policy, then the pod selectors do not need to be identical.
4 8 Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. To ensure that services share an IP address, specify the IP address to share. By default, Kubernetes does not allow multiprotocol load balancer services. This limitation would normally make it impossible to run a service like DNS that needs to listen on both TCP and UDP. To work around this limitation of Kubernetes with MetalLB, create two services: for one service, specify TCP, and for the second service, specify UDP. In both services, specify the same pod selector. Specify the same sharing key and spec.loadBalancerIP value to colocate the TCP and UDP services on the same IP address. 30.9.5. Configuring a service with MetalLB You can configure a load-balancing service to use an external IP address from an address pool. Prerequisites Install the OpenShift CLI ( oc ). Install the MetalLB Operator and start MetalLB. Configure at least one address pool. Configure your network to route traffic from the clients to the host network for the cluster. Procedure Create a <service_name>.yaml file. In the file, ensure that the spec.type field is set to LoadBalancer . Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service. Create the service: USD oc apply -f <service_name>.yaml Example output service/<service_name> created Verification Describe the service: USD oc describe service <service_name> Example output <.> The annotation is present if you request an IP address from a specific pool. <.> The service type must indicate LoadBalancer . <.> The load-balancer ingress field indicates the external IP address if the service is assigned correctly. <.> The events field indicates the node name that is assigned to announce the external IP address. If you experience an error, the events field indicates the reason for the error. 30.10. MetalLB logging, troubleshooting, and support If you need to troubleshoot MetalLB configuration, see the following sections for commonly used commands. 30.10.1. Setting the MetalLB logging levels MetalLB uses FRRouting (FRR) in a container, and the default logging setting of info generates a large amount of log output. You can control the verbosity of the generated logs by setting the logLevel field, as illustrated in this example. To gain deeper insight into MetalLB, set the logLevel to debug as follows: Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a file, such as setdebugloglevel.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: "" Apply the configuration: USD oc replace -f setdebugloglevel.yaml Note Use oc replace because the metallb custom resource already exists and you are changing only the log level. Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s Note Speaker and controller pods are recreated to ensure the updated logging level is applied. The logging level is modified for all the components of MetalLB.
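Optionally, before you view the logs, you can confirm which logging level is now configured on the MetalLB custom resource. The following command is a minimal sketch, assuming the resource name metallb and the metallb-system namespace used in the preceding example:

USD oc get metallb metallb -n metallb-system -o jsonpath='{.spec.logLevel}'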
View the speaker logs: USD oc logs -n metallb-system speaker-7m4qw -c speaker Example output View the FRR logs: USD oc logs -n metallb-system speaker-7m4qw -c frr Example output 30.10.1.1. FRRouting (FRR) log levels The following table describes the FRR logging levels. Table 30.8. Log levels Log level Description all Supplies all logging information for all logging levels. debug Information that is diagnostically helpful to people. Set to debug to give detailed troubleshooting information. info Provides information that always should be logged but under normal circumstances does not require user intervention. This is the default logging level. warn Anything that can potentially cause inconsistent MetalLB behaviour. Usually MetalLB automatically recovers from this type of error. error Any error that is fatal to the functioning of MetalLB . These errors usually require administrator intervention to fix. none Turn off all logging. 30.10.2. Troubleshooting BGP issues The BGP implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. As a cluster administrator, if you need to troubleshoot BGP configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m ... Display the running configuration for FRR: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show running-config" Example output <.> The router bgp section indicates the ASN for MetalLB. <.> Confirm that a neighbor <ip-address> remote-as <peer-ASN> line exists for each BGP peer custom resource that you added. <.> If you configured BFD, confirm that the BFD profile is associated with the correct BGP peer and that the BFD profile appears in the command output. <.> Confirm that the network <ip-address-range> lines match the IP address ranges that you specified in address pool custom resources that you added. Display the BGP summary: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp summary" Example output 1 1 3 Confirm that the output includes a line for each BGP peer custom resource that you added. 2 4 2 4 Output that shows 0 messages received and messages sent indicates a BGP peer that does not have a BGP session. Check network connectivity and the BGP configuration of the BGP peer. Display the BGP peers that received an address pool: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp ipv4 unicast 203.0.113.200/30" Replace ipv4 with ipv6 to display the BGP peers that received an IPv6 address pool. Replace 203.0.113.200/30 with an IPv4 or IPv6 IP address range from an address pool. Example output <.> Confirm that the output includes an IP address for a BGP peer. 30.10.3. Troubleshooting BFD issues The Bidirectional Forwarding Detection (BFD) implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. The BFD implementation relies on BFD peers also being configured as BGP peers with an established BGP session. As a cluster administrator, if you need to troubleshoot BFD configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). 
Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m ... Display the BFD peers: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bfd peers brief" Example output <.> Confirm that the PeerAddress column includes each BFD peer. If the output does not list a BFD peer IP address that you expected the output to include, troubleshoot BGP connectivity with the peer. If the status field indicates down , check for connectivity on the links and equipment between the node and the peer. You can determine the node name for the speaker pod with a command like oc get pods -n metallb-system speaker-66bth -o jsonpath='{.spec.nodeName}' . 30.10.4. MetalLB metrics for BGP and BFD OpenShift Container Platform captures the following metrics that are related to MetalLB and BGP peers and BFD profiles: metallb_bfd_control_packet_input counts the number of BFD control packets received from each BFD peer. metallb_bfd_control_packet_output counts the number of BFD control packets sent to each BFD peer. metallb_bfd_echo_packet_input counts the number of BFD echo packets received from each BFD peer. metallb_bfd_echo_packet_output counts the number of BFD echo packets sent to each BFD peer. metallb_bfd_session_down_events counts the number of times the BFD session with a peer entered the down state. metallb_bfd_session_up indicates the connection state with a BFD peer. 1 indicates the session is up and 0 indicates the session is down . metallb_bfd_session_up_events counts the number of times the BFD session with a peer entered the up state. metallb_bfd_zebra_notifications counts the number of BFD Zebra notifications for each BFD peer. metallb_bgp_announced_prefixes_total counts the number of load balancer IP address prefixes that are advertised to BGP peers. The terms prefix and aggregated route have the same meaning. metallb_bgp_session_up indicates the connection state with a BGP peer. 1 indicates the session is up and 0 indicates the session is down . metallb_bgp_updates_total counts the number of BGP update messages that were sent to a BGP peer. Additional resources See Querying metrics for information about using the monitoring dashboard. 30.10.5. About collecting MetalLB data You can use the oc adm must-gather CLI command to collect information about your cluster, your MetalLB configuration, and the MetalLB Operator. The following features and objects are associated with MetalLB and the MetalLB Operator: The namespace and child objects that the MetalLB Operator is deployed in All MetalLB Operator custom resource definitions (CRDs) The oc adm must-gather CLI command collects the following information from FRRouting (FRR) that Red Hat uses to implement BGP and BFD: /etc/frr/frr.conf /etc/frr/frr.log /etc/frr/daemons configuration file /etc/frr/vtysh.conf The log and configuration files in the preceding list are collected from the frr container in each speaker pod. In addition to the log and configuration files, the oc adm must-gather CLI command collects the output from the following vtysh commands: show running-config show bgp ipv4 show bgp ipv6 show bgp neighbor show bfd peer No additional configuration is required when you run the oc adm must-gather CLI command. Additional resources Gathering data about your cluster
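As a practical illustration of the metrics listed earlier, the following is a minimal sketch of an alerting rule that fires when a BGP session reports the down state. This example is not part of the MetalLB documentation: the rule name, alert name, threshold, and severity are assumptions, and it assumes that your monitoring stack evaluates PrometheusRule objects in the metallb-system namespace (for example, when cluster monitoring is enabled for the namespace).

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: metallb-bgp-session-alerts    # hypothetical name
  namespace: metallb-system
spec:
  groups:
  - name: metallb-bgp
    rules:
    - alert: MetalLBBGPSessionDown    # hypothetical alert name
      # metallb_bgp_session_up is documented above: 1 means the session is up, 0 means it is down
      expr: metallb_bgp_session_up == 0
      for: 5m
      labels:
        severity: warning
      annotations:
        message: A MetalLB speaker reports a BGP session in the down state.

If such an alert fires and the session to a peer stays down, the BGP and BFD troubleshooting steps above are the next place to look.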
[ "\"event\":\"ipAllocated\",\"ip\":\"172.22.0.201\",\"msg\":\"IP address assigned by controller", "cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system EOF", "oc get operatorgroup -n metallb-system", "NAME AGE metallb-operator 14m", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators 1 sourceNamespace: openshift-marketplace", "oc create -f metallb-sub.yaml", "oc label ns metallb-system \"openshift.io/cluster-monitoring=true\"", "oc get installplan -n metallb-system", "NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.11.0-nnnnnnnnnnnn Automatic true", "oc get clusterserviceversion -n metallb-system -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase metallb-operator.4.11.0-nnnnnnnnnnnn Succeeded", "cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF", "oc get deployment -n metallb-system controller", "NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m", "oc get daemonset -n metallb-system speaker", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: <.> node-role.kubernetes.io/worker: \"\" speakerTolerations: <.> - key: \"Example\" operator: \"Exists\" effect: \"NoExecute\"", "oc get subscription metallb-operator -n metallb-system -o yaml | grep currentCSV", "currentCSV: metallb-operator.4.10.0-202207051316", "oc delete subscription metallb-operator -n metallb-system", "subscription.operators.coreos.com \"metallb-operator\" deleted", "oc delete clusterserviceversion metallb-operator.4.10.0-202207051316 -n metallb-system", "clusterserviceversion.operators.coreos.com \"metallb-operator.4.10.0-202207051316\" deleted", "oc get operatorgroup -n metallb-system", "NAME AGE metallb-system-7jc66 85m", "oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 1 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"25027\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: targetNamespaces: - metallb-system upgradeStrategy: Default status: lastUpdated: \"2023-10-25T09:42:49Z\" namespaces: - metallb-system", "oc edit n metallb-system", "operatorgroup.operators.coreos.com/metallb-system-7jc66 edited", "oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 2 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"61658\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: upgradeStrategy: Default status: lastUpdated: \"2023-10-25T14:31:30Z\" namespaces: - \"\"", "oc get namespaces | grep metallb-system", "metallb-system Active 31m", "oc get metallb -n metallb-system", "NAME AGE metallb 33m", "oc get csv -n metallb-system", "NAME DISPLAY VERSION REPLACES PHASE 
metallb-operator.{product-version}.0-202207051316 MetalLB Operator {product-version}.0-202207051316 Succeeded", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75", "oc apply -f ipaddresspool.yaml", "oc describe -n metallb-system IPAddressPool doc-example", "Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none>", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic", "oc apply -f bgpadvertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100", "oc apply -f bgpadvertisement1.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124", "oc apply -f bgpadvertisement2.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2", "oc apply -f l2advertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east", "oc apply -f l2advertisement.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: 
pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400", "oc apply -f ipaddresspool1.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400", "oc apply -f ipaddresspool2.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer1.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer2.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100", "oc apply -f bgpadvertisement1.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100", "oc apply -f bgpadvertisement2.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com]", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: \"10s\" bfdProfile: doc-example-bfd-profile-full", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE - value: '65535:65282'", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - community1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer", "oc apply -f bgpadvertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254", "oc apply -f bfdprofile.yaml", "apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> 
ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for \"default/invalid-request\": \"4.3.2.1\" is not allowed in config", "apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer", "apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer", "apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 5 spec: ports: - name: https port: 443 6 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 7 type: LoadBalancer loadBalancerIP: 172.31.249.7 8", "oc apply -f <service_name>.yaml", "service/<service_name> created", "oc describe service <service_name>", "Name: <service_name> Namespace: default Labels: <none> Annotations: metallb.universe.tf/address-pool: doc-example <.> Selector: app=service_name Type: LoadBalancer <.> IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.237.254 IPs: 10.105.237.254 LoadBalancer Ingress: 192.168.100.5 <.> Port: <unset> 80/TCP TargetPort: 8080/TCP NodePort: <unset> 30550/TCP Endpoints: 10.244.0.50:8080 Session Affinity: None External Traffic Policy: Cluster Events: <.> Type Reason Age From Message ---- ------ ---- ---- ------- Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node \"<node_name>\"", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc replace -f setdebugloglevel.yaml", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s", "oc logs -n metallb-system speaker-7m4qw -c speaker", "{\"branch\":\"main\",\"caller\":\"main.go:92\",\"commit\":\"3d052535\",\"goversion\":\"gc / go1.17.1 / amd64\",\"level\":\"info\",\"msg\":\"MetalLB speaker starting (commit 3d052535, branch main)\",\"ts\":\"2022-05-17T09:55:05Z\",\"version\":\"\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created NDP responder for 
interface\",\"ts\":\"2022-05-17T09:55:05Z\"} I0517 09:55:06.515686 95 request.go:665] Waited for 1.026500832s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/operators.coreos.com/v1alpha1?timeout=32s {\"Starting Manager\":\"(MISSING)\",\"caller\":\"k8s.go:389\",\"level\":\"info\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"speakerlist.go:310\",\"level\":\"info\",\"msg\":\"node event - forcing sync\",\"node addr\":\"10.0.128.4\",\"node event\":\"NodeJoin\",\"node name\":\"ci-ln-qb8t3mb-72292-7s7rh-worker-a-vvznj\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"service_controller.go:113\",\"controller\":\"ServiceReconciler\",\"enqueueing\":\"openshift-kube-controller-manager-operator/metrics\",\"epslice\":\"{\\\"metadata\\\":{\\\"name\\\":\\\"metrics-xtsxr\\\",\\\"generateName\\\":\\\"metrics-\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"uid\\\":\\\"ac6766d7-8504-492c-9d1e-4ae8897990ad\\\",\\\"resourceVersion\\\":\\\"9041\\\",\\\"generation\\\":4,\\\"creationTimestamp\\\":\\\"2022-05-17T07:16:53Z\\\",\\\"labels\\\":{\\\"app\\\":\\\"kube-controller-manager-operator\\\",\\\"endpointslice.kubernetes.io/managed-by\\\":\\\"endpointslice-controller.k8s.io\\\",\\\"kubernetes.io/service-name\\\":\\\"metrics\\\"},\\\"annotations\\\":{\\\"endpoints.kubernetes.io/last-change-trigger-time\\\":\\\"2022-05-17T07:21:34Z\\\"},\\\"ownerReferences\\\":[{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"name\\\":\\\"metrics\\\",\\\"uid\\\":\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\",\\\"controller\\\":true,\\\"blockOwnerDeletion\\\":true}],\\\"managedFields\\\":[{\\\"manager\\\":\\\"kube-controller-manager\\\",\\\"operation\\\":\\\"Update\\\",\\\"apiVersion\\\":\\\"discovery.k8s.io/v1\\\",\\\"time\\\":\\\"2022-05-17T07:20:02Z\\\",\\\"fieldsType\\\":\\\"FieldsV1\\\",\\\"fieldsV1\\\":{\\\"f:addressType\\\":{},\\\"f:endpoints\\\":{},\\\"f:metadata\\\":{\\\"f:annotations\\\":{\\\".\\\":{},\\\"f:endpoints.kubernetes.io/last-change-trigger-time\\\":{}},\\\"f:generateName\\\":{},\\\"f:labels\\\":{\\\".\\\":{},\\\"f:app\\\":{},\\\"f:endpointslice.kubernetes.io/managed-by\\\":{},\\\"f:kubernetes.io/service-name\\\":{}},\\\"f:ownerReferences\\\":{\\\".\\\":{},\\\"k:{\\\\\\\"uid\\\\\\\":\\\\\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\\\\\"}\\\":{}}},\\\"f:ports\\\":{}}}]},\\\"addressType\\\":\\\"IPv4\\\",\\\"endpoints\\\":[{\\\"addresses\\\":[\\\"10.129.0.7\\\"],\\\"conditions\\\":{\\\"ready\\\":true,\\\"serving\\\":true,\\\"terminating\\\":false},\\\"targetRef\\\":{\\\"kind\\\":\\\"Pod\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"name\\\":\\\"kube-controller-manager-operator-6b98b89ddd-8d4nf\\\",\\\"uid\\\":\\\"dd5139b8-e41c-4946-a31b-1a629314e844\\\",\\\"resourceVersion\\\":\\\"9038\\\"},\\\"nodeName\\\":\\\"ci-ln-qb8t3mb-72292-7s7rh-master-0\\\",\\\"zone\\\":\\\"us-central1-a\\\"}],\\\"ports\\\":[{\\\"name\\\":\\\"https\\\",\\\"protocol\\\":\\\"TCP\\\",\\\"port\\\":8443}]}\",\"level\":\"debug\",\"ts\":\"2022-05-17T09:55:08Z\"}", "oc logs -n metallb-system speaker-7m4qw -c frr", "Started watchfrr 2022/05/17 09:55:05 ZEBRA: client 16 says hello and bids fair to announce only bgp routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 31 says hello and bids fair to announce only vnc routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 38 says hello and bids fair to announce only static routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 43 says hello and bids fair to announce only bfd routes vrf=0 2022/05/17 
09:57:25.089 BGP: Creating Default VRF, AS 64500 2022/05/17 09:57:25.090 BGP: dup addr detect enable max_moves 5 time 180 freeze disable freeze_time 0 2022/05/17 09:57:25.090 BGP: bgp_get: Registering BGP instance (null) to zebra 2022/05/17 09:57:25.090 BGP: Registering VRF 0 2022/05/17 09:57:25.091 BGP: Rx Router Id update VRF 0 Id 10.131.0.1/32 2022/05/17 09:57:25.091 BGP: RID change : vrf VRF default(0), RTR ID 10.131.0.1 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF br0 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ens4 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr 10.0.128.4/32 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr fe80::c9d:84da:4d86:5618/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF lo 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ovs-system 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF tun0 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr 10.131.0.1/23 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr fe80::40f1:d1ff:feb6:5322/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2da49fed 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2da49fed addr fe80::24bd:d1ff:fec1:d88/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2fa08c8c 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2fa08c8c addr fe80::6870:ff:fe96:efc8/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth41e356b7 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth41e356b7 addr fe80::48ff:37ff:fede:eb4b/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth1295c6e2 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth1295c6e2 addr fe80::b827:a2ff:feed:637/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth9733c6dc 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth9733c6dc addr fe80::3cf4:15ff:fe11:e541/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth336680ea 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth336680ea addr fe80::94b1:8bff:fe7e:488c/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vetha0a907b7 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vetha0a907b7 addr fe80::3855:a6ff:fe73:46c3/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf35a4398 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf35a4398 addr fe80::40ef:2fff:fe57:4c4d/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf831b7f4 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf831b7f4 addr fe80::f0d9:89ff:fe7c:1d32/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vxlan_sys_4789 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vxlan_sys_4789 addr fe80::80c1:82ff:fe4b:f078/64 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Timer (start timer expire). 
2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] BGP_Start (Idle->Connect), fd -1 2022/05/17 09:57:26.094 BGP: Allocated bnc 10.0.0.1/32(0)(VRF default) peer 0x7f807f7631a0 2022/05/17 09:57:26.094 BGP: sendmsg_zebra_rnh: sending cmd ZEBRA_NEXTHOP_REGISTER for 10.0.0.1/32 (vrf VRF default) 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Waiting for NHT 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Connect established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Idle to Connect 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] TCP_connection_open_failed (Connect->Active), fd -1 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Active established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Connect to Active 2022/05/17 09:57:26.094 ZEBRA: rnh_register msg from client bgp: hdr->length=8, type=nexthop vrf=0 2022/05/17 09:57:26.094 ZEBRA: 0: Add RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: Evaluate RNH, type Nexthop (force) 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: NH has become unresolved 2022/05/17 09:57:26.094 ZEBRA: 0: Client bgp registers for RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 BGP: VRF default(0): Rcvd NH update 10.0.0.1/32(0) - metric 0/0 #nhops 0/0 flags 0x6 2022/05/17 09:57:26.094 BGP: NH update for 10.0.0.1/32(0)(VRF default) - flags 0x6 chgflags 0x0 - evaluate paths 2022/05/17 09:57:26.094 BGP: evaluate_paths: Updating peer (10.0.0.1(VRF default)) status with NHT 2022/05/17 09:57:30.081 ZEBRA: Event driven route-map update triggered 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-out 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-in 2022/05/17 09:57:31.104 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.104 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring 2022/05/17 09:57:31.105 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.105 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show running-config\"", "Building configuration Current configuration: ! frr version 7.5.1_git frr defaults traditional hostname some-hostname log file /etc/frr/frr.log informational log timestamp precision 3 service integrated-vtysh-config ! router bgp 64500 1 bgp router-id 10.0.1.2 no bgp ebgp-requires-policy no bgp default ipv4-unicast no bgp network import-check neighbor 10.0.2.3 remote-as 64500 2 neighbor 10.0.2.3 bfd profile doc-example-bfd-profile-full 3 neighbor 10.0.2.3 timers 5 15 neighbor 10.0.2.4 remote-as 64500 4 neighbor 10.0.2.4 bfd profile doc-example-bfd-profile-full 5 neighbor 10.0.2.4 timers 5 15 ! address-family ipv4 unicast network 203.0.113.200/30 6 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! address-family ipv6 unicast network fc00:f853:ccd:e799::/124 7 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! route-map 10.0.2.3-in deny 20 ! route-map 10.0.2.4-in deny 20 ! ip nht resolve-via-default ! ipv6 nht resolve-via-default ! 
line vty ! bfd profile doc-example-bfd-profile-full 8 transmit-interval 35 receive-interval 35 passive-mode echo-mode echo-interval 35 minimum-ttl 10 ! ! end", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp summary\"", "IPv4 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 0 1 1 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 2 Total number of neighbors 2 IPv6 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 NoNeg 3 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 4 Total number of neighbors 2", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp ipv4 unicast 203.0.113.200/30\"", "BGP routing table entry for 203.0.113.200/30 Paths: (1 available, best #1, table default) Advertised to non peer-group peers: 10.0.2.3 <.> Local 0.0.0.0 from 0.0.0.0 (10.0.1.2) Origin IGP, metric 0, weight 32768, valid, sourced, local, best (First path received) Last update: Mon Jan 10 19:49:07 2022", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bfd peers brief\"", "Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/load-balancing-with-metallb
Chapter 5. Known issues
Chapter 5. Known issues Sometimes a Cryostat release might contain an issue or issues that Red Hat acknowledges and might fix at a later stage during the product's development. Review each known issue for its description and its resolution. Cryostat agent cannot accept the ALL event template in recording start requests Description When creating a JFR recording for a target JVM that is using the Cryostat agent, if you select the ALL event template, the Cryostat agent returns an HTTP 400 error. Workaround Depending on your requirements, you can do either of the following: If you want to use the ALL event template, use a JMX connection with your target JVM. If you want to use a Cryostat agent connection with your target JVM, use a different event template such as Profiling. Red Hat Insights integration fails on ARM64 architecture Description In the Cryostat Operator's namespace, a pod whose name begins with insights-proxy is in the ImagePullBackOff error state. This is because the APICast container used for this pod is not built for the ARM64 architecture. However, Cryostat and all applications that are using the Cryostat agent should still function normally. Workaround To disable Red Hat Insights integration: Navigate to Operators > Installed Operators . Select the Red Hat build of Cryostat operator. Select the Subscription tab. From the Actions drop-down menu, select Edit Subscription . Add the INSIGHTS_ENABLED environment variable to the Subscription object: Click Save . This workaround disables Red Hat Insights integration in the Cryostat Operator, which removes the insights-proxy deployment.
[ "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cryostat-operator namespace: openshift-operators spec: config: env: - name: INSIGHTS_ENABLED value: \"false\" channel: stable installPlanApproval: Automatic name: cryostat-operator source: my-operator-catalog sourceNamespace: openshift-marketplace startingCSV: cryostat-operator.v2.4.0" ]
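If you prefer to make the same change from the command line, the following is a minimal sketch that patches the Subscription shown above. The object name cryostat-operator and the openshift-operators namespace are taken from that example, so verify that they match your installation first (for example with oc get subscription -n openshift-operators).
# Hedged sketch: sets INSIGHTS_ENABLED=false on the operator Subscription, which
# disables Red Hat Insights integration and removes the insights-proxy deployment.
# Note that a merge patch replaces the whole spec.config.env list, so include any
# other environment variables your Subscription already defines.
oc patch subscription cryostat-operator -n openshift-operators \
  --type merge \
  -p '{"spec":{"config":{"env":[{"name":"INSIGHTS_ENABLED","value":"false"}]}}}'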
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.4/cryostat-2-4-known-issues_cryostat
Chapter 1. Role APIs
Chapter 1. Role APIs 1.1. ClusterRoleBinding [authorization.openshift.io/v1] Description ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference any ClusterRole in the same namespace or in the global namespace. It adds "who" information via Users and Groups or Subjects, and namespace information via the namespace in which it exists. ClusterRoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ClusterRole [authorization.openshift.io/v1] Description ClusterRole is a logical grouping of PolicyRules that can be referenced as a unit by ClusterRoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. RoleBindingRestriction [authorization.openshift.io/v1] Description RoleBindingRestriction is an object that can be matched against a subject (user, group, or service account) to determine whether rolebindings on that subject are allowed in the namespace to which the RoleBindingRestriction belongs. If any one of those RoleBindingRestriction objects matches a subject, rolebindings on that subject in the namespace are allowed. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. RoleBinding [authorization.openshift.io/v1] Description RoleBinding references a Role, but does not contain it. It can reference any Role in the same namespace or in the global namespace. It adds "who" information via Users and Groups or Subjects, and namespace information via the namespace in which it exists. RoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. Role [authorization.openshift.io/v1] Description Role is a logical grouping of PolicyRules that can be referenced as a unit by RoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object
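For illustration, the following is a minimal sketch of what a ClusterRoleBinding manifest in this API group might look like, applied with oc. The binding name, the user name, and any field names beyond those implied by the descriptions above (roleRef, subjects, userNames) are assumptions, not values taken from this reference; check the full schema in this chapter before relying on them.
# Hedged sketch: binds the built-in "view" cluster role to a hypothetical user.
# The exact legacy-API field layout is an assumption; verify against the schema.
oc apply -f - <<'EOF'
apiVersion: authorization.openshift.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-view-binding
roleRef:
  name: view
subjects:
- kind: User
  name: example-user
userNames:
- example-user
EOF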
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/role_apis/role-apis
4.4. Adding users and groups to an Image Builder blueprint in the web console interface
4.4. Adding users and groups to an Image Builder blueprint in the web console interface Adding customizations such as users and groups to blueprints in the web console interface is currently not possible. To work around this limitation, use the Terminal tab in the web console to follow the command-line interface (CLI) workflow. Prerequisites A blueprint must exist. A CLI text editor such as vim , nano , or emacs must be installed. To install one of them: Procedure 1. Find out the name of the blueprint: Open the Image Builder ( Image builder ) tab on the left in the RHEL 7 web console to see the name of the blueprint. 2. Navigate to the CLI in the web console: Open the system administration tab on the left, then select Terminal , the last item in the list. 3. Enter the super-user (root) mode: Provide your credentials when asked. Note that the terminal does not reuse the credentials you entered when logging in to the web console. A new shell with root privileges starts in your home directory. 4. Export the blueprint to a file: 5. Edit the file BLUEPRINT-NAME .toml with a CLI text editor of your choice and add the users and groups. Important The RHEL 7 web console does not have any built-in feature for editing text files on the system, so a CLI text editor is required for this step. i. For every user to be added, add this block to the file: Replace PASSWORD-HASH with the actual password hash. To generate the hash, use a command such as this: Replace ssh-rsa (...) key-name with the actual public key. Replace the other placeholders with suitable values. Leave out any of the lines as needed; only the user name is required. ii. For every user group to be added, add this block to the file: iii. Increase the version number. iv. Save the file and close the editor. 6. Import the blueprint back into Image Builder: Note that you must supply the file name including the .toml extension, while in other commands you use only the name of the blueprint. 7. To verify that the contents uploaded to Image Builder match your edits, list the contents of the blueprint: Check that the version matches what you put in the file and that your customizations are present. Important The Image Builder plug-in for the RHEL 7 web console does not show any information that could be used to verify that the changes have been applied, unless you also edited the packages included in the blueprint. 8. Exit the privileged shell: 9. Open the Image Builder (Image builder) tab on the left and refresh the page, in all browsers and all tabs where it was opened. This prevents state cached in the loaded page from accidentally reverting your changes. Additional information Section 3.6, " Image Builder blueprint format " Section 3.3, " Editing an Image Builder blueprint with command-line interface "
[ "yum install editor-name", "sudo bash", "composer-cli blueprints save BLUEPRINT-NAME", "[[customizations.user]] name = \" USER-NAME \" description = \" USER-DESCRIPTION \" password = \" PASSWORD-HASH \" key = \" ssh-rsa (...) key-name \" home = \"/home/ USER-NAME /\" shell = \" /usr/bin/bash \" groups = [ \"users\", \"wheel\" ] uid = NUMBER gid = NUMBER", "python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'", "[[customizations.group]] name = \" GROUP-NAME \" gid = NUMBER", "composer-cli blueprints push BLUEPRINT-NAME.toml", "composer-cli blueprints show BLUEPRINT-NAME", "exit" ]
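The following is a minimal end-to-end sketch of the CLI workflow described above, assuming a hypothetical blueprint named example-blueprint; substitute your own blueprint name. Every command is taken from the listing above, only the placeholder name is new.
# Run these inside the web console Terminal tab, in a root shell (sudo bash).
# Export the blueprint to example-blueprint.toml in the current directory:
composer-cli blueprints save example-blueprint

# Generate the password hash to paste into the [[customizations.user]] block:
python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass("Confirm: ")) else exit())'

# Edit example-blueprint.toml: add the user and group blocks shown above and
# increase the version number, then push the file back and verify the result:
vim example-blueprint.toml
composer-cli blueprints push example-blueprint.toml
composer-cli blueprints show example-blueprint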
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter4-section_4
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we could make it better. For simple comments on specific passages: Ensure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of the text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. To submit feedback via Bugzilla, create a new ticket: Go to the Bugzilla website. As the Component, use Documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/release_notes/providing-feedback-on-red-hat-documentation_satellite
16.2. Supported Active Directory Versions
16.2. Supported Active Directory Versions See the corresponding section for the latest Directory Server version in the Red Hat Directory Server Release Notes .
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/supported-ad
Chapter 47. limit
Chapter 47. limit This chapter describes the commands under the limit command. 47.1. limit create Create a limit Usage: Table 47.1. Positional arguments Value Summary <resource-name> The name of the resource to limit Table 47.2. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description of the limit --region <region> Region for the limit to affect. --project <project> Project to associate the resource limit to --service <service> Service responsible for the resource to limit --resource-limit <resource-limit> The resource limit for the project to assume Table 47.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 47.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 47.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 47.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 47.2. limit delete Delete a limit Usage: Table 47.7. Positional arguments Value Summary <limit-id> Limit to delete (id) Table 47.8. Command arguments Value Summary -h, --help Show this help message and exit 47.3. limit list List limits Usage: Table 47.9. Command arguments Value Summary -h, --help Show this help message and exit --service <service> Service responsible for the resource to limit --resource-name <resource-name> The name of the resource to limit --region <region> Region for the registered limit to affect. --project <project> List resource limits associated with project Table 47.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 47.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 47.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 47.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 47.4. limit set Update information about a limit Usage: Table 47.14. Positional arguments Value Summary <limit-id> Limit to update (id) Table 47.15.
Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description of the limit --resource-limit <resource-limit> The resource limit for the project to assume Table 47.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 47.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 47.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 47.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 47.5. limit show Display limit details Usage: Table 47.20. Positional arguments Value Summary <limit-id> Limit to display (id) Table 47.21. Command arguments Value Summary -h, --help Show this help message and exit Table 47.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 47.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 47.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 47.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack limit create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--region <region>] --project <project> --service <service> --resource-limit <resource-limit> <resource-name>", "openstack limit delete [-h] <limit-id> [<limit-id> ...]", "openstack limit list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--service <service>] [--resource-name <resource-name>] [--region <region>] [--project <project>]", "openstack limit set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--resource-limit <resource-limit>] <limit-id>", "openstack limit show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <limit-id>" ]
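As a hedged worked example of the commands documented in this chapter: the project name demo-project, service name nova, and resource name instances below are hypothetical placeholders rather than values guaranteed to exist in your deployment, and <limit-id> stands for the ID returned by the create or list output.
# Create a project limit of 10 instances, then list, inspect, update, and delete it.
openstack limit create --project demo-project --service nova --resource-limit 10 instances
openstack limit list --project demo-project
openstack limit show <limit-id>
openstack limit set --resource-limit 20 <limit-id>
openstack limit delete <limit-id>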
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/limit