Chapter 2. Using VS Code Debug Adapter for Apache Camel extension
Chapter 2. Using VS Code Debug Adapter for Apache Camel extension Important The VS Code extensions for Apache Camel are listed as development support. For more information about the scope of development support, see Development Support Scope of Coverage. This Visual Studio Code extension adds Camel debugging support by attaching to a running Camel route written in Java, YAML, or XML DSL. 2.1. Features of Debug Adapter The VS Code Debug Adapter for Apache Camel extension supports the following features: Camel Main mode for XML only. Using the Camel debugger by attaching it to a running Camel route written in Java, YAML, or XML using the JMX URL. Using the Camel debugger locally by attaching it to a running Camel route written in Java, YAML, or XML using the PID. You can use it for a single Camel context. Adding and removing breakpoints. Conditional breakpoints using the Simple language. Inspecting variable values on suspended breakpoints. Resuming a single route instance or all route instances. Stepping when the route definition is in the same file. Updating variables in the Debugger scope, in the message body, in a message header of type String, and in an exchange property of type String. Supports the command Run Camel Application with JBang and Debug . This command allows a one-click start and Camel debug in simple cases. This command is available through: Command Palette. It requires a valid Camel file opened in the current editor. Contextual menu in File explorer. It is visible for all *.xml , *.java , *.yaml , and *.yml files. Codelens at the top of a Camel file (the heuristic for the codelens is checking that there is a from and a to or a log on java , xml , and yaml files). Supports the command Run Camel application with JBang . It requires a valid Camel file defined in YAML DSL (.yaml|.yml) opened in the editor. Configuration snippets for Camel debugger launch configuration. Configuration snippets to launch a Camel application ready to accept a Camel debugger connection using JBang, or Maven with the Camel Maven plugin. 2.2. Requirements Consider the following points when using the VS Code Debug Adapter for Apache Camel extension: Java Runtime Environment 11 or later with com.sun.tools.attach.VirtualMachine (available in most JVMs such as Hotspot and OpenJDK) must be installed. The Camel instance to debug must meet these requirements: Camel 3.16 or later. camel-debug on the classpath. JMX enabled. Note For some features, JBang must be available on the system command line. 2.3. Installing VS Code Debug Adapter for Apache Camel You can download the VS Code Debug Adapter for Apache Camel extension from the VS Code Extension Marketplace and the Open VSX Registry. You can also install the Debug Adapter for Apache Camel extension directly in Microsoft VS Code. Procedure Open the VS Code editor. In the VS Code editor, select View > Extensions . In the search bar, type Camel Debug . Select the Debug Adapter for Apache Camel option from the search results and then click Install. This installs the Debug Adapter for Apache Camel in the VS Code editor. 2.4. Using Debug Adapter The following procedure explains how to debug a Camel application using the debug adapter. Procedure Ensure that the jbang binary is available on the system command line. Open a Camel route that can be started with Camel JBang.
Open the Command Palette using the keys Ctrl + Alt + P , and select the Run Camel Application with JBang and Debug command, or click the Camel Debug with JBang codelens that appears at the top of the file. Wait until the route is started and the debugger is connected. Set a breakpoint on the Camel route. Debug. Additional resources Debug Adapter for Apache Camel by Red Hat
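For reference, a typical attach-style launch configuration for the Camel debugger looks like the following launch.json sketch. The debug type apache.camel and the attach_jmx_url attribute are assumptions based on the features listed above; check the configuration snippets shipped with your version of the extension for the exact attribute names.

{
    "version": "0.2.0",
    "configurations": [
        {
            // Attach the Camel debugger to an already running route over JMX
            // (assumed attribute names; adjust to the snippet generated by your extension)
            "type": "apache.camel",
            "request": "attach",
            "name": "Attach Camel Debugger",
            "attach_jmx_url": "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi"
        }
    ]
}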
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_user_guide/csb-vscode-debug-adapter-extension
2.2.7.5. Configuring Postfix to Use SASL
2.2.7.5. Configuring Postfix to Use SASL The Red Hat Enterprise Linux version of Postfix can use the Dovecot or Cyrus SASL implementations for SMTP Authentication (or SMTP AUTH ). SMTP Authentication is an extension of the Simple Mail Transfer Protocol . When enabled, SMTP clients are required to authenticate to the SMTP server using an authentication method supported and accepted by both the server and the client. This section describes how to configure Postfix to make use of the Dovecot SASL implementation. To install the Dovecot POP / IMAP server, and thus make the Dovecot SASL implementation available on your system, issue the following command as the root user: The Postfix SMTP server can communicate with the Dovecot SASL implementation using either a UNIX-domain socket or a TCP socket . The latter method is only needed in case the Postfix and Dovecot applications are running on separate machines. This guide gives preference to the UNIX-domain socket method, which affords better privacy. In order to instruct Postfix to use the Dovecot SASL implementation, a number of configuration changes need to be performed for both applications. Follow the procedures below to effect these changes. Setting Up Dovecot Modify the main Dovecot configuration file, /etc/dovecot/conf.d/10-master.conf , to include the following lines (the default configuration file already includes most of the relevant section, and the lines just need to be uncommented): The above example assumes the use of UNIX-domain sockets for communication between Postfix and Dovecot . It also assumes default settings of the Postfix SMTP server, which include the mail queue located in the /var/spool/postfix/ directory, and the application running under the postfix user and group. In this way, read and write permissions are limited to the postfix user and group. Alternatively, you can use the following configuration to set up Dovecot to listen for Postfix authentication requests via TCP : In the above example, replace 12345 with the number of the port you want to use. Edit the /etc/dovecot/conf.d/10-auth.conf configuration file to instruct Dovecot to provide the Postfix SMTP server with the plain and login authentication mechanisms: Setting Up Postfix In the case of Postfix , only the main configuration file, /etc/postfix/main.cf , needs to be modified. Add or edit the following configuration directives: Enable SMTP Authentication in the Postfix SMTP server: Instruct Postfix to use the Dovecot SASL implementation for SMTP Authentication: Provide the authentication path relative to the Postfix queue directory (note that the use of a relative path ensures that the configuration works regardless of whether the Postfix server runs in a chroot or not): This step assumes that you want to use UNIX-domain sockets for communication between Postfix and Dovecot . To configure Postfix to look for Dovecot on a different machine in case you use TCP sockets for communication, use configuration values similar to the following: In the above example, 127.0.0.1 needs to be substituted by the IP address of the Dovecot machine and 12345 by the port specified in Dovecot 's /etc/dovecot/conf.d/10-master.conf configuration file. Specify SASL mechanisms that the Postfix SMTP server makes available to clients. Note that different mechanisms can be specified for encrypted and unencrypted sessions. 
The above example specifies that during unencrypted sessions, no anonymous authentication is allowed and no mechanisms that transmit unencrypted usernames or passwords are allowed. For encrypted sessions (using TLS ), only non-anonymous authentication mechanisms are allowed. See http://www.postfix.org/SASL_README.html#smtpd_sasl_security_options for a list of all supported policies for limiting allowed SASL mechanisms. Additional Resources The following online resources provide additional information useful for configuring Postfix SMTP Authentication through SASL . http://wiki2.dovecot.org/HowTo/PostfixAndDovecotSASL - Contains information on how to set up Postfix to use the Dovecot SASL implementation for SMTP Authentication. http://www.postfix.org/SASL_README.html#server_sasl - Contains information on how to set up Postfix to use either the Dovecot or Cyrus SASL implementations for SMTP Authentication.
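Taken together, the Postfix directives quoted in this section amount to the following /etc/postfix/main.cf fragment for the UNIX-domain socket setup; the values simply restate the individual snippets shown above.

# SMTP AUTH via the Dovecot SASL implementation over a UNIX-domain socket
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
# Unencrypted sessions: no anonymous or plaintext mechanisms
smtpd_sasl_security_options = noanonymous, noplaintext
# TLS sessions: only non-anonymous mechanisms
smtpd_sasl_tls_security_options = noanonymous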
[ "~]# yum install dovecot", "service auth { unix_listener /var/spool/postfix/private/auth { mode = 0660 user = postfix group = postfix } }", "service auth { inet_listener { port = 12345 } }", "auth_mechanisms = plain login", "smtpd_sasl_auth_enable = yes", "smtpd_sasl_type = dovecot", "smtpd_sasl_path = private/auth", "smtpd_sasl_path = inet: 127.0.0.1 : 12345", "smtpd_sasl_security_options = noanonymous, noplaintext smtpd_sasl_tls_security_options = noanonymous" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-securing_postfix-configuring_postfix_to_use_sasl
Chapter 9. Tuning Database Link Performance
Chapter 9. Tuning Database Link Performance Database link performance can be improved through changes to the Directory Server's connection and thread management. 9.1. Managing Connections to the Remote Server Each database link maintains a pool of connections to a remote server. This section describes how to optimize them. 9.1.1. Managing Connections to the Remote Server Using the Command Line This section describes how to update the settings for a specific database, as well as the default settings. 9.1.1.1. Updating the Database Link Connection Management Settings for a Specific Database To update the database link connection management settings for a specific database: Use the following command to update a setting for a database link: For a list of parameters you can set, enter: Restart the Directory Server instance: 9.1.1.2. Updating the Default Database Link Connection Management Settings To update the default database link connection management settings, use the following command: For a list of parameters you can set, enter: 9.1.2. Managing Connections to the Remote Server Using the Web Console This section describes how to update the settings for a specific database, as well as the default settings. 9.1.2.1. Updating the Database Link Connection Management Settings for a Specific Database To update the database link connection management settings for a specific database: Open the Directory Server user interface in the web console. For details, see the Logging Into Directory Server Using the Web Console section in the Red Hat Directory Server Administration Guide . Select the instance. On the Database tab, select the database link configuration you want to update. Click Show Advanced Settings . Update the fields in the advanced settings area: To display a tooltip and the corresponding attribute name in the cn=config entry for a parameter, hover the mouse cursor over the setting. For further details, see the parameter's description in the Red Hat Directory Server Configuration, Command, and File Reference. Click Save Configuration . Click the Actions button, and select Restart Instance . 9.1.2.2. Updating the Default Database Link Connection Management Settings To update the default database link connection management settings: Open the Directory Server user interface in the web console. For details, see the Logging Into Directory Server Using the Web Console section in the Red Hat Directory Server Administration Guide . Select the instance. On the Database tab, select Chaining Configuration . Update the fields in the Default Database Link Creation Settings area: To display a tooltip and the corresponding attribute name in the cn=config entry for a parameter, hover the mouse cursor over the setting. For further details, see the parameter's description in the Red Hat Directory Server Configuration, Command, and File Reference. Click Save Default Settings . Click the Actions button, and select Restart Instance .
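As a concrete command-line illustration of the link-set command shown earlier, a single connection-management setting could be updated and applied as follows. The link name and the --bind-limit option are hypothetical placeholders; run the link-set --help command listed above to see the exact parameter names supported by your version of dsconf.

# Hypothetical example: raise the connection limit on the database link "example_link"
dsconf -D "cn=Directory Manager" ldap://server.example.com chaining link-set --bind-limit 40 example_link

# Apply the change by restarting the instance
dsctl instance_name restart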
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining link-set parameter = value link_name", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining link-set --help", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining config-set-def parameter = value", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining config-set-def --help" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/creating_and_maintaining_database_links-advanced_feature_tuning_database_link_performance
Chapter 3. Technology Preview and Deprecated Features
Chapter 3. Technology Preview and Deprecated Features 3.1. Technology Preview Features Important Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information see Red Hat Technology Preview Features Support Scope . The following table describes features available as Technology Previews in Red Hat Virtualization. Table 3.1. Technology Preview Features Technology Preview Feature Details NoVNC console option Option for opening a virtual machine console in the browser using HTML5. Websocket proxy Allows users to connect to virtual machines through a noVNC console. VDSM hook for nested virtualization Allows a virtual machine to serve as a host. Import Debian and Ubuntu virtual machines from VMware and RHEL 5 Xen Allows virt-v2v to convert Debian and Ubuntu virtual machines from VMware or RHEL 5 Xen to KVM. Known Issues: virt-v2v cannot change the default kernel in the GRUB2 configuration. The kernel configured on the guest operating system is not changed during the conversion, even if a more optimal version is available. After converting a Debian or Ubuntu virtual machine from VMware to KVM, the name of the virtual machine's network interface may change, and will need to be configured manually Open vSwitch cluster type support Adds Open vSwitch networking capabilities. moVirt Mobile Android app for Red Hat Virtualization. Shared and local storage in the same data center Allows the creation of single-brick Gluster volumes to enable local storage to be used as a storage domain in shared data centers. Cinderlib Integration Leverage CinderLib library to use Cinder-supported storage drivers in Red Hat Virtualization without a Full Cinder-OpenStack deployment. Adds support for Ceph storage along with Fibre Channel and iSCSI storage. The Cinder volume has multipath support on the Red Hat Virtualization Host. Intel Q35 Chipset Adds support for the Q35 machine type. Q35 is PCIe-enabled and can use UEFI (OVMF) BIOS and legacy BIOS (SeaBIOS). SSO with OpenID Connect Adds support for external OpenID Connect authentication using Keycloak in both the user interface and with the REST API. oVirt Engine Backup Adds support to back up and restore Red Hat Virtualization Manager with the Ansible ovirt-engine-backup role.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/release_notes/tech_preview_and_deprecated_features
Chapter 1. Understanding zone failure
Chapter 1. Understanding zone failure For the purpose of this section, zone failure is considered a failure where all OpenShift Container Platform master and worker nodes in a zone are no longer communicating with the resources in the second data zone (for example, powered-down nodes). If communication between the data zones is still partially working (intermittently up or down), the cluster, storage, and network administrators should disconnect the communication path between the data zones for recovery to succeed.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/recovering_a_metro-dr_stretch_cluster/understanding-zone-failure
Chapter 3. Configuring IP Networking
Chapter 3. Configuring IP Networking As a system administrator, you can configure a network interface either using NetworkManager or not. 3.1. Selecting Network Configuration Methods To configure a network interface using NetworkManager , use one of the following tools: the text user interface tool, nmtui . For more details, see Section 3.2, "Configuring IP Networking with nmtui" . the command-line tool, nmcli . For more details, see Section 3.3, "Configuring IP Networking with nmcli" . the graphical user interface tools, GNOME GUI . For more details, see Section 3.4, " Configuring IP Networking with GNOME GUI " . To configure a network interface without using NetworkManager : edit the ifcfg files manually. For more details, see Section 3.5, "Configuring IP Networking with ifcfg Files" . use the ip commands. This can be used to assign IP addresses to an interface, but changes are not persistent across reboots; when you reboot, you will lose any changes. For more details, see Section 3.6, "Configuring IP Networking with ip Commands" . To configure the network settings when the root filesystem is not local: use the kernel command-line. For more details, see Section 3.7, "Configuring IP Networking from the Kernel Command line" .
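As a quick illustration of the difference between persistent and non-persistent configuration, the same static address could be set with NetworkManager's nmcli tool or with the ip command; the interface name and addresses below are examples only.

# Persistent: create and activate a NetworkManager connection profile
nmcli connection add type ethernet con-name static-enp0s3 ifname enp0s3 ipv4.method manual ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1
nmcli connection up static-enp0s3

# Non-persistent: assign the address directly; it is lost after a reboot
ip address add 192.0.2.10/24 dev enp0s3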
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/ch-configuring_ip_networking
Chapter 5. Provisioning Environments
Chapter 5. Provisioning Environments Table 5.1. Provisioning Environments Subcommand Description and tasks domain Create a domain: subnet org loc Add a subnet: compute-resource org loc Create a compute resource: medium Add an installation medium: partition-table Add a partition table: template Add a provisioning template: os Add an operating system:
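For instance, filling in the placeholders from the command templates below, a domain and an installation medium could be created as follows; the name and path are illustrative values only.

# Create a DNS domain for provisioning
hammer domain create --name example.com

# Register an installation medium pointing at a local mirror
hammer medium create --name "Example Mirror" --path http://download.example.com/pub/os/releases/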
[ "hammer domain create --name domain_name", "hammer subnet create --name subnet_name --organization-ids org_ID1,... --location-ids loc_ID1,... --domain-ids dom_ID1,... --boot-mode boot_mode --network network_address --mask netmask --ipam ipam", "hammer compute-resource create --name cr_name --organization-ids org_ID1,... --location-ids loc_ID1,... --provider provider_name", "hammer medium create --name med_name --path path_to_medium", "hammer partition-table create --name tab_name --path path_to_file --os-family os_family", "hammer template create --name tmp_name --file path_to_template", "hammer os create --name os_name --version version_num" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/hammer_cheat_sheet/provisioning_environments
25.5. Types of Certificates
25.5. Types of Certificates If you installed your secure server from the RPM package provided by Red Hat, a random key and a test certificate are generated and put into the appropriate directories. Before you begin using your secure server, however, you must generate your own key and obtain a certificate which correctly identifies your server. You need a key and a certificate to operate your secure server - which means that you can either generate a self-signed certificate or purchase a CA-signed certificate from a CA. What are the differences between the two? A CA-signed certificate provides two important capabilities for your server: Browsers (usually) automatically recognize the certificate and allow a secure connection to be made, without prompting the user. When a CA issues a signed certificate, they are guaranteeing the identity of the organization that is providing the webpages to the browser. If your secure server is being accessed by the public at large, your secure server needs a certificate signed by a CA so that people who visit your website know that the website is owned by the organization who claims to own it. Before signing a certificate, a CA verifies that the organization requesting the certificate was actually who they claimed to be. Most Web browsers that support SSL have a list of CAs whose certificates they automatically accept. If a browser encounters a certificate whose authorizing CA is not in the list, the browser asks the user to either accept or decline the connection. You can generate a self-signed certificate for your secure server, but be aware that a self-signed certificate does not provide the same functionality as a CA-signed certificate. A self-signed certificate is not automatically recognized by most Web browsers and does not provide any guarantee concerning the identity of the organization that is providing the website. A CA-signed certificate provides both of these important capabilities for a secure server. If your secure server is to be used in a production environment, a CA-signed certificate is recommended. The process of getting a certificate from a CA is fairly easy. A quick overview is as follows: Create an encryption private and public key pair. Create a certificate request based on the public key. The certificate request contains information about your server and the company hosting it. Send the certificate request, along with documents proving your identity, to a CA. Red Hat does not make recommendations on which certificate authority to choose. Your decision may be based on your past experiences, on the experiences of your friends or colleagues, or purely on monetary factors. Once you have decided upon a CA, you need to follow the instructions they provide on how to obtain a certificate from them. When the CA is satisfied that you are indeed who you claim to be, they provide you with a digital certificate. Install this certificate on your secure server and begin handling secure transactions. Whether you are getting a certificate from a CA or generating your own self-signed certificate, the first step is to generate a key. Refer to Section 25.6, "Generating a Key" for instructions.
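For illustration, generating a key pair and a certificate signing request (CSR) with OpenSSL generally looks like the following; the file names are examples, and the procedure recommended for your secure server is the one described in Section 25.6, "Generating a Key".

# Generate an RSA private key
openssl genrsa -out server.key 2048

# Create a certificate request (CSR) from that key; this is what you send to the CA
openssl req -new -key server.key -out server.csr

# Alternatively, create a self-signed test certificate valid for one year
openssl req -new -x509 -key server.key -out server.crt -days 365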
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/apache_http_secure_server_configuration-types_of_certificates
Chapter 11. Installation and Booting
Chapter 11. Installation and Booting A new network-scripts option: IFDOWN_ON_SHUTDOWN This update adds the IFDOWN_ON_SHUTDOWN option for network-scripts . Setting this option to yes , true , or leaving it empty has no effect. If you set this option to no , or false , it causes the ifdown calls to not be issued when stopping or restarting the network service. This can be useful in situations where NFS (or other network file system) mounts are in a stale state, because the network was shut down before the mount was cleanly unmounted. (BZ# 1583677 ) Improved content of error messages in network-scripts The network-scripts now display more verbose error messages when the installation of bonding drivers fails. (BZ#1542514) Booting from an iSCSI device that is not configured using iBFT is now supported This update provides a new installer boot option inst.nonibftiscsiboot that supports the installation of boot loader on an iSCSI device that has not been configured in the iSCSI Boot Firmware Table (iBFT). This update helps when the iBFT is not used for booting the installed system from an iSCSI device, for example, an iPXE boot from SAN features is used instead. The new installer boot option allows you to install the boot loader on an iSCSI device that is not automatically added as part of the iBFT configuration but is manually added using the iscsi Kickstart command or the installer GUI. (BZ# 1562301 ) Installing and booting from NVDIMM devices is now supported Prior to this update, Nonvolatile Dual Inline Memory Module (NVDIMM) devices in any mode were ignored by the installer. With this update, kernel improvements to support NVDIMM devices provide improved system performance capabilities and enhanced file system access for write-intensive applications like database or analytic workloads, as well as reduced CPU overhead. This update introduces support for: The use of NVDIMM devices for installation using the nvdimm Kickstart command and the GUI, making it possible to install and boot from NVDIMM devices in sector mode and reconfigure NVDIMM devices into sector mode during installation. The extension of Kickstart scripts for Anaconda with commands for handling NVDIMM devices. The ability of grub2 , efibootmgr , and efivar system components to handle and boot from NVDIMM devices. (BZ# 1612965 , BZ#1280500, BZ#1590319, BZ#1558942) The --noghost option has been added to the rpm -V command This update adds the --noghost option to the rpm -V command. If used with this option, rpm -V verifies only the non-ghost files that were altered, which helps diagnose system problems. (BZ# 1395818 )
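For example, the IFDOWN_ON_SHUTDOWN and --noghost features described above could be used as follows; placing the option in /etc/sysconfig/network is an assumption based on how other network-scripts options are set, so verify it against your configuration.

# /etc/sysconfig/network - do not bring interfaces down when the network service stops,
# which avoids hangs on stale NFS mounts
IFDOWN_ON_SHUTDOWN=no

# Verify only the altered non-ghost files of a package
rpm -V --noghost bash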
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/new_features_installation_and_booting
23.8. Enforcing a Specific Authentication Indicator When Obtaining a Ticket from the KDC
23.8. Enforcing a Specific Authentication Indicator When Obtaining a Ticket from the KDC To enforce a specific authentication indicator on: A host object, execute: A Kerberos service, execute: To set multiple authentication indicators, specify the --auth-ind parameter multiple times. Warning Setting an authentication indicator on the HTTP/ IdM_master service causes the IdM master to fail. Additionally, the utilities provided by IdM do not enable you to restore the master. Example 23.2. Enforcing the pkinit Indicator on a Specific Host The following command ensures that only users authenticated through a smart card can obtain a service ticket for the host.idm.example.com host: The setting above ensures that the ticket-granting ticket (TGT) of a user requesting a Kerberos ticket contains the pkinit authentication indicator.
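As noted above, multiple indicators can be enforced by repeating the --auth-ind option. For example, to accept either smart-card (pkinit) or two-factor (otp) authentication for a host, where the host name is illustrative:

ipa host-mod host.idm.example.com --auth-ind=pkinit --auth-ind=otp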
[ "ipa host-mod host_name --auth-ind= indicator", "ipa service-mod service / host_name --auth-ind= indicator", "ipa host-mod host.idm.example.com --auth-ind=pkinit" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/enforcing-a-specific-authentication-indicator-when-obtaining-a-ticket-from-the-kdc
Chapter 1. Building applications overview
Chapter 1. Building applications overview Using OpenShift Container Platform, you can create, edit, delete, and manage applications using the web console or command line interface (CLI). 1.1. Working on a project Using projects, you can organize and manage applications in isolation. You can manage the entire project lifecycle, including creating, viewing, and deleting a project in OpenShift Container Platform. After you create the project, you can grant or revoke access to a project and manage cluster roles for the users using the Developer perspective. You can also edit the project configuration resource while creating a project template that is used for automatic provisioning of new projects. Using the CLI, you can create a project as a different user by impersonating a request to the OpenShift Container Platform API. When you make a request to create a new project, the OpenShift Container Platform uses an endpoint to provision the project according to a customizable template. As a cluster administrator, you can choose to prevent an authenticated user group from self-provisioning new projects . 1.2. Working on an application 1.2.1. Creating an application To create applications, you must have created a project or have access to a project with the appropriate roles and permissions. You can create an application by using either the Developer perspective in the web console , installed Operators , or the OpenShift CLI ( oc ) . You can source the applications to be added to the project from Git, JAR files, devfiles, or the developer catalog. You can also use components that include source or binary code, images, and templates to create an application by using the OpenShift CLI ( oc ). With the OpenShift Container Platform web console, you can create an application from an Operator installed by a cluster administrator. 1.2.2. Maintaining an application After you create the application, you can use the web console to monitor your project or application metrics . You can also edit or delete the application using the web console. When the application is running, not all applications resources are used. As a cluster administrator, you can choose to idle these scalable resources to reduce resource consumption. 1.2.3. Deploying an application You can deploy your application using Deployment or DeploymentConfig objects and manage them from the web console. You can create deployment strategies that help reduce downtime during a change or an upgrade to the application. You can also use Helm , a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. 1.3. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace where you can discover and access certified software for container-based environments that run on public clouds and on-premises.
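As a brief command-line illustration of the project and application workflow described above, the following oc commands create a project and build an application from a Git repository; the project name and repository URL are placeholder values.

# Create a project to hold the application
oc new-project my-demo-project

# Create an application from a Git repository using the source-to-image workflow
oc new-app https://github.com/sclorg/nodejs-ex.git --name=nodejs-sample

# Check the build, deployment, and service that were created
oc status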
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/building_applications/building-applications-overview
Chapter 4. Installing a cluster on vSphere using the Assisted Installer
Chapter 4. Installing a cluster on vSphere using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing OpenShift Container Platform by using the Assisted Installer supports x86_64 , AArch64 , ppc64le , and s390x CPU architectures. The Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console. The Assisted Installer supports the various deployment platforms with a focus on the following infrastructures: Bare metal Nutanix vSphere 4.1. Additional resources Installing OpenShift Container Platform with the Assisted Installer
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_vsphere/installing-vsphere-assisted-installer
Chapter 2. Clair concepts
Chapter 2. Clair concepts The following sections provide a conceptual overview of how Clair works. 2.1. Clair in practice A Clair analysis is broken down into three distinct parts: indexing, matching, and notification. 2.1.1. Indexing Clair's indexer service plays a crucial role in understanding the makeup of a container image. In Clair, container image representations are called "manifests." Manifests are used to comprehend the contents of the image's layers. To streamline this process, Clair takes advantage of the fact that Open Container Initiative (OCI) manifests and layers are designed for content addressing, reducing repetitive tasks. During indexing, a manifest that represents a container image is taken and broken down into its essential components. The indexer's job is to uncover the image's contained packages, its origin distribution, and the package repositories it relies on. This valuable information is then recorded and stored within Clair's database. The insights gathered during indexing serve as the basis for generating a comprehensive vulnerability report. The resulting IndexReport is stored in Clair's database and can be fed to a matcher node to compute the vulnerability report, helping users make informed decisions about their container images' security. 2.1.2. Matching With Clair, a matcher node is responsible for matching vulnerabilities to a provided index report. Matchers are also responsible for keeping the database of vulnerabilities up to date. Matchers run a set of updaters, which periodically probe their data sources for new content; new vulnerabilities are stored in the database when they are discovered. The matcher API is designed to be used often and to always provide the most recent VulnerabilityReport when queried. The VulnerabilityReport summarizes both a manifest's content and any vulnerabilities affecting the content. 2.1.3. Notifier service Clair uses a notifier service that keeps track of new security database updates and informs users if new or removed vulnerabilities affect an indexed manifest. When the notifier becomes aware of new vulnerabilities affecting a previously indexed manifest, it uses the configured methods in your config.yaml file to issue notifications about the new changes. Returned notifications express the most severe vulnerability discovered because of the change. This avoids creating excessive notifications for the same security database update. When a user receives a notification, they can issue a new request against the matcher to receive an up-to-date vulnerability report. You can subscribe to notifications through the following mechanisms: Webhook delivery AMQP delivery STOMP delivery Configuring the notifier is done through the Clair YAML configuration file. 2.2. Clair authentication In its current iteration, Clair v4 (Clair) handles authentication internally. Note Previous versions of Clair used JWT Proxy to gate authentication. Authentication is configured by specifying configuration objects underneath the auth key of the configuration. Multiple authentication configurations might be present, but they are used preferentially in the following order: PSK.
With this authentication configuration, Clair implements JWT-based authentication using a pre-shared key. Configuration. For example: auth: psk: key: >- MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg== iss: 'issuer' In this configuration the auth field requires two parameters: iss , which is the issuer to validate all incoming requests, and key , which is a base64 coded symmetric key for validating the requests. 2.3. Clair updaters Clair uses Go packages called updaters that contain the logic of fetching and parsing different vulnerability databases. Updaters are usually paired with a matcher to interpret if, and how, any vulnerability is related to a package. Administrators might want to update the vulnerability database less frequently, or not import vulnerabilities from databases that they know will not be used. 2.4. Information about Clair updaters The following table provides details about each Clair updater, including the configuration parameter, a brief description, relevant URLs, and the associated components that they interact with. This list is not exhaustive, and some servers might issue redirects, while certain request URLs are dynamically constructed to ensure accurate vulnerability data retrieval. For Clair, each updater is responsible for fetching and parsing vulnerability data related to a specific package type or distribution. For example, the Debian updater focuses on Debian-based Linux distributions, while the AWS updater focuses on vulnerabilities specific to Amazon Web Services' Linux distributions. Understanding the package type is important for vulnerability management because different package types might have unique security concerns and require specific updates and patches. Note If you are using a proxy server in your environment with Clair's updater URLs, you must identify which URL needs to be added to the proxy allowlist to ensure that Clair can access them unimpeded. Use the following table to add updater URLs to your proxy allowlist. Table 2.1. Clair updater information Updater Description URLs Component alpine The Alpine updater is responsible for fetching and parsing vulnerability data related to packages in Alpine Linux distributions. https://secdb.alpinelinux.org/ Alpine Linux SecDB database aws The AWS updater is focused on AWS Linux-based packages, ensuring that vulnerability information specific to Amazon Web Services' custom Linux distributions is kept up-to-date. http://repo.us-west-2.amazonaws.com/2018.03/updates/x86_64/mirror.list https://cdn.amazonlinux.com/2/core/latest/x86_64/mirror.list https://cdn.amazonlinux.com/al2023/core/mirrors/latest/x86_64/mirror.list Amazon Web Services (AWS) UpdateInfo debian The Debian updater is essential for tracking vulnerabilities in packages associated with Debian-based Linux distributions. https://deb.debian.org/ https://security-tracker.debian.org/tracker/data/json Debian Security Tracker clair.cvss The Clair Common Vulnerability Scoring System (CVSS) updater focuses on maintaining data about vulnerabilities and their associated CVSS scores. This is not tied to a specific package type but rather to the severity and risk assessment of vulnerabilities in general. https://nvd.nist.gov/feeds/json/cve/1.1/ National Vulnerability Database (NVD) feed for Common Vulnerabilities and Exposures (CVE) data in JSON format oracle The Oracle updater is dedicated to Oracle Linux packages, maintaining data on vulnerabilities that affect Oracle Linux systems. 
https://linux.oracle.com/security/oval/com.oracle.elsa-*.xml.bz2 Oracle Oval database photon The Photon updater deals with packages in VMware Photon OS. https://packages.vmware.com/photon/photon_oval_definitions/ VMware Photon OS oval definitions rhel The Red Hat Enterprise Linux (RHEL) updater is responsible for maintaining vulnerability data for packages in Red Hat's Enterprise Linux distribution. https://access.redhat.com/security/cve/ https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST Red Hat Enterprise Linux (RHEL) Oval database rhcc The Red Hat Container Catalog (RHCC) updater is connected to Red Hat's container images. This updater ensures that vulnerability information related to Red Hat's containerized software is kept current. https://access.redhat.com/security/data/metrics/cvemap.xml Resource Handler Configuration Controller (RHCC) database suse The SUSE updater manages vulnerability information for packages in the SUSE Linux distribution family, including openSUSE, SUSE Enterprise Linux, and others. https://support.novell.com/security/oval/ SUSE Oval database ubuntu The Ubuntu updater is dedicated to tracking vulnerabilities in packages associated with Ubuntu-based Linux distributions. Ubuntu is a popular distribution in the Linux ecosystem. https://security-metadata.canonical.com/oval/com.ubuntu.*.cve.oval.xml https://api.launchpad.net/1.0/ Ubuntu Oval Database osv The Open Source Vulnerability (OSV) updater specializes in tracking vulnerabilities within open source software components. OSV is a critical resource that provides detailed information about security issues found in various open source projects. https://osv-vulnerabilities.storage.googleapis.com/ Open Source Vulnerabilities database 2.5. Configuring updaters Updaters can be configured by the updaters.sets key in your clair-config.yaml file. Important If the sets field is not populated, it defaults to using all sets. In using all sets, Clair tries to reach the URL or URLs of each updater. If you are using a proxy environment, you must add these URLs to your proxy allowlist. If updaters are being run automatically within the matcher process, which is the default setting, the period for running updaters is configured under the matcher's configuration field. 2.5.1. Selecting specific updater sets Use the following references to select one, or multiple, updaters for your Red Hat Quay deployment. Configuring Clair for multiple updaters Multiple specific updaters #... updaters: sets: - alpine - aws - osv #... Configuring Clair for Alpine Alpine config.yaml example #... updaters: sets: - alpine #... Configuring Clair for AWS AWS config.yaml example #... updaters: sets: - aws #... Configuring Clair for Debian Debian config.yaml example #... updaters: sets: - debian #... Configuring Clair for Clair CVSS Clair CVSS config.yaml example #... updaters: sets: - clair.cvss #... Configuring Clair for Oracle Oracle config.yaml example #... updaters: sets: - oracle #... Configuring Clair for Photon Photon config.yaml example #... updaters: sets: - photon #... Configuring Clair for SUSE SUSE config.yaml example #... updaters: sets: - suse #... Configuring Clair for Ubuntu Ubuntu config.yaml example #... updaters: sets: - ubuntu #... Configuring Clair for OSV OSV config.yaml example #... updaters: sets: - osv #... 2.5.2. Selecting updater sets for full Red Hat Enterprise Linux (RHEL) coverage For full coverage of vulnerabilities in Red Hat Enterprise Linux (RHEL), you must use the following updater sets: rhel . 
This updater ensures that you have the latest information on the vulnerabilities that affect RHEL. rhcc . This updater keeps track of vulnerabilities related to Red Hat's container images. clair.cvss . This updater offers a comprehensive view of the severity and risk assessment of vulnerabilities by providing Common Vulnerabilities and Exposures (CVE) scores. osv . This updater focuses on tracking vulnerabilities in open-source software components. This updater is recommended due to how common the use of Java and Go are in RHEL products. RHEL updaters example #... updaters: sets: - rhel - rhcc - clair.cvss - osv #... 2.5.3. Advanced updater configuration In some cases, users might want to configure updaters for specific behavior, for example, if you want to allowlist specific ecosystems for the Open Source Vulnerabilities (OSV) updaters. Advanced updater configuration might be useful for proxy deployments or air gapped deployments. Configuration for specific updaters in these scenarios can be passed by putting a key underneath the config environment variable of the updaters object. Users should examine their Clair logs to double-check names. The following YAML snippets detail the various settings available to some Clair updaters. Important For most users, advanced updater configuration is unnecessary. Configuring the alpine updater #... updaters: sets: - alpine config: alpine: url: https://secdb.alpinelinux.org/ #... Configuring the debian updater #... updaters: sets: - debian config: debian: mirror_url: https://deb.debian.org/ json_url: https://security-tracker.debian.org/tracker/data/json #... Configuring the clair.cvss updater #... updaters: config: clair.cvss: url: https://nvd.nist.gov/feeds/json/cve/1.1/ #... Configuring the oracle updater #... updaters: sets: - oracle config: oracle-2023-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2023.xml.bz2 oracle-2022-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2022.xml.bz2 #... Configuring the photon updater #... updaters: sets: - photon config: photon: url: https://packages.vmware.com/photon/photon_oval_definitions/ #... Configuring the rhel updater #... updaters: sets: - rhel config: rhel: url: https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST ignore_unpatched: true 1 #... 1 Boolean. Whether to include information about vulnerabilities that do not have corresponding patches or updates available. Configuring the rhcc updater #... updaters: sets: - rhcc config: rhcc: url: https://access.redhat.com/security/data/metrics/cvemap.xml #... Configuring the suse updater #... updaters: sets: - suse config: suse: url: https://support.novell.com/security/oval/ #... Configuring the ubuntu updater #... updaters: config: ubuntu: url: https://api.launchpad.net/1.0/ name: ubuntu force: 1 - name: focal 2 version: 20.04 3 #... 1 Used to force the inclusion of specific distribution and version details in the resulting UpdaterSet, regardless of their status in the API response. Useful when you want to ensure that particular distributions and versions are consistently included in your updater configuration. 2 Specifies the distribution name that you want to force to be included in the UpdaterSet. 3 Specifies the version of the distribution you want to force into the UpdaterSet. Configuring the osv updater #... updaters: sets: - osv config: osv: url: https://osv-vulnerabilities.storage.googleapis.com/ allowlist: 1 - npm - pypi #... 1 The list of ecosystems to allow.
When left unset, all ecosystems are allowed. Must be lowercase. For a list of supported ecosystems, see the documentation for defined ecosystems . 2.5.4. Disabling the Clair Updater component In some scenarios, users might want to disable the Clair updater component. Disabling updaters is required when running Red Hat Quay in a disconnected environment. In the following example, Clair updaters are disabled: #... matcher: disable_updaters: true #... 2.6. CVE ratings from the National Vulnerability Database As of Clair v4.2, Common Vulnerability Scoring System (CVSS) enrichment data is now viewable in the Red Hat Quay UI. Additionally, Clair v4.2 adds CVSS scores from the National Vulnerability Database for detected vulnerabilities. With this change, if the vulnerability has a CVSS score that is within 2 levels of the distribution score, the Red Hat Quay UI presents the distribution's score by default. This differs from the behavior of the previous interface. 2.7. Federal Information Processing Standard (FIPS) readiness and compliance The Federal Information Processing Standard (FIPS) developed by the National Institute of Standards and Technology (NIST) is a highly regarded standard for securing and encrypting sensitive data, notably in highly regulated areas such as banking, healthcare, and the public sector. Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform support FIPS by providing a FIPS mode , in which the system only allows usage of specific FIPS-validated cryptographic modules like openssl . This ensures FIPS compliance. 2.7.1. Enabling FIPS compliance Use the following procedure to enable FIPS compliance on your Red Hat Quay deployment. Prerequisites If you are running a standalone deployment of Red Hat Quay, your Red Hat Enterprise Linux (RHEL) deployment is version 8 or later and FIPS-enabled. If you are deploying Red Hat Quay on OpenShift Container Platform, OpenShift Container Platform is version 4.10 or later. Your Red Hat Quay version is 3.5.0 or later. If you are using Red Hat Quay on OpenShift Container Platform on an IBM Power or IBM Z cluster: OpenShift Container Platform version 4.14 or later is required. Red Hat Quay version 3.10 or later is required. You have administrative privileges for your Red Hat Quay deployment. Procedure In your Red Hat Quay config.yaml file, set the FEATURE_FIPS configuration field to true . For example: --- FEATURE_FIPS = true --- With FEATURE_FIPS set to true , Red Hat Quay runs using FIPS-compliant hash functions.
[ "auth: psk: key: >- MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg== iss: 'issuer'", "# updaters: sets: - alpine - aws - osv #", "# updaters: sets: - alpine #", "# updaters: sets: - aws #", "# updaters: sets: - debian #", "# updaters: sets: - clair.cvss #", "# updaters: sets: - oracle #", "# updaters: sets: - photon #", "# updaters: sets: - suse #", "# updaters: sets: - ubuntu #", "# updaters: sets: - osv #", "# updaters: sets: - rhel - rhcc - clair.cvss - osv #", "# updaters: sets: - apline config: alpine: url: https://secdb.alpinelinux.org/ #", "# updaters: sets: - debian config: debian: mirror_url: https://deb.debian.org/ json_url: https://security-tracker.debian.org/tracker/data/json #", "# updaters: config: clair.cvss: url: https://nvd.nist.gov/feeds/json/cve/1.1/ #", "# updaters: sets: - oracle config: oracle-2023-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2023.xml.bz2 oracle-2022-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2022.xml.bz2 #", "# updaters: sets: - photon config: photon: url: https://packages.vmware.com/photon/photon_oval_definitions/ #", "# updaters: sets: - rhel config: rhel: url: https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST ignore_unpatched: true 1 #", "# updaters: sets: - rhcc config: rhcc: url: https://access.redhat.com/security/data/metrics/cvemap.xml #", "# updaters: sets: - suse config: suse: url: https://support.novell.com/security/oval/ #", "# updaters: config: ubuntu: url: https://api.launchpad.net/1.0/ name: ubuntu force: 1 - name: focal 2 version: 20.04 3 #", "# updaters: sets: - osv config: osv: url: https://osv-vulnerabilities.storage.googleapis.com/ allowlist: 1 - npm - pypi #", "# matcher: disable_updaters: true #", "--- FEATURE_FIPS = true ---" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-concepts
1.2. KVM Performance Architecture Overview
1.2. KVM Performance Architecture Overview The following points provide a brief overview of KVM as it pertains to system performance, as well as process and thread management: When using KVM, guests run as Linux processes on the host. Virtual CPUs (vCPUs) are implemented as normal threads, handled by the Linux scheduler. Guests do not automatically inherit features such as NUMA and huge pages from the kernel. Disk and network I/O settings in the host have a significant performance impact. Network traffic typically travels through a software-based bridge. Depending on the devices and their models, there might be significant overhead due to emulation of that particular hardware.
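For example, because vCPUs are ordinary host threads and guests do not inherit NUMA placement automatically, CPU pinning and NUMA policy are typically set explicitly. The commands below use virsh with an illustrative guest name; enabling huge pages would additionally require a memoryBacking element in the guest's domain XML.

# Pin vCPU 0 of the guest "rhel7-guest" to host physical CPU 1
virsh vcpupin rhel7-guest 0 1

# Restrict the guest's memory allocation to host NUMA node 0, both live and persistently
virsh numatune rhel7-guest --nodeset 0 --live --config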
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-introduction-kvm_architecture_overview
Chapter 3. Controlling pod placement onto nodes (scheduling)
Chapter 3. Controlling pod placement onto nodes (scheduling) 3.1. Controlling pod placement using the scheduler Pod scheduling is an internal process that determines placement of new pods onto nodes within the cluster. The scheduler code has a clean separation that watches new pods as they get created and identifies the most suitable node to host them. It then creates bindings (pod to node bindings) for the pods using the master API. Default pod scheduling OpenShift Container Platform comes with a default scheduler that serves the needs of most users. The default scheduler uses both inherent and customization tools to determine the best fit for a pod. Advanced pod scheduling In situations where you might want more control over where new pods are placed, the OpenShift Container Platform advanced scheduling features allow you to configure a pod so that the pod is required or has a preference to run on a particular node or alongside a specific pod by. Using pod affinity and anti-affinity rules . Controlling pod placement with pod affinity . Controlling pod placement with node affinity . Placing pods on overcomitted nodes . Controlling pod placement with node selectors . Controlling pod placement with taints and tolerations . 3.1.1. Scheduler Use Cases One of the important use cases for scheduling within OpenShift Container Platform is to support flexible affinity and anti-affinity policies. 3.1.1.1. Infrastructure Topological Levels Administrators can define multiple topological levels for their infrastructure (nodes) by specifying labels on nodes. For example: region=r1 , zone=z1 , rack=s1 . These label names have no particular meaning and administrators are free to name their infrastructure levels anything, such as city/building/room. Also, administrators can define any number of levels for their infrastructure topology, with three levels usually being adequate (such as: regions zones racks ). Administrators can specify affinity and anti-affinity rules at each of these levels in any combination. 3.1.1.2. Affinity Administrators should be able to configure the scheduler to specify affinity at any topological level, or even at multiple levels. Affinity at a particular level indicates that all pods that belong to the same service are scheduled onto nodes that belong to the same level. This handles any latency requirements of applications by allowing administrators to ensure that peer pods do not end up being too geographically separated. If no node is available within the same affinity group to host the pod, then the pod is not scheduled. If you need greater control over where the pods are scheduled, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules . These advanced scheduling features allow administrators to specify which node a pod can be scheduled on and to force or reject scheduling relative to other pods. 3.1.1.3. Anti-Affinity Administrators should be able to configure the scheduler to specify anti-affinity at any topological level, or even at multiple levels. Anti-affinity (or 'spread') at a particular level indicates that all pods that belong to the same service are spread across nodes that belong to that level. This ensures that the application is well spread for high availability purposes. The scheduler tries to balance the service pods across all applicable nodes as evenly as possible. 
If you need greater control over where the pods are scheduled, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules . These advanced scheduling features allow administrators to specify which node a pod can be scheduled on and to force or reject scheduling relative to other pods. 3.2. Configuring the default scheduler to control pod placement The default OpenShift Container Platform pod scheduler is responsible for determining placement of new pods onto nodes within the cluster. It reads data from the pod and tries to find a node that is a good fit based on configured policies. It is completely independent and exists as a standalone/pluggable solution. It does not modify the pod and just creates a binding for the pod that ties the pod to the particular node. Important Configuring a scheduler policy is deprecated and is planned for removal in a future release. For more information on the alternative, see Scheduling pods using a scheduler profile . A selection of predicates and priorities defines the policy for the scheduler. See Modifying scheduler policy for a list of predicates and priorities. Sample default scheduler object apiVersion: config.openshift.io/v1 kind: Scheduler metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: 2019-05-20T15:39:01Z generation: 1 name: cluster resourceVersion: "1491" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: 6435dd99-7b15-11e9-bd48-0aec821b8e34 spec: policy: 1 name: scheduler-policy defaultNodeSelector: type=user-node,region=east 2 1 You can specify the name of a custom scheduler policy file. 2 Optional: Specify a default node selector to restrict pod placement to specific nodes. The default node selector is applied to the pods created in all namespaces. Pods can be scheduled on nodes with labels that match the default node selector and any existing pod node selectors. Namespaces having project-wide node selectors are not impacted even if this field is set. 3.2.1. Understanding default scheduling The existing generic scheduler is the default platform-provided scheduler engine that selects a node to host the pod in a three-step operation: Filters the Nodes The available nodes are filtered based on the constraints or requirements specified. This is done by running each node through the list of filter functions called predicates . Prioritize the Filtered List of Nodes This is achieved by passing each node through a series of priority_ functions that assign it a score between 0 - 10, with 0 indicating a bad fit and 10 indicating a good fit to host the pod. The scheduler configuration can also take in a simple weight (positive numeric value) for each priority function. The node score provided by each priority function is multiplied by the weight (default weight for most priorities is 1) and then combined by adding the scores for each node provided by all the priorities. This weight attribute can be used by administrators to give higher importance to some priorities. Select the Best Fit Node The nodes are sorted based on their scores and the node with the highest score is selected to host the pod. If multiple nodes have the same high score, then one of them is selected at random. 3.2.1.1. Understanding Scheduler Policy The selection of the predicate and priorities defines the policy for the scheduler. 
The scheduler configuration file is a JSON file, which must be named policy.cfg , that specifies the predicates and priorities the scheduler will consider. In the absence of the scheduler policy file, the default scheduler behavior is used. Important The predicates and priorities defined in the scheduler configuration file completely override the default scheduler policy. If any of the default predicates and priorities are required, you must explicitly specify the functions in the policy configuration. Sample scheduler config map apiVersion: v1 data: policy.cfg: | { "kind" : "Policy", "apiVersion" : "v1", "predicates" : [ {"name" : "MaxGCEPDVolumeCount"}, {"name" : "GeneralPredicates"}, 1 {"name" : "MaxAzureDiskVolumeCount"}, {"name" : "MaxCSIVolumeCountPred"}, {"name" : "CheckVolumeBinding"}, {"name" : "MaxEBSVolumeCount"}, {"name" : "MatchInterPodAffinity"}, {"name" : "CheckNodeUnschedulable"}, {"name" : "NoDiskConflict"}, {"name" : "NoVolumeZoneConflict"}, {"name" : "PodToleratesNodeTaints"} ], "priorities" : [ {"name" : "LeastRequestedPriority", "weight" : 1}, {"name" : "BalancedResourceAllocation", "weight" : 1}, {"name" : "ServiceSpreadingPriority", "weight" : 1}, {"name" : "NodePreferAvoidPodsPriority", "weight" : 1}, {"name" : "NodeAffinityPriority", "weight" : 1}, {"name" : "TaintTolerationPriority", "weight" : 1}, {"name" : "ImageLocalityPriority", "weight" : 1}, {"name" : "SelectorSpreadPriority", "weight" : 1}, {"name" : "InterPodAffinityPriority", "weight" : 1}, {"name" : "EqualPriority", "weight" : 1} ] } kind: ConfigMap metadata: creationTimestamp: "2019-09-17T08:42:33Z" name: scheduler-policy namespace: openshift-config resourceVersion: "59500" selfLink: /api/v1/namespaces/openshift-config/configmaps/scheduler-policy uid: 17ee8865-d927-11e9-b213-02d1e1709840` 1 The GeneralPredicates predicate represents the PodFitsResources , HostName , PodFitsHostPorts , and MatchNodeSelector predicates. Because you are not allowed to configure the same predicate multiple times, the GeneralPredicates predicate cannot be used alongside any of the four represented predicates. 3.2.2. Creating a scheduler policy file You can change the default scheduling behavior by creating a JSON file with the desired predicates and priorities. You then generate a config map from the JSON file and point the cluster Scheduler object to use the config map. Procedure To configure the scheduler policy: Create a JSON file named policy.cfg with the desired predicates and priorities. Sample scheduler JSON file { "kind" : "Policy", "apiVersion" : "v1", "predicates" : [ 1 {"name" : "MaxGCEPDVolumeCount"}, {"name" : "GeneralPredicates"}, {"name" : "MaxAzureDiskVolumeCount"}, {"name" : "MaxCSIVolumeCountPred"}, {"name" : "CheckVolumeBinding"}, {"name" : "MaxEBSVolumeCount"}, {"name" : "MatchInterPodAffinity"}, {"name" : "CheckNodeUnschedulable"}, {"name" : "NoDiskConflict"}, {"name" : "NoVolumeZoneConflict"}, {"name" : "PodToleratesNodeTaints"} ], "priorities" : [ 2 {"name" : "LeastRequestedPriority", "weight" : 1}, {"name" : "BalancedResourceAllocation", "weight" : 1}, {"name" : "ServiceSpreadingPriority", "weight" : 1}, {"name" : "NodePreferAvoidPodsPriority", "weight" : 1}, {"name" : "NodeAffinityPriority", "weight" : 1}, {"name" : "TaintTolerationPriority", "weight" : 1}, {"name" : "ImageLocalityPriority", "weight" : 1}, {"name" : "SelectorSpreadPriority", "weight" : 1}, {"name" : "InterPodAffinityPriority", "weight" : 1}, {"name" : "EqualPriority", "weight" : 1} ] } 1 Add the predicates as needed. 
2 Add the priorities as needed. Create a config map based on the scheduler JSON file: USD oc create configmap -n openshift-config --from-file=policy.cfg <configmap-name> 1 1 Enter a name for the config map. For example: USD oc create configmap -n openshift-config --from-file=policy.cfg scheduler-policy Example output configmap/scheduler-policy created Tip You can alternatively apply the following YAML to create the config map: kind: ConfigMap apiVersion: v1 metadata: name: scheduler-policy namespace: openshift-config data: 1 policy.cfg: | { "kind": "Policy", "apiVersion": "v1", "predicates": [ { "name": "RequireRegion", "argument": { "labelPreference": {"label": "region"}, {"presence": true} } } ], "priorities": [ { "name":"ZonePreferred", "weight" : 1, "argument": { "labelPreference": {"label": "zone"}, {"presence": true} } } ] } 1 The policy.cfg file in JSON format with predicates and priorities. Edit the Scheduler Operator custom resource to add the config map: USD oc patch Scheduler cluster --type='merge' -p '{"spec":{"policy":{"name":"<configmap-name>"}}}' --type=merge 1 1 Specify the name of the config map. For example: USD oc patch Scheduler cluster --type='merge' -p '{"spec":{"policy":{"name":"scheduler-policy"}}}' --type=merge Tip You can alternatively apply the following YAML to add the config map: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: mastersSchedulable: false policy: name: scheduler-policy 1 1 Add the name of the scheduler policy config map. After making the change to the Scheduler config resource, wait for the openshift-kube-apiserver pods to redeploy. This can take several minutes. Until the pods redeploy, new scheduler does not take effect. Verify the scheduler policy is configured by viewing the log of a scheduler pod in the openshift-kube-scheduler namespace. The following command checks for the predicates and priorities that are being registered by the scheduler: USD oc logs <scheduler-pod> | grep predicates For example: USD oc logs openshift-kube-scheduler-ip-10-0-141-29.ec2.internal | grep predicates Example output Creating scheduler with fit predicates 'map[MaxGCEPDVolumeCount:{} MaxAzureDiskVolumeCount:{} CheckNodeUnschedulable:{} NoDiskConflict:{} NoVolumeZoneConflict:{} GeneralPredicates:{} MaxCSIVolumeCountPred:{} CheckVolumeBinding:{} MaxEBSVolumeCount:{} MatchInterPodAffinity:{} PodToleratesNodeTaints:{}]' and priority functions 'map[InterPodAffinityPriority:{} LeastRequestedPriority:{} ServiceSpreadingPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} EqualPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{}]' 3.2.3. Modifying scheduler policies You change scheduling behavior by creating or editing your scheduler policy config map in the openshift-config project. Add and remove predicates and priorities to the config map to create a scheduler policy . 
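For example, assuming the config map is named scheduler-policy as in the earlier examples, you can review its current contents before making changes to see which predicates and priorities are active:

USD oc get configmap scheduler-policy -n openshift-config -o yaml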
Procedure To modify the current custom scheduling, use one of the following methods: Edit the scheduler policy config map: USD oc edit configmap <configmap-name> -n openshift-config For example: USD oc edit configmap scheduler-policy -n openshift-config Example output apiVersion: v1 data: policy.cfg: | { "kind" : "Policy", "apiVersion" : "v1", "predicates" : [ 1 {"name" : "MaxGCEPDVolumeCount"}, {"name" : "GeneralPredicates"}, {"name" : "MaxAzureDiskVolumeCount"}, {"name" : "MaxCSIVolumeCountPred"}, {"name" : "CheckVolumeBinding"}, {"name" : "MaxEBSVolumeCount"}, {"name" : "MatchInterPodAffinity"}, {"name" : "CheckNodeUnschedulable"}, {"name" : "NoDiskConflict"}, {"name" : "NoVolumeZoneConflict"}, {"name" : "PodToleratesNodeTaints"} ], "priorities" : [ 2 {"name" : "LeastRequestedPriority", "weight" : 1}, {"name" : "BalancedResourceAllocation", "weight" : 1}, {"name" : "ServiceSpreadingPriority", "weight" : 1}, {"name" : "NodePreferAvoidPodsPriority", "weight" : 1}, {"name" : "NodeAffinityPriority", "weight" : 1}, {"name" : "TaintTolerationPriority", "weight" : 1}, {"name" : "ImageLocalityPriority", "weight" : 1}, {"name" : "SelectorSpreadPriority", "weight" : 1}, {"name" : "InterPodAffinityPriority", "weight" : 1}, {"name" : "EqualPriority", "weight" : 1} ] } kind: ConfigMap metadata: creationTimestamp: "2019-09-17T17:44:19Z" name: scheduler-policy namespace: openshift-config resourceVersion: "15370" selfLink: /api/v1/namespaces/openshift-config/configmaps/scheduler-policy 1 Add or remove predicates as needed. 2 Add, remove, or change the weight of predicates as needed. It can take a few minutes for the scheduler to restart the pods with the updated policy. Change the policies and predicates being used: Remove the scheduler policy config map: USD oc delete configmap -n openshift-config <name> For example: USD oc delete configmap -n openshift-config scheduler-policy Edit the policy.cfg file to add and remove policies and predicates as needed. For example: USD vi policy.cfg Example output apiVersion: v1 data: policy.cfg: | { "kind" : "Policy", "apiVersion" : "v1", "predicates" : [ {"name" : "MaxGCEPDVolumeCount"}, {"name" : "GeneralPredicates"}, {"name" : "MaxAzureDiskVolumeCount"}, {"name" : "MaxCSIVolumeCountPred"}, {"name" : "CheckVolumeBinding"}, {"name" : "MaxEBSVolumeCount"}, {"name" : "MatchInterPodAffinity"}, {"name" : "CheckNodeUnschedulable"}, {"name" : "NoDiskConflict"}, {"name" : "NoVolumeZoneConflict"}, {"name" : "PodToleratesNodeTaints"} ], "priorities" : [ {"name" : "LeastRequestedPriority", "weight" : 1}, {"name" : "BalancedResourceAllocation", "weight" : 1}, {"name" : "ServiceSpreadingPriority", "weight" : 1}, {"name" : "NodePreferAvoidPodsPriority", "weight" : 1}, {"name" : "NodeAffinityPriority", "weight" : 1}, {"name" : "TaintTolerationPriority", "weight" : 1}, {"name" : "ImageLocalityPriority", "weight" : 1}, {"name" : "SelectorSpreadPriority", "weight" : 1}, {"name" : "InterPodAffinityPriority", "weight" : 1}, {"name" : "EqualPriority", "weight" : 1} ] } Re-create the scheduler policy config map based on the scheduler JSON file: USD oc create configmap -n openshift-config --from-file=policy.cfg <configmap-name> 1 1 Enter a name for the config map. For example: USD oc create configmap -n openshift-config --from-file=policy.cfg scheduler-policy Example output configmap/scheduler-policy created Example 3.1. 
Sample config map based on the scheduler JSON file kind: ConfigMap apiVersion: v1 metadata: name: scheduler-policy namespace: openshift-config data: policy.cfg: | { "kind": "Policy", "apiVersion": "v1", "predicates": [ { "name": "RequireRegion", "argument": { "labelPreference": {"label": "region"}, {"presence": true} } } ], "priorities": [ { "name":"ZonePreferred", "weight" : 1, "argument": { "labelPreference": {"label": "zone"}, {"presence": true} } } ] } 3.2.3.1. Understanding the scheduler predicates Predicates are rules that filter out unqualified nodes. There are several predicates provided by default in OpenShift Container Platform. Some of these predicates can be customized by providing certain parameters. Multiple predicates can be combined to provide additional filtering of nodes. 3.2.3.1.1. Static Predicates These predicates do not take any configuration parameters or inputs from the user. These are specified in the scheduler configuration using their exact name. 3.2.3.1.1.1. Default Predicates The default scheduler policy includes the following predicates: The NoVolumeZoneConflict predicate checks that the volumes a pod requests are available in the zone. {"name" : "NoVolumeZoneConflict"} The MaxEBSVolumeCount predicate checks the maximum number of volumes that can be attached to an AWS instance. {"name" : "MaxEBSVolumeCount"} The MaxAzureDiskVolumeCount predicate checks the maximum number of Azure Disk Volumes. {"name" : "MaxAzureDiskVolumeCount"} The PodToleratesNodeTaints predicate checks if a pod can tolerate the node taints. {"name" : "PodToleratesNodeTaints"} The CheckNodeUnschedulable predicate checks if a pod can be scheduled on a node with Unschedulable spec. {"name" : "CheckNodeUnschedulable"} The CheckVolumeBinding predicate evaluates if a pod can fit based on the volumes, it requests, for both bound and unbound PVCs. For PVCs that are bound, the predicate checks that the corresponding PV's node affinity is satisfied by the given node. For PVCs that are unbound, the predicate searched for available PVs that can satisfy the PVC requirements and that the PV node affinity is satisfied by the given node. The predicate returns true if all bound PVCs have compatible PVs with the node, and if all unbound PVCs can be matched with an available and node-compatible PV. {"name" : "CheckVolumeBinding"} The NoDiskConflict predicate checks if the volume requested by a pod is available. {"name" : "NoDiskConflict"} The MaxGCEPDVolumeCount predicate checks the maximum number of Google Compute Engine (GCE) Persistent Disks (PD). {"name" : "MaxGCEPDVolumeCount"} The MaxCSIVolumeCountPred predicate determines how many Container Storage Interface (CSI) volumes should be attached to a node and whether that number exceeds a configured limit. {"name" : "MaxCSIVolumeCountPred"} The MatchInterPodAffinity predicate checks if the pod affinity/anti-affinity rules permit the pod. {"name" : "MatchInterPodAffinity"} 3.2.3.1.1.2. Other Static Predicates OpenShift Container Platform also supports the following predicates: Note The CheckNode-* predicates cannot be used if the Taint Nodes By Condition feature is enabled. The Taint Nodes By Condition feature is enabled by default. The CheckNodeCondition predicate checks if a pod can be scheduled on a node reporting out of disk , network unavailable , or not ready conditions. {"name" : "CheckNodeCondition"} The CheckNodeLabelPresence predicate checks if all of the specified labels exist on a node, regardless of their value. 
{"name" : "CheckNodeLabelPresence"} The checkServiceAffinity predicate checks that ServiceAffinity labels are homogeneous for pods that are scheduled on a node. {"name" : "checkServiceAffinity"} The PodToleratesNodeNoExecuteTaints predicate checks if a pod tolerations can tolerate a node NoExecute taints. {"name" : "PodToleratesNodeNoExecuteTaints"} 3.2.3.1.2. General Predicates The following general predicates check whether non-critical predicates and essential predicates pass. Non-critical predicates are the predicates that only non-critical pods must pass and essential predicates are the predicates that all pods must pass. The default scheduler policy includes the general predicates. Non-critical general predicates The PodFitsResources predicate determines a fit based on resource availability (CPU, memory, GPU, and so forth). The nodes can declare their resource capacities and then pods can specify what resources they require. Fit is based on requested, rather than used resources. {"name" : "PodFitsResources"} Essential general predicates The PodFitsHostPorts predicate determines if a node has free ports for the requested pod ports (absence of port conflicts). {"name" : "PodFitsHostPorts"} The HostName predicate determines fit based on the presence of the Host parameter and a string match with the name of the host. {"name" : "HostName"} The MatchNodeSelector predicate determines fit based on node selector (nodeSelector) queries defined in the pod. {"name" : "MatchNodeSelector"} 3.2.3.2. Understanding the scheduler priorities Priorities are rules that rank nodes according to preferences. A custom set of priorities can be specified to configure the scheduler. There are several priorities provided by default in OpenShift Container Platform. Other priorities can be customized by providing certain parameters. Multiple priorities can be combined and different weights can be given to each to impact the prioritization. 3.2.3.2.1. Static Priorities Static priorities do not take any configuration parameters from the user, except weight. A weight is required to be specified and cannot be 0 or negative. These are specified in the scheduler policy config map in the openshift-config project. 3.2.3.2.1.1. Default Priorities The default scheduler policy includes the following priorities. Each of the priority function has a weight of 1 except NodePreferAvoidPodsPriority , which has a weight of 10000 . The NodeAffinityPriority priority prioritizes nodes according to node affinity scheduling preferences {"name" : "NodeAffinityPriority", "weight" : 1} The TaintTolerationPriority priority prioritizes nodes that have a fewer number of intolerable taints on them for a pod. An intolerable taint is one which has key PreferNoSchedule . {"name" : "TaintTolerationPriority", "weight" : 1} The ImageLocalityPriority priority prioritizes nodes that already have requested pod container's images. {"name" : "ImageLocalityPriority", "weight" : 1} The SelectorSpreadPriority priority looks for services, replication controllers (RC), replication sets (RS), and stateful sets that match the pod, then finds existing pods that match those selectors. The scheduler favors nodes that have fewer existing matching pods. Then, it schedules the pod on a node with the smallest number of pods that match those selectors as the pod being scheduled. 
{"name" : "SelectorSpreadPriority", "weight" : 1} The InterPodAffinityPriority priority computes a sum by iterating through the elements of weightedPodAffinityTerm and adding weight to the sum if the corresponding PodAffinityTerm is satisfied for that node. The node(s) with the highest sum are the most preferred. {"name" : "InterPodAffinityPriority", "weight" : 1} The LeastRequestedPriority priority favors nodes with fewer requested resources. It calculates the percentage of memory and CPU requested by pods scheduled on the node, and prioritizes nodes that have the highest available/remaining capacity. {"name" : "LeastRequestedPriority", "weight" : 1} The BalancedResourceAllocation priority favors nodes with balanced resource usage rate. It calculates the difference between the consumed CPU and memory as a fraction of capacity, and prioritizes the nodes based on how close the two metrics are to each other. This should always be used together with LeastRequestedPriority . {"name" : "BalancedResourceAllocation", "weight" : 1} The NodePreferAvoidPodsPriority priority ignores pods that are owned by a controller other than a replication controller. {"name" : "NodePreferAvoidPodsPriority", "weight" : 10000} 3.2.3.2.1.2. Other Static Priorities OpenShift Container Platform also supports the following priorities: The EqualPriority priority gives an equal weight of 1 to all nodes, if no priority configurations are provided. We recommend using this priority only for testing environments. {"name" : "EqualPriority", "weight" : 1} The MostRequestedPriority priority prioritizes nodes with most requested resources. It calculates the percentage of memory and CPU requested by pods scheduled on the node, and prioritizes based on the maximum of the average of the fraction of requested to capacity. {"name" : "MostRequestedPriority", "weight" : 1} The ServiceSpreadingPriority priority spreads pods by minimizing the number of pods belonging to the same service onto the same machine. {"name" : "ServiceSpreadingPriority", "weight" : 1} 3.2.3.2.2. Configurable Priorities You can configure these priorities in the scheduler policy config map, in the openshift-config namespace, to add labels to affect how the priorities work. The type of the priority function is identified by the argument that they take. Since these are configurable, multiple priorities of the same type (but different configuration parameters) can be combined as long as their user-defined names are different. For information on using these priorities, see Modifying Scheduler Policy. The ServiceAntiAffinity priority takes a label and ensures a good spread of the pods belonging to the same service across the group of nodes based on the label values. It gives the same score to all nodes that have the same value for the specified label. It gives a higher score to nodes within a group with the least concentration of pods. { "kind": "Policy", "apiVersion": "v1", "priorities":[ { "name":"<name>", 1 "weight" : 1 2 "argument":{ "serviceAntiAffinity":{ "label": "<label>" 3 } } } ] } 1 Specify a name for the priority. 2 Specify a weight. Enter a non-zero positive value. 3 Specify a label to match. For example: { "kind": "Policy", "apiVersion": "v1", "priorities": [ { "name":"RackSpread", "weight" : 1, "argument": { "serviceAntiAffinity": { "label": "rack" } } } ] } Note In some situations using the ServiceAntiAffinity parameter based on custom labels does not spread pod as expected. See this Red Hat Solution . 
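For the RackSpread example above to influence scheduling, the nodes must carry the label that the priority references. Assuming a label key of rack, you could label a node as follows; the node name and label value are placeholders:

USD oc label node <node_name> rack=rack1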
The labelPreference parameter gives priority based on the specified label. If the label is present on a node, that node is given priority. If no label is specified, priority is given to nodes that do not have a label. If multiple priorities with the labelPreference parameter are set, all of the priorities must have the same weight. { "kind": "Policy", "apiVersion": "v1", "priorities":[ { "name":"<name>", 1 "weight" : 1 2 "argument":{ "labelPreference":{ "label": "<label>", 3 "presence": true 4 } } } ] } 1 Specify a name for the priority. 2 Specify a weight. Enter a non-zero positive value. 3 Specify a label to match. 4 Specify whether the label is required, either true or false . 3.2.4. Sample Policy Configurations The configuration below specifies the default scheduler configuration, if it were to be specified using the scheduler policy file. { "kind": "Policy", "apiVersion": "v1", "predicates": [ { "name": "RegionZoneAffinity", 1 "argument": { "serviceAffinity": { 2 "labels": ["region, zone"] 3 } } } ], "priorities": [ { "name":"RackSpread", 4 "weight" : 1, "argument": { "serviceAntiAffinity": { 5 "label": "rack" 6 } } } ] } 1 The name for the predicate. 2 The type of predicate. 3 The labels for the predicate. 4 The name for the priority. 5 The type of priority. 6 The labels for the priority. In all of the sample configurations below, the list of predicates and priority functions is truncated to include only the ones that pertain to the use case specified. In practice, a complete/meaningful scheduler policy should include most, if not all, of the default predicates and priorities listed above. The following example defines three topological levels, region (affinity) zone (affinity) rack (anti-affinity): { "kind": "Policy", "apiVersion": "v1", "predicates": [ { "name": "RegionZoneAffinity", "argument": { "serviceAffinity": { "labels": ["region, zone"] } } } ], "priorities": [ { "name":"RackSpread", "weight" : 1, "argument": { "serviceAntiAffinity": { "label": "rack" } } } ] } The following example defines three topological levels, city (affinity) building (anti-affinity) room (anti-affinity): { "kind": "Policy", "apiVersion": "v1", "predicates": [ { "name": "CityAffinity", "argument": { "serviceAffinity": { "label": "city" } } } ], "priorities": [ { "name":"BuildingSpread", "weight" : 1, "argument": { "serviceAntiAffinity": { "label": "building" } } }, { "name":"RoomSpread", "weight" : 1, "argument": { "serviceAntiAffinity": { "label": "room" } } } ] } The following example defines a policy to only use nodes with the 'region' label defined and prefer nodes with the 'zone' label defined: { "kind": "Policy", "apiVersion": "v1", "predicates": [ { "name": "RequireRegion", "argument": { "labelPreference": { "labels": ["region"], "presence": true } } } ], "priorities": [ { "name":"ZonePreferred", "weight" : 1, "argument": { "labelPreference": { "label": "zone", "presence": true } } } ] } The following example combines both static and configurable predicates and also priorities: { "kind": "Policy", "apiVersion": "v1", "predicates": [ { "name": "RegionAffinity", "argument": { "serviceAffinity": { "labels": ["region"] } } }, { "name": "RequireRegion", "argument": { "labelsPresence": { "labels": ["region"], "presence": true } } }, { "name": "BuildingNodesAvoid", "argument": { "labelsPresence": { "label": "building", "presence": false } } }, {"name" : "PodFitsPorts"}, {"name" : "MatchNodeSelector"} ], "priorities": [ { "name": "ZoneSpread", "weight" : 2, "argument": { "serviceAntiAffinity":{ 
"label": "zone" } } }, { "name":"ZonePreferred", "weight" : 1, "argument": { "labelPreference":{ "label": "zone", "presence": true } } }, {"name" : "ServiceSpreadingPriority", "weight" : 1} ] } 3.3. Scheduling pods using a scheduler profile You can configure OpenShift Container Platform to use a scheduling profile to schedule pods onto nodes within the cluster. 3.3.1. About scheduler profiles You can specify a scheduler profile to control how pods are scheduled onto nodes. Note Scheduler profiles are an alternative to configuring a scheduler policy. Do not set both a scheduler policy and a scheduler profile. If both are set, the scheduler policy takes precedence. The following scheduler profiles are available: LowNodeUtilization This profile attempts to spread pods evenly across nodes to get low resource usage per node. This profile provides the default scheduler behavior. HighNodeUtilization This profile attempts to place as many pods as possible on to as few nodes as possible. This minimizes node count and has high resource usage per node. NoScoring This is a low-latency profile that strives for the quickest scheduling cycle by disabling all score plugins. This might sacrifice better scheduling decisions for faster ones. 3.3.2. Configuring a scheduler profile You can configure the scheduler to use a scheduler profile. Note Do not set both a scheduler policy and a scheduler profile. If both are set, the scheduler policy takes precedence. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Edit the Scheduler object: USD oc edit scheduler cluster Specify the profile to use in the spec.profile field: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: ... name: cluster resourceVersion: "601" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: b351d6d0-d06f-4a99-a26b-87af62e79f59 spec: mastersSchedulable: false policy: name: "" profile: HighNodeUtilization 1 1 Set to LowNodeUtilization , HighNodeUtilization , or NoScoring . Save the file to apply the changes. 3.4. Placing pods relative to other pods using affinity and anti-affinity rules Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key/value labels on other pods. 3.4.1. Understanding pod affinity Pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key/value labels on other pods. Pod affinity can tell the scheduler to locate a new pod on the same node as other pods if the label selector on the new pod matches the label on the current pod. Pod anti-affinity can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod. For example, using affinity rules, you could spread or pack pods within a service or relative to pods in other services. Anti-affinity rules allow you to prevent pods of a particular service from scheduling on the same nodes as pods of another service that are known to interfere with the performance of the pods of the first service. Or, you could spread the pods of a service across nodes or availability zones to reduce correlated failures. Note A label selector might match pods with multiple pod deployments. 
Use unique combinations of labels when configuring anti-affinity rules to avoid matching pods. There are two types of pod affinity rules: required and preferred . Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Note Depending on your pod priority and preemption settings, the scheduler might not be able to find an appropriate node for a pod without violating affinity requirements. If so, a pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. You configure pod affinity/anti-affinity through the Pod spec files. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule. The following example shows a Pod spec configured for pod affinity and anti-affinity. In this example, the pod affinity rule indicates that the pod can schedule onto a node only if that node has at least one already-running pod with a label that has the key security and value S1 . The pod anti-affinity rule says that the pod prefers to not schedule onto a node if that node is already running a pod with label having key security and value S2 . Sample Pod config file with pod affinity apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: failure-domain.beta.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod 1 Stanza to configure pod affinity. 2 Defines a required rule. 3 5 The key and value (label) that must be matched to apply the rule. 4 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . Sample Pod config file with pod anti-affinity apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . Note If labels on a node change at runtime such that the affinity rules on a pod are no longer met, the pod continues to run on the node. 3.4.2. Configuring a pod affinity rule The following steps demonstrate a simple two-pod configuration that creates pod with a label and a pod that uses affinity to allow scheduling with that pod. 
Procedure Create a pod with a specific label in the Pod spec: USD cat team4.yaml apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: containers: - name: security-s1 image: docker.io/ocpqe/hello-pod When creating other pods, edit the Pod spec as follows: Use the podAffinity stanza to configure the requiredDuringSchedulingIgnoredDuringExecution parameter or preferredDuringSchedulingIgnoredDuringExecution parameter: Specify the key and value that must be met. If you want the new pod to be scheduled with the other pod, use the same key and value parameters as the label on the first pod. podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - S1 topologyKey: failure-domain.beta.kubernetes.io/zone Specify an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. Specify a topologyKey , which is a prepopulated Kubernetes label that the system uses to denote such a topology domain. Create the pod. USD oc create -f <pod-spec>.yaml 3.4.3. Configuring a pod anti-affinity rule The following steps demonstrate a simple two-pod configuration that creates pod with a label and a pod that uses an anti-affinity preferred rule to attempt to prevent scheduling with that pod. Procedure Create a pod with a specific label in the Pod spec: USD cat team4.yaml apiVersion: v1 kind: Pod metadata: name: security-s2 labels: security: S2 spec: containers: - name: security-s2 image: docker.io/ocpqe/hello-pod When creating other pods, edit the Pod spec to set the following parameters: Use the podAntiAffinity stanza to configure the requiredDuringSchedulingIgnoredDuringExecution parameter or preferredDuringSchedulingIgnoredDuringExecution parameter: Specify a weight for the node, 1-100. The node that with highest weight is preferred. Specify the key and values that must be met. If you want the new pod to not be scheduled with the other pod, use the same key and value parameters as the label on the first pod. podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: security operator: In values: - S2 topologyKey: kubernetes.io/hostname For a preferred rule, specify a weight, 1-100. Specify an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. Specify a topologyKey , which is a prepopulated Kubernetes label that the system uses to denote such a topology domain. Create the pod. USD oc create -f <pod-spec>.yaml 3.4.4. Sample pod affinity and anti-affinity rules The following examples demonstrate pod affinity and pod anti-affinity. 3.4.4.1. Pod Affinity The following example demonstrates pod affinity for pods with matching labels and label selectors. The pod team4 has the label team:4 . USD cat team4.yaml apiVersion: v1 kind: Pod metadata: name: team4 labels: team: "4" spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod The pod team4a has the label selector team:4 under podAffinity . 
USD cat pod-team4a.yaml apiVersion: v1 kind: Pod metadata: name: team4a spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - "4" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod The team4a pod is scheduled on the same node as the team4 pod. 3.4.4.2. Pod Anti-affinity The following example demonstrates pod anti-affinity for pods with matching labels and label selectors. The pod pod-s1 has the label security:s1 . cat pod-s1.yaml apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod The pod pod-s2 has the label selector security:s1 under podAntiAffinity . cat pod-s2.yaml apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod The pod pod-s2 cannot be scheduled on the same node as pod-s1 . 3.4.4.3. Pod Affinity with no Matching Labels The following example demonstrates pod affinity for pods without matching labels and label selectors. The pod pod-s1 has the label security:s1 . USD cat pod-s1.yaml apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod The pod pod-s2 has the label selector security:s2 . USD cat pod-s2.yaml apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod The pod pod-s2 is not scheduled unless there is a node with a pod that has the security:s2 label. If there is no other pod with that label, the new pod remains in a pending state: Example output NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none> 3.5. Controlling pod placement on nodes using node affinity rules Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. In OpenShift Container Platform node affinity is a set of rules used by the scheduler to determine where a pod can be placed. The rules are defined using custom labels on the nodes and label selectors specified in pods. 3.5.1. Understanding node affinity Node affinity allows a pod to specify an affinity towards a group of nodes it can be placed on. The node does not have control over the placement. For example, you could configure a pod to only run on a node with a specific CPU or in a specific availability zone. There are two types of node affinity rules: required and preferred . Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Note If labels on a node change at runtime that results in an node affinity rule on a pod no longer being met, the pod continues to run on the node. You configure node affinity through the Pod spec file. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule. 
The following example is a Pod spec with a rule that requires the pod be placed on a node with a label whose key is e2e-az-NorthSouth and whose value is either e2e-az-North or e2e-az-South : Example pod configuration file with a node affinity required rule apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod 1 The stanza to configure node affinity. 2 Defines a required rule. 3 5 6 The key/value pair (label) that must be matched to apply the rule. 4 The operator represents the relationship between the label on the node and the set of values in the matchExpression parameters in the Pod spec. This value can be In , NotIn , Exists , or DoesNotExist , Lt , or Gt . The following example is a node specification with a preferred rule that a node with a label whose key is e2e-az-EastWest and whose value is either e2e-az-East or e2e-az-West is preferred for the pod: Example pod configuration file with a node affinity preferred rule apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod 1 The stanza to configure node affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with highest weight is preferred. 4 6 7 The key/value pair (label) that must be matched to apply the rule. 5 The operator represents the relationship between the label on the node and the set of values in the matchExpression parameters in the Pod spec. This value can be In , NotIn , Exists , or DoesNotExist , Lt , or Gt . There is no explicit node anti-affinity concept, but using the NotIn or DoesNotExist operator replicates that behavior. Note If you are using node affinity and node selectors in the same pod configuration, note the following: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. 3.5.2. Configuring a required node affinity rule Required rules must be met before a pod can be scheduled on a node. Procedure The following steps demonstrate a simple configuration that creates a node and a pod that the scheduler is required to place on the node. Add a label to a node using the oc label node command: USD oc label node node1 e2e-az-name=e2e-az1 Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 In the Pod spec, use the nodeAffinity stanza to configure the requiredDuringSchedulingIgnoredDuringExecution parameter: Specify the key and values that must be met. If you want the new pod to be scheduled on the node you edited, use the same key and value parameters as the label in the node. Specify an operator . 
The operator can be In , NotIn , Exists , DoesNotExist , Lt , or Gt . For example, use the operator In to require the label to be in the node: Example output spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: e2e-az-name operator: In values: - e2e-az1 - e2e-az2 Create the pod: USD oc create -f e2e-az2.yaml 3.5.3. Configuring a preferred node affinity rule Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Procedure The following steps demonstrate a simple configuration that creates a node and a pod that the scheduler tries to place on the node. Add a label to a node using the oc label node command: USD oc label node node1 e2e-az-name=e2e-az3 In the Pod spec, use the nodeAffinity stanza to configure the preferredDuringSchedulingIgnoredDuringExecution parameter: Specify a weight for the node, as a number 1-100. The node with highest weight is preferred. Specify the key and values that must be met. If you want the new pod to be scheduled on the node you edited, use the same key and value parameters as the label in the node: spec: affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: e2e-az-name operator: In values: - e2e-az3 Specify an operator . The operator can be In , NotIn , Exists , DoesNotExist , Lt , or Gt . For example, use the Operator In to require the label to be in the node. Create the pod. USD oc create -f e2e-az3.yaml 3.5.4. Sample node affinity rules The following examples demonstrate node affinity. 3.5.4.1. Node affinity with matching labels The following example demonstrates node affinity for a node and pod with matching labels: The Node1 node has the label zone:us : USD oc label node node1 zone=us Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us The pod-s1 pod has the zone and us key/value pair under a required node affinity rule: USD cat pod-s1.yaml Example output apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: "zone" operator: In values: - us The pod-s1 pod can be scheduled on Node1: USD oc get pod -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1 3.5.4.2. Node affinity with no matching labels The following example demonstrates node affinity for a node and pod without matching labels: The Node1 node has the label zone:emea : USD oc label node node1 zone=emea Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea The pod-s1 pod has the zone and us key/value pair under a required node affinity rule: USD cat pod-s1.yaml Example output apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: "zone" operator: In values: - us The pod-s1 pod cannot be scheduled on Node1: USD oc describe pod pod-s1 Example output ... 
Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1). 3.5.5. Additional resources For information about changing node labels, see Understanding how to update labels on nodes . 3.6. Placing pods onto overcommitted nodes In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. Overcommitment might be desirable in development environments where a trade-off of guaranteed performance for capacity is acceptable. Requests and limits enable administrators to allow and manage the overcommitment of resources on a node. The scheduler uses requests for scheduling your container and providing a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node. 3.6.1. Understanding overcommitment Requests and limits enable administrators to allow and manage the overcommitment of resources on a node. The scheduler uses requests for scheduling your container and providing a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node. OpenShift Container Platform administrators can control the level of overcommit and manage container density on nodes by configuring masters to override the ratio between request and limit set on developer containers. In conjunction with a per-project LimitRange object specifying limits and defaults, this adjusts the container limit and request to achieve the desired level of overcommit. Note These overrides have no effect if no limits have been set on containers. Create a LimitRange object with default limits, per individual project, or in the project template, to ensure that the overrides apply (a minimal LimitRange sketch follows the kernel-settings note below). After these overrides, the container limits and requests must still be validated by any LimitRange object in the project. It is possible, for example, for developers to specify a limit close to the minimum limit, and have the request then be overridden below the minimum limit, causing the pod to be forbidden. This unfortunate user experience should be addressed with future work, but for now, configure this capability and LimitRange objects with caution. 3.6.2. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority. You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output vm.overcommit_memory = 1 USD sysctl -a |grep panic Example output vm.panic_on_oom = 0 Note The above flags should already be set on nodes, and no further action is required.
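Returning to the per-project LimitRange object mentioned in the overcommitment discussion above, the following is a minimal sketch that sets default limits and requests for containers; the object name, namespace, and values are illustrative only and should be adapted to your project:

apiVersion: v1
kind: LimitRange
metadata:
  name: overcommit-limits
  namespace: <project_name>
spec:
  limits:
  - type: Container         # applies to every container in the project
    default:                # default limits when none are specified
      cpu: 500m
      memory: 512Mi
    defaultRequest:         # default requests when none are specified
      cpu: 100m
      memory: 256Mi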
You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 3.7. Controlling pod placement using node taints Taints and tolerations allow the node to control which pods should (or should not) be scheduled on them. 3.7.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification spec: taints: - effect: NoExecute key: key1 value: value1 .... Example toleration in a Pod spec spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 .... Taints and tolerations consist of a key, value, and effect. Table 3.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c ... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master ... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. 
node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 3.7.1.1. Understanding how to use toleration seconds to delay pod evictions You can specify how long a pod can remain bound to a node before being evicted by specifying the tolerationSeconds parameter in the Pod specification or MachineSet object. If a taint with the NoExecute effect is added to a node, a pod that tolerates the taint and that specifies the tolerationSeconds parameter is not evicted until that time period expires. Example output spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 Here, if this pod is running and a matching taint is added to the node, the pod stays bound to the node for 3,600 seconds and is then evicted. If the taint is removed before that time, the pod is not evicted. 3.7.1.2. Understanding how to use multiple taints You can put multiple taints on the same node and multiple tolerations on the same pod. OpenShift Container Platform processes multiple taints and tolerations as follows: Process the taints for which the pod has a matching toleration. The remaining unmatched taints have the indicated effects on the pod: If there is at least one unmatched taint with effect NoSchedule , OpenShift Container Platform cannot schedule a pod onto that node. If there is no unmatched taint with effect NoSchedule but there is at least one unmatched taint with effect PreferNoSchedule , OpenShift Container Platform tries not to schedule the pod onto the node. If there is at least one unmatched taint with effect NoExecute , OpenShift Container Platform evicts the pod from the node if it is already running on the node, or the pod is not scheduled onto the node if it is not yet running on the node. Pods that do not tolerate the taint are evicted immediately. Pods that tolerate the taint without specifying tolerationSeconds in their Pod specification remain bound forever. Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time. For example: Add the following taints to the node: USD oc adm taint nodes node1 key1=value1:NoSchedule USD oc adm taint nodes node1 key1=value1:NoExecute USD oc adm taint nodes node1 key2=value2:NoSchedule The pod has the following tolerations: spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" In this case, the pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The pod continues running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod. 3.7.1.3. Understanding pod scheduling and node conditions (taint node by condition) The Taint Nodes By Condition feature, which is enabled by default, automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears.
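For example, a node that reports memory pressure might carry a taint such as the following in its specification; this is an illustrative sketch of what the automatically added taint looks like:

spec:
  taints:
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure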
The taints have the NoSchedule effect, which means no pod can be scheduled on the node unless the pod has a matching toleration. The scheduler checks for these taints on nodes before scheduling pods. If the taint is present, the pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate pod tolerations. To ensure backward compatibility, the daemon set controller automatically adds the following tolerations to all daemons: node.kubernetes.io/memory-pressure node.kubernetes.io/disk-pressure node.kubernetes.io/unschedulable (1.10 or later) node.kubernetes.io/network-unavailable (host network only) You can also add arbitrary tolerations to daemon sets. Note The control plane also adds the node.kubernetes.io/memory-pressure toleration on pods that have a QoS class. This is because Kubernetes manages pods in the Guaranteed or Burstable QoS classes. The new BestEffort pods do not get scheduled onto the affected node. 3.7.1.4. Understanding evicting pods by condition (taint-based evictions) The Taint-Based Evictions feature, which is enabled by default, evicts pods from a node that experiences specific conditions, such as not-ready and unreachable . When a node experiences one of these conditions, OpenShift Container Platform automatically adds taints to the node, and starts evicting and rescheduling the pods on different nodes. Taint Based Evictions have a NoExecute effect, where any pod that does not tolerate the taint is evicted immediately and any pod that does tolerate the taint will never be evicted, unless the pod uses the tolerationSeconds parameter. The tolerationSeconds parameter allows you to specify how long a pod stays bound to a node that has a node condition. If the condition still exists after the tolerationSeconds period, the taint remains on the node and the pods with a matching toleration are evicted. If the condition clears before the tolerationSeconds period, pods with matching tolerations are not removed. If you use the tolerationSeconds parameter with no value, pods are never evicted because of the not ready and unreachable node conditions. Note OpenShift Container Platform evicts pods in a rate-limited way to prevent massive pod evictions in scenarios such as the master becoming partitioned from the nodes. By default, if more than 55% of nodes in a given zone are unhealthy, the node lifecycle controller changes that zone's state to PartialDisruption and the rate of pod evictions is reduced. For small clusters (by default, 50 nodes or less) in this state, nodes in this zone are not tainted and evictions are stopped. For more information, see Rate limits on eviction in the Kubernetes documentation. OpenShift Container Platform automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300 , unless the Pod configuration specifies either toleration. spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 1 These tolerations ensure that the default pod behavior is to remain bound for five minutes after one of these node conditions problems is detected. You can configure these tolerations as needed. 
For example, if you have an application with a lot of local state, you might want to keep the pods bound to node for a longer time in the event of network partition, allowing for the partition to recover and avoiding pod eviction. Pods spawned by a daemon set are created with NoExecute tolerations for the following taints with no tolerationSeconds : node.kubernetes.io/unreachable node.kubernetes.io/not-ready As a result, daemon set pods are never evicted because of these node conditions. 3.7.1.5. Tolerating all taints You can configure a pod to tolerate all taints by adding an operator: "Exists" toleration with no key and value parameters. Pods with this toleration are not removed from a node that has taints. Pod spec for tolerating all taints spec: tolerations: - operator: "Exists" 3.7.2. Adding taints and tolerations You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with an Equal operator spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted. For example: Sample pod configuration file with an Exists operator spec: tolerations: - key: "key1" operator: "Exists" 1 effect: "NoExecute" tolerationSeconds: 3600 1 The Exists operator does not take a value . This example places a taint on node1 that has key key1 , value value1 , and taint effect NoExecute . Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 key1=value1:NoExecute This command places a taint on node1 that has key key1 , value value1 , and effect NoExecute . Note If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c ... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master ... The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1 . 3.7.2.1. Adding taints and tolerations using a machine set You can add taints to nodes using a machine set. All nodes associated with the MachineSet object are updated with the taint. Tolerations respond to taints added by a machine set in the same manner as taints added directly to the nodes. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with Equal operator spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 1 The toleration parameters, as described in the Taint and toleration components table. 
2 The tolerationSeconds parameter specifies how long a pod is bound to a node before being evicted. For example: Sample pod configuration file with Exists operator spec: tolerations: - key: "key1" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 Add the taint to the MachineSet object: Edit the MachineSet YAML for the nodes you want to taint or you can create a new MachineSet object: USD oc edit machineset <machineset> Add the taint to the spec.template.spec section: Example taint in a machine set specification spec: .... template: .... spec: taints: - effect: NoExecute key: key1 value: value1 .... This example places a taint that has the key key1 , value value1 , and taint effect NoExecute on the nodes. Scale down the machine set to 0: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0 Wait for the machines to be removed. Scale up the machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object. 3.7.2.2. Binding a user to a node using taints and tolerations If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster. If you want ensure the pods are scheduled to only those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label. Procedure To configure a node so that users can use only that node: Add a corresponding taint to those nodes: For example: USD oc adm taint nodes node1 dedicated=groupName:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: dedicated value: groupName effect: NoSchedule Add a toleration to the pods by writing a custom admission controller. 3.7.2.3. Creating a project with a node selector and toleration You can create a project that uses a node selector and toleration, which are set as annotations, to control the placement of pods onto specific nodes. Any subsequent resources created in the project are then scheduled on nodes that have a taint matching the toleration. Prerequisites A label for node selection has been added to one or more nodes by using a machine set or editing the node directly. A taint has been added to one or more nodes by using a machine set or editing the node directly. Procedure Create a Project resource definition, specifying a node selector and toleration in the metadata.annotations section: Example project.yaml file kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{"operator": "Exists", "effect": "NoSchedule", "key": "<key_name>"} 3 ] 1 The project name. 2 The default node selector label. 3 The toleration parameters, as described in the Taint and toleration components table. 
This example uses the NoSchedule effect, which allows existing pods on the node to remain, and the Exists operator, which does not take a value. Use the oc apply command to create the project: USD oc apply -f project.yaml Any subsequent resources created in the <project_name> namespace should now be scheduled on the specified nodes. Additional resources Adding taints and tolerations manually to nodes or with machine sets Creating project-wide node selectors Pod placement of Operator workloads 3.7.2.4. Controlling nodes with special hardware using taints and tolerations In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes. You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware. Procedure To ensure nodes with specialized hardware are reserved for specific pods: Add a toleration to pods that need the special hardware. For example: spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600 Taint the nodes that have the specialized hardware using one of the following commands: USD oc adm taint nodes <node-name> disktype=ssd:NoSchedule Or: USD oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: disktype value: ssd effect: PreferNoSchedule 3.7.3. Removing taints and tolerations You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure To remove taints and tolerations: To remove a taint from a node: USD oc adm taint nodes <node-name> <key>- For example: USD oc adm taint nodes ip-10-0-132-248.ec2.internal key1- Example output node/ip-10-0-132-248.ec2.internal untainted To remove a toleration from a pod, edit the Pod spec to remove the toleration: spec: tolerations: - key: "key2" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 3.8. Placing pods on specific nodes using node selectors A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node. 3.8.1. About node selectors You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You can use a node selector to place specific pods on specific nodes, cluster-wide node selectors to place new pods on specific nodes anywhere in the cluster, and project node selectors to place new pods in a project on specific nodes. For example, as a cluster administrator, you can create an infrastructure where application developers can deploy pods only onto the nodes closest to their geographical location by including a node selector in every pod they create. In this example, the cluster consists of five data centers spread across two regions. 
In the U.S., label the nodes as us-east , us-central , or us-west . In the Asia-Pacific region (APAC), label the nodes as apac-east or apac-west . The developers can add a node selector to the pods they create to ensure the pods get scheduled on those nodes. A pod is not scheduled if the Pod object contains a node selector, but no node has a matching label. Important If you are using node selectors and node affinity in the same pod configuration, the following rules control pod placement onto nodes: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. Node selectors on specific pods and nodes You can control which node a specific pod is scheduled on by using node selectors and labels. To use node selectors and labels, first label the node to avoid pods being descheduled, then add the node selector to the pod. Note You cannot add a node selector directly to an existing scheduled pod. You must label the object that controls the pod, such as deployment config. For example, the following Node object has the region: east label: Sample Node object with a label kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux failure-domain.beta.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' failure-domain.beta.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos beta.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 beta.kubernetes.io/arch: amd64 region: east 1 type: user-node 1 Labels to match the pod node selector. A pod has the type: user-node,region: east node selector: Sample Pod object with node selectors apiVersion: v1 kind: Pod .... spec: nodeSelector: 1 region: east type: user-node 1 Node selectors to match the node label. The node must have a label for each node selector. When you create the pod using the example pod spec, it can be scheduled on the example node. Default cluster-wide node selectors With default cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. For example, the following Scheduler object has the default cluster-wide region=east and type=user-node node selectors: Example Scheduler Operator Custom Resource apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: type=user-node,region=east ... A node in that cluster has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 ... labels: region: east type: user-node ... Example Pod object with a node selector apiVersion: v1 kind: Pod ... spec: nodeSelector: region: east ... 
When you create the pod using the example pod spec in the example cluster, the pod is created with the cluster-wide node selector and is scheduled on the labeled node: Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> Note If the project where you create the pod has a project node selector, that selector takes preference over a cluster-wide node selector. Your pod is not created or scheduled if the pod does not have the project node selector. Project node selectors With project node selectors, when you create a pod in this project, OpenShift Container Platform adds the node selectors to the pod and schedules the pods on a node with matching labels. If there is a cluster-wide default node selector, a project node selector takes preference. For example, the following project has the region=east node selector: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: "region=east" ... The following node has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 ... labels: region: east type: user-node ... When you create the pod using the example pod spec in this example project, the pod is created with the project node selectors and is scheduled on the labeled node: Example Pod object apiVersion: v1 kind: Pod metadata: namespace: east-region ... spec: nodeSelector: region: east type: user-node ... Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> A pod in the project is not created or scheduled if the pod contains different node selectors. For example, if you deploy the following pod into the example project, it is not created: Example Pod object with an invalid node selector apiVersion: v1 kind: Pod ... spec: nodeSelector: region: west .... 3.8.2. Using node selectors to control pod placement You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the Pod spec. Note You cannot add a node selector directly to an existing scheduled pod. Prerequisites To add a node selector to existing pods, determine the controlling object for that pod.
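One way to identify the controlling object from the command line is to read the pod's ownerReferences field; the pod name below is a placeholder:

USD oc get pod <pod_name> -o jsonpath='{.metadata.ownerReferences}'

The output names the kind and name of the owning object, such as a ReplicaSet object, which is itself typically owned by a Deployment object.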
For example, the router-default-66d5cf9464-m2g75 pod is controlled by the router-default-66d5cf9464 replica set: The web console lists the controlling object under ownerReferences in the pod YAML: Procedure Add labels to a node by using a machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the machine set when a node is created: Run the following command to add labels to a MachineSet object: For example: Tip You can alternatively apply the following YAML to add labels to a machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet .... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node .... Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the node: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.22.1 Add the matching node selector to a pod: To add a node selector to existing and future pods, add a node selector to the controlling object for the pods: Example ReplicaSet object with labels kind: ReplicaSet .... spec: .... template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 1 Add the node selector. To add a node selector to a specific, new pod, add the selector to the Pod object directly: Example Pod object with a node selector apiVersion: v1 kind: Pod .... spec: nodeSelector: region: east type: user-node Note You cannot add a node selector directly to an existing scheduled pod. 3.8.3. Creating default cluster-wide node selectors You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes. With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. Note You can add additional key/value pairs to a pod. But you cannot add a different value for a default key. Procedure To add a default cluster-wide node selector: Edit the Scheduler Operator CR to add the default cluster-wide node selectors: USD oc edit scheduler cluster Example Scheduler Operator CR with a node selector apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... 
spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false policy: name: "" 1 Add a node selector with the appropriate <key>:<value> pairs. After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy. Add labels to a node by using a machine set or editing the node directly: Use a machine set to add labels to nodes managed by the machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api 1 1 Add a <key>/<value> pair for each label. For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node ... Redeploy the nodes associated with that machine set by scaling down to 0 and scaling up the nodes: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.22.1 Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the node using the oc get command: USD oc get nodes -l <key>=<value>,<key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.22.1 3.8.4. Creating project-wide node selectors You can use node selectors in a project together with labels on nodes to constrain all pods created in that project to the labeled nodes. When you create a pod in this project, OpenShift Container Platform adds the node selectors to the pods in the project and schedules the pods on a node with matching labels in the project. If there is a cluster-wide default node selector, a project node selector takes preference. 
You add node selectors to a project by editing the Namespace object to add the openshift.io/node-selector parameter. You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. A pod is not scheduled if the Pod object contains a node selector, but no project has a matching node selector. When you create a pod from that spec, you receive an error similar to the following message: Example error message Error from server (Forbidden): error when creating "pod.yaml": pods "pod-4" is forbidden: pod node label selector conflicts with its project node label selector Note You can add additional key/value pairs to a pod. But you cannot add a different value for a project key. Procedure To add a default project node selector: Create a namespace or edit an existing namespace to add the openshift.io/node-selector parameter: USD oc edit namespace <name> Example output apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "type=user-node,region=east" 1 openshift.io/description: "" openshift.io/display-name: "" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: "2021-05-10T12:35:04Z" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: "145537" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes 1 Add the openshift.io/node-selector with the appropriate <key>:<value> pairs. Add labels to a node by using a machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: ... spec: ... template: metadata: ... 
spec: metadata: labels: region: east type: user-node Redeploy the nodes associated with that machine set: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.22.1 Add labels directly to a node: Edit the Node object to add labels: USD oc label <resource> <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the Node object using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.22.1 Additional resources Creating a project with a node selector and toleration 3.9. Controlling pod placement by using pod topology spread constraints You can use pod topology spread constraints to control the placement of your pods across nodes, zones, regions, or other user-defined topology domains. 3.9.1. About pod topology spread constraints By using a pod topology spread constraint , you provide fine-grained control over the distribution of pods across failure domains to help achieve high availability and more efficient resource utilization. OpenShift Container Platform administrators can label nodes to provide topology information, such as regions, zones, nodes, or other user-defined domains. After these labels are set on nodes, users can then define pod topology spread constraints to control the placement of pods across these topology domains. You specify which pods to group together, which topology domains they are spread among, and the acceptable skew. Only pods within the same namespace are matched and grouped together when spreading due to a constraint. 3.9.2. Configuring pod topology spread constraints The following steps demonstrate how to configure pod topology spread constraints to distribute pods that match the specified labels based on their zone. You can specify multiple pod topology spread constraints, but you must ensure that they do not conflict with each other. All pod topology spread constraints must be satisfied for a pod to be placed. Prerequisites A cluster administrator has added the required labels to nodes. Procedure Create a Pod spec and specify a pod topology spread constraint: Example pod-spec.yaml file apiVersion: v1 kind: Pod metadata: name: my-pod labels: foo: bar spec: topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: foo: bar 5 containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod 1 The maximum difference in number of pods between any two topology domains. The default is 1 , and you cannot specify a value of 0 . 2 The key of a node label. Nodes with this key and identical value are considered to be in the same topology. 
3 How to handle a pod if it does not satisfy the spread constraint. The default is DoNotSchedule , which tells the scheduler not to schedule the pod. Set to ScheduleAnyway to still schedule the pod, but the scheduler prioritizes honoring the skew to not make the cluster more imbalanced. 4 Pods that match this label selector are counted and recognized as a group when spreading to satisfy the constraint. Be sure to specify a label selector, otherwise no pods can be matched. 5 Be sure that this Pod spec also sets its labels to match this label selector if you want it to be counted properly in the future. Create the pod: USD oc create -f pod-spec.yaml 3.9.3. Example pod topology spread constraints The following examples demonstrate pod topology spread constraint configurations. 3.9.3.1. Single pod topology spread constraint example This example Pod spec defines one pod topology spread constraint. It matches on pods labeled foo:bar , distributes among zones, specifies a skew of 1 , and does not schedule the pod if it does not meet these requirements. kind: Pod apiVersion: v1 metadata: name: my-pod labels: foo: bar spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: foo: bar containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod 3.9.3.2. Multiple pod topology spread constraints example This example Pod spec defines two pod topology spread constraints. Both match on pods labeled foo:bar , specify a skew of 1 , and do not schedule the pod if it does not meet these requirements. The first constraint distributes pods based on a user-defined label node , and the second constraint distributes pods based on a user-defined label rack . Both constraints must be met for the pod to be scheduled. kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: foo: bar spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: foo: bar - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: foo: bar containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod 3.9.4. Additional resources Understanding how to update labels on nodes 3.10. Running a custom scheduler You can run multiple custom schedulers alongside the default scheduler and configure which scheduler to use for each pod. Important It is supported to use a custom scheduler with OpenShift Container Platform, but Red Hat does not directly support the functionality of the custom scheduler. For information on how to configure the default scheduler, see Configuring the default scheduler to control pod placement . To schedule a given pod using a specific scheduler, specify the name of the scheduler in that Pod specification . 3.10.1. Deploying a custom scheduler To include a custom scheduler in your cluster, include the image for a custom scheduler in a deployment. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a scheduler binary. Note Information on how to create a scheduler binary is outside the scope of this document. For an example, see Configure Multiple Schedulers in the Kubernetes documentation. Note that the actual functionality of your custom scheduler is not supported by Red Hat. You have created an image containing the scheduler binary and pushed it to a registry. 
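Building that image is also outside the scope of this document, but as a rough, hypothetical sketch, the image only needs to provide the scheduler binary at the path that the deployment in this procedure invokes ( /usr/local/bin/kube-scheduler ). The base image, registry, namespace, and tag below are placeholders:

Example Containerfile
# Any minimal base image is sufficient; this one is only an example
FROM registry.access.redhat.com/ubi8/ubi-minimal
# Place the scheduler binary at the path used by the deployment command
COPY kube-scheduler /usr/local/bin/kube-scheduler

USD podman build -t <registry>/<namespace>/custom-scheduler:<tag> .
USD podman push <registry>/<namespace>/custom-scheduler:<tag>

The resulting image reference is what you later specify in the Deployment resource for the custom scheduler.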
Procedure Create a file that contains a config map that holds the scheduler configuration file: Example scheduler-config-map.yaml apiVersion: v1 kind: ConfigMap metadata: name: scheduler-config namespace: kube-system 1 data: scheduler-config.yaml: | 2 apiVersion: kubescheduler.config.k8s.io/v1beta2 kind: KubeSchedulerConfiguration profiles: - schedulerName: custom-scheduler 3 leaderElection: leaderElect: false 1 This procedure uses the kube-system namespace, but you can use the namespace of your choosing. 2 When you define your Deployment resource later in this procedure, you pass this file in to the scheduler command by using the --config argument. 3 Define a scheduler profile for your custom scheduler. This scheduler name is used when defining the schedulerName in the Pod configuration. Create the config map: USD oc create -f scheduler-config-map.yaml Create a file that contains the deployment resources for the custom scheduler: Example custom-scheduler.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: custom-scheduler namespace: kube-system 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: custom-scheduler-as-kube-scheduler subjects: - kind: ServiceAccount name: custom-scheduler namespace: kube-system 2 roleRef: kind: ClusterRole name: system:kube-scheduler apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: custom-scheduler-as-volume-scheduler subjects: - kind: ServiceAccount name: custom-scheduler namespace: kube-system 3 roleRef: kind: ClusterRole name: system:volume-scheduler apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: Deployment metadata: labels: component: scheduler tier: control-plane name: custom-scheduler namespace: kube-system 4 spec: selector: matchLabels: component: scheduler tier: control-plane replicas: 1 template: metadata: labels: component: scheduler tier: control-plane version: second spec: serviceAccountName: custom-scheduler containers: - command: - /usr/local/bin/kube-scheduler - --config=/etc/config/scheduler-config.yaml 5 image: "<namespace>/<image_name>:<tag>" 6 livenessProbe: httpGet: path: /healthz port: 10259 scheme: HTTPS initialDelaySeconds: 15 name: kube-second-scheduler readinessProbe: httpGet: path: /healthz port: 10259 scheme: HTTPS resources: requests: cpu: '0.1' securityContext: privileged: false volumeMounts: - name: config-volume mountPath: /etc/config hostNetwork: false hostPID: false volumes: - name: config-volume configMap: name: scheduler-config 1 2 3 4 This procedure uses the kube-system namespace, but you can use the namespace of your choosing. 5 The command for your custom scheduler might require different arguments. 6 Specify the container image that you created for the custom scheduler. Create the deployment resources in the cluster: USD oc create -f custom-scheduler.yaml Verification Verify that the scheduler pod is running: USD oc get pods -n kube-system The custom scheduler pod is listed as Running : NAME READY STATUS RESTARTS AGE custom-scheduler-6cd7c4b8bc-854zb 1/1 Running 0 2m 3.10.2. Deploying pods using a custom scheduler After the custom scheduler is deployed in your cluster, you can configure pods to use that scheduler instead of the default scheduler. Note Each scheduler has a separate view of resources in a cluster. For that reason, each scheduler should operate over its own set of nodes. 
If two or more schedulers operate on the same node, they might intervene with each other and schedule more pods on the same node than there are available resources for. Pods might get rejected due to insufficient resources in this case. Prerequisites You have access to the cluster as a user with the cluster-admin role. The custom scheduler has been deployed in the cluster. Procedure If your cluster uses role-based access control (RBAC), add the custom scheduler name to the system:kube-scheduler cluster role. Edit the system:kube-scheduler cluster role: USD oc edit clusterrole system:kube-scheduler Add the name of the custom scheduler to the resourceNames lists for the leases and endpoints resources: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" creationTimestamp: "2021-07-07T10:19:14Z" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-scheduler resourceVersion: "125" uid: 53896c70-b332-420a-b2a4-f72c822313f2 rules: ... - apiGroups: - coordination.k8s.io resources: - leases verbs: - create - apiGroups: - coordination.k8s.io resourceNames: - kube-scheduler - custom-scheduler 1 resources: - leases verbs: - get - update - apiGroups: - "" resources: - endpoints verbs: - create - apiGroups: - "" resourceNames: - kube-scheduler - custom-scheduler 2 resources: - endpoints verbs: - get - update ... 1 2 This example uses custom-scheduler as the custom scheduler name. Create a Pod configuration and specify the name of the custom scheduler in the schedulerName parameter: Example custom-scheduler-example.yaml file apiVersion: v1 kind: Pod metadata: name: custom-scheduler-example labels: name: custom-scheduler-example spec: schedulerName: custom-scheduler 1 containers: - name: pod-with-second-annotation-container image: docker.io/ocpqe/hello-pod 1 The name of the custom scheduler to use, which is custom-scheduler in this example. When no scheduler name is supplied, the pod is automatically scheduled using the default scheduler. Create the pod: USD oc create -f custom-scheduler-example.yaml Verification Enter the following command to check that the pod was created: USD oc get pod custom-scheduler-example The custom-scheduler-example pod is listed in the output: NAME READY STATUS RESTARTS AGE custom-scheduler-example 1/1 Running 0 4m Enter the following command to check that the custom scheduler has scheduled the pod: USD oc describe pod custom-scheduler-example The scheduler, custom-scheduler , is listed as shown in the following truncated output: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> custom-scheduler Successfully assigned default/custom-scheduler-example to <node_name> 3.10.3. Additional resources Learning container best practices 3.11. Evicting pods using the descheduler While the scheduler is used to determine the most suitable node to host a new pod, the descheduler can be used to evict a running pod so that the pod can be rescheduled onto a more suitable node. 3.11.1. About the descheduler You can use the descheduler to evict pods based on specific strategies so that the pods can be rescheduled onto more appropriate nodes. You can benefit from descheduling running pods in situations such as the following: Nodes are underutilized or overutilized. Pod and node affinity requirements, such as taints or labels, have changed and the original scheduling decisions are no longer appropriate for certain nodes. Node failure requires pods to be moved. 
New nodes are added to clusters. Pods have been restarted too many times. Important The descheduler does not schedule replacement of evicted pods. The scheduler automatically performs this task for the evicted pods. When the descheduler decides to evict pods from a node, it employs the following general mechanism: Pods in the openshift-* and kube-system namespaces are never evicted. Critical pods with priorityClassName set to system-cluster-critical or system-node-critical are never evicted. Static, mirrored, or stand-alone pods that are not part of a replication controller, replica set, deployment, or job are never evicted because these pods will not be recreated. Pods associated with daemon sets are never evicted. Pods with local storage are never evicted. Best effort pods are evicted before burstable and guaranteed pods. All types of pods with the descheduler.alpha.kubernetes.io/evict annotation are eligible for eviction. This annotation is used to override checks that prevent eviction, and the user can select which pod is evicted. Users should know how and if the pod will be recreated. Pods subject to pod disruption budget (PDB) are not evicted if descheduling violates its pod disruption budget (PDB). The pods are evicted by using eviction subresource to handle PDB. 3.11.2. Descheduler profiles The following descheduler profiles are available: AffinityAndTaints This profile evicts pods that violate inter-pod anti-affinity, node affinity, and node taints. It enables the following strategies: RemovePodsViolatingInterPodAntiAffinity : removes pods that are violating inter-pod anti-affinity. RemovePodsViolatingNodeAffinity : removes pods that are violating node affinity. RemovePodsViolatingNodeTaints : removes pods that are violating NoSchedule taints on nodes. Pods with a node affinity type of requiredDuringSchedulingIgnoredDuringExecution are removed. TopologyAndDuplicates This profile evicts pods in an effort to evenly spread similar pods, or pods of the same topology domain, among nodes. It enables the following strategies: RemovePodsViolatingTopologySpreadConstraint : finds unbalanced topology domains and tries to evict pods from larger ones when DoNotSchedule constraints are violated. RemoveDuplicates : ensures that there is only one pod associated with a replica set, replication controller, deployment, or job running on same node. If there are more, those duplicate pods are evicted for better pod distribution in a cluster. LifecycleAndUtilization This profile evicts long-running pods and balances resource usage between nodes. It enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times. Pods where the sum of restarts over all containers (including Init Containers) is more than 100. LowNodeUtilization : finds nodes that are underutilized and evicts pods, if possible, from overutilized nodes in the hope that recreation of evicted pods will be scheduled on these underutilized nodes. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). PodLifeTime : evicts pods that are too old. By default, pods that are older than 24 hours are removed. You can customize the pod lifetime value. 
SoftTopologyAndDuplicates This profile is the same as TopologyAndDuplicates , except that pods with soft topology constraints, such as whenUnsatisfiable: ScheduleAnyway , are also considered for eviction. Note Do not enable both SoftTopologyAndDuplicates and TopologyAndDuplicates . Enabling both results in a conflict. EvictPodsWithLocalStorage This profile allows pods with local storage to be eligible for eviction. EvictPodsWithPVC This profile allows pods with persistent volume claims to be eligible for eviction. 3.11.3. Installing the descheduler The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles. Prerequisites Cluster administrator privileges. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Kube Descheduler Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create . Install the Kube Descheduler Operator. Navigate to Operators OperatorHub . Type Kube Descheduler Operator into the filter box. Select the Kube Descheduler Operator and click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-kube-descheduler-operator from the drop-down menu. Adjust the values for the Update Channel and Approval Strategy to the desired values. Click Install . Create a descheduler instance. From the Operators Installed Operators page, click the Kube Descheduler Operator . Select the Kube Descheduler tab and click Create KubeDescheduler . Edit the settings as necessary. Expand the Profiles section to select one or more profiles to enable. The AffinityAndTaints profile is enabled by default. Click Add Profile to select additional profiles. Note Do not enable both TopologyAndDuplicates and SoftTopologyAndDuplicates . Enabling both results in a conflict. Optional: Expand the Profile Customizations section to set a custom pod lifetime value for the LifecycleAndUtilization profile. Valid units are s , m , or h . The default pod lifetime is 24 hours. Optional: Use the Descheduling Interval Seconds field to change the number of seconds between descheduler runs. The default is 3600 seconds. Click Create . You can also configure the profiles and settings for the descheduler later using the OpenShift CLI ( oc ). If you did not adjust the profiles when creating the descheduler instance from the web console, the AffinityAndTaints profile is enabled by default. 3.11.4. Configuring descheduler profiles You can configure which profiles the descheduler uses to evict pods. Prerequisites Cluster administrator privileges Procedure Edit the KubeDescheduler object: USD oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator Specify one or more profiles in the spec.profiles section. 
apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal profileCustomizations: podLifetime: 48h 1 profiles: 2 - AffinityAndTaints - TopologyAndDuplicates 3 - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC 1 Optional: Enable a custom pod lifetime value for the LifecycleAndUtilization profile. Valid units are s , m , or h . The default pod lifetime is 24 hours. 2 Add one or more profiles to enable. Available profiles: AffinityAndTaints , TopologyAndDuplicates , LifecycleAndUtilization , SoftTopologyAndDuplicates , EvictPodsWithLocalStorage , and EvictPodsWithPVC . 3 Do not enable both TopologyAndDuplicates and SoftTopologyAndDuplicates . Enabling both results in a conflict. You can enable multiple profiles; the order that the profiles are specified in is not important. Save the file to apply the changes. 3.11.5. Configuring the descheduler interval You can configure the amount of time between descheduler runs. The default is 3600 seconds (one hour). Prerequisites Cluster administrator privileges Procedure Edit the KubeDescheduler object: USD oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator Update the deschedulingIntervalSeconds field to the desired value: apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1 ... 1 Set the number of seconds between descheduler runs. A value of 0 in this field runs the descheduler once and exits. Save the file to apply the changes. 3.11.6. Uninstalling the descheduler You can remove the descheduler from your cluster by removing the descheduler instance and uninstalling the Kube Descheduler Operator. This procedure also cleans up the KubeDescheduler CRD and openshift-kube-descheduler-operator namespace. Prerequisites Cluster administrator privileges. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Delete the descheduler instance. From the Operators Installed Operators page, click Kube Descheduler Operator . Select the Kube Descheduler tab. Click the Options menu to the cluster entry and select Delete KubeDescheduler . In the confirmation dialog, click Delete . Uninstall the Kube Descheduler Operator. Navigate to Operators Installed Operators , Click the Options menu to the Kube Descheduler Operator entry and select Uninstall Operator . In the confirmation dialog, click Uninstall . Delete the openshift-kube-descheduler-operator namespace. Navigate to Administration Namespaces . Enter openshift-kube-descheduler-operator into the filter box. Click the Options menu to the openshift-kube-descheduler-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-kube-descheduler-operator and click Delete . Delete the KubeDescheduler CRD. Navigate to Administration Custom Resource Definitions . Enter KubeDescheduler into the filter box. Click the Options menu to the KubeDescheduler entry and select Delete CustomResourceDefinition . In the confirmation dialog, click Delete .
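If you prefer the CLI for this cleanup, a rough equivalent sketch follows; the resource and namespace names match the ones used in this procedure, and removing the Operator itself additionally requires deleting its Subscription and ClusterServiceVersion, whose names depend on how it was installed:

USD oc delete kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator
USD oc delete namespace openshift-kube-descheduler-operator
USD oc delete crd kubedeschedulers.operator.openshift.io

Deleting the KubeDescheduler instance first mirrors the order of the web console procedure above.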
[ "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: 2019-05-20T15:39:01Z generation: 1 name: cluster resourceVersion: \"1491\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: 6435dd99-7b15-11e9-bd48-0aec821b8e34 spec: policy: 1 name: scheduler-policy defaultNodeSelector: type=user-node,region=east 2", "apiVersion: v1 data: policy.cfg: | { \"kind\" : \"Policy\", \"apiVersion\" : \"v1\", \"predicates\" : [ {\"name\" : \"MaxGCEPDVolumeCount\"}, {\"name\" : \"GeneralPredicates\"}, 1 {\"name\" : \"MaxAzureDiskVolumeCount\"}, {\"name\" : \"MaxCSIVolumeCountPred\"}, {\"name\" : \"CheckVolumeBinding\"}, {\"name\" : \"MaxEBSVolumeCount\"}, {\"name\" : \"MatchInterPodAffinity\"}, {\"name\" : \"CheckNodeUnschedulable\"}, {\"name\" : \"NoDiskConflict\"}, {\"name\" : \"NoVolumeZoneConflict\"}, {\"name\" : \"PodToleratesNodeTaints\"} ], \"priorities\" : [ {\"name\" : \"LeastRequestedPriority\", \"weight\" : 1}, {\"name\" : \"BalancedResourceAllocation\", \"weight\" : 1}, {\"name\" : \"ServiceSpreadingPriority\", \"weight\" : 1}, {\"name\" : \"NodePreferAvoidPodsPriority\", \"weight\" : 1}, {\"name\" : \"NodeAffinityPriority\", \"weight\" : 1}, {\"name\" : \"TaintTolerationPriority\", \"weight\" : 1}, {\"name\" : \"ImageLocalityPriority\", \"weight\" : 1}, {\"name\" : \"SelectorSpreadPriority\", \"weight\" : 1}, {\"name\" : \"InterPodAffinityPriority\", \"weight\" : 1}, {\"name\" : \"EqualPriority\", \"weight\" : 1} ] } kind: ConfigMap metadata: creationTimestamp: \"2019-09-17T08:42:33Z\" name: scheduler-policy namespace: openshift-config resourceVersion: \"59500\" selfLink: /api/v1/namespaces/openshift-config/configmaps/scheduler-policy uid: 17ee8865-d927-11e9-b213-02d1e1709840`", "{ \"kind\" : \"Policy\", \"apiVersion\" : \"v1\", \"predicates\" : [ 1 {\"name\" : \"MaxGCEPDVolumeCount\"}, {\"name\" : \"GeneralPredicates\"}, {\"name\" : \"MaxAzureDiskVolumeCount\"}, {\"name\" : \"MaxCSIVolumeCountPred\"}, {\"name\" : \"CheckVolumeBinding\"}, {\"name\" : \"MaxEBSVolumeCount\"}, {\"name\" : \"MatchInterPodAffinity\"}, {\"name\" : \"CheckNodeUnschedulable\"}, {\"name\" : \"NoDiskConflict\"}, {\"name\" : \"NoVolumeZoneConflict\"}, {\"name\" : \"PodToleratesNodeTaints\"} ], \"priorities\" : [ 2 {\"name\" : \"LeastRequestedPriority\", \"weight\" : 1}, {\"name\" : \"BalancedResourceAllocation\", \"weight\" : 1}, {\"name\" : \"ServiceSpreadingPriority\", \"weight\" : 1}, {\"name\" : \"NodePreferAvoidPodsPriority\", \"weight\" : 1}, {\"name\" : \"NodeAffinityPriority\", \"weight\" : 1}, {\"name\" : \"TaintTolerationPriority\", \"weight\" : 1}, {\"name\" : \"ImageLocalityPriority\", \"weight\" : 1}, {\"name\" : \"SelectorSpreadPriority\", \"weight\" : 1}, {\"name\" : \"InterPodAffinityPriority\", \"weight\" : 1}, {\"name\" : \"EqualPriority\", \"weight\" : 1} ] }", "oc create configmap -n openshift-config --from-file=policy.cfg <configmap-name> 1", "oc create configmap -n openshift-config --from-file=policy.cfg scheduler-policy", "configmap/scheduler-policy created", "kind: ConfigMap apiVersion: v1 metadata: name: scheduler-policy namespace: openshift-config data: 1 policy.cfg: | { \"kind\": \"Policy\", \"apiVersion\": \"v1\", \"predicates\": [ { \"name\": \"RequireRegion\", \"argument\": { \"labelPreference\": {\"label\": \"region\"}, {\"presence\": true} } } ], \"priorities\": [ { \"name\":\"ZonePreferred\", \"weight\" : 1, \"argument\": { \"labelPreference\": {\"label\": \"zone\"}, {\"presence\": true} } } ] 
}", "oc patch Scheduler cluster --type='merge' -p '{\"spec\":{\"policy\":{\"name\":\"<configmap-name>\"}}}' --type=merge 1", "oc patch Scheduler cluster --type='merge' -p '{\"spec\":{\"policy\":{\"name\":\"scheduler-policy\"}}}' --type=merge", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: mastersSchedulable: false policy: name: scheduler-policy 1", "oc logs <scheduler-pod> | grep predicates", "oc logs openshift-kube-scheduler-ip-10-0-141-29.ec2.internal | grep predicates", "Creating scheduler with fit predicates 'map[MaxGCEPDVolumeCount:{} MaxAzureDiskVolumeCount:{} CheckNodeUnschedulable:{} NoDiskConflict:{} NoVolumeZoneConflict:{} GeneralPredicates:{} MaxCSIVolumeCountPred:{} CheckVolumeBinding:{} MaxEBSVolumeCount:{} MatchInterPodAffinity:{} PodToleratesNodeTaints:{}]' and priority functions 'map[InterPodAffinityPriority:{} LeastRequestedPriority:{} ServiceSpreadingPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} EqualPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{}]'", "oc edit configmap <configmap-name> -n openshift-config", "oc edit configmap scheduler-policy -n openshift-config", "apiVersion: v1 data: policy.cfg: | { \"kind\" : \"Policy\", \"apiVersion\" : \"v1\", \"predicates\" : [ 1 {\"name\" : \"MaxGCEPDVolumeCount\"}, {\"name\" : \"GeneralPredicates\"}, {\"name\" : \"MaxAzureDiskVolumeCount\"}, {\"name\" : \"MaxCSIVolumeCountPred\"}, {\"name\" : \"CheckVolumeBinding\"}, {\"name\" : \"MaxEBSVolumeCount\"}, {\"name\" : \"MatchInterPodAffinity\"}, {\"name\" : \"CheckNodeUnschedulable\"}, {\"name\" : \"NoDiskConflict\"}, {\"name\" : \"NoVolumeZoneConflict\"}, {\"name\" : \"PodToleratesNodeTaints\"} ], \"priorities\" : [ 2 {\"name\" : \"LeastRequestedPriority\", \"weight\" : 1}, {\"name\" : \"BalancedResourceAllocation\", \"weight\" : 1}, {\"name\" : \"ServiceSpreadingPriority\", \"weight\" : 1}, {\"name\" : \"NodePreferAvoidPodsPriority\", \"weight\" : 1}, {\"name\" : \"NodeAffinityPriority\", \"weight\" : 1}, {\"name\" : \"TaintTolerationPriority\", \"weight\" : 1}, {\"name\" : \"ImageLocalityPriority\", \"weight\" : 1}, {\"name\" : \"SelectorSpreadPriority\", \"weight\" : 1}, {\"name\" : \"InterPodAffinityPriority\", \"weight\" : 1}, {\"name\" : \"EqualPriority\", \"weight\" : 1} ] } kind: ConfigMap metadata: creationTimestamp: \"2019-09-17T17:44:19Z\" name: scheduler-policy namespace: openshift-config resourceVersion: \"15370\" selfLink: /api/v1/namespaces/openshift-config/configmaps/scheduler-policy", "oc delete configmap -n openshift-config <name>", "oc delete configmap -n openshift-config scheduler-policy", "vi policy.cfg", "apiVersion: v1 data: policy.cfg: | { \"kind\" : \"Policy\", \"apiVersion\" : \"v1\", \"predicates\" : [ {\"name\" : \"MaxGCEPDVolumeCount\"}, {\"name\" : \"GeneralPredicates\"}, {\"name\" : \"MaxAzureDiskVolumeCount\"}, {\"name\" : \"MaxCSIVolumeCountPred\"}, {\"name\" : \"CheckVolumeBinding\"}, {\"name\" : \"MaxEBSVolumeCount\"}, {\"name\" : \"MatchInterPodAffinity\"}, {\"name\" : \"CheckNodeUnschedulable\"}, {\"name\" : \"NoDiskConflict\"}, {\"name\" : \"NoVolumeZoneConflict\"}, {\"name\" : \"PodToleratesNodeTaints\"} ], \"priorities\" : [ {\"name\" : \"LeastRequestedPriority\", \"weight\" : 1}, {\"name\" : \"BalancedResourceAllocation\", \"weight\" : 1}, {\"name\" : \"ServiceSpreadingPriority\", \"weight\" : 1}, {\"name\" : \"NodePreferAvoidPodsPriority\", \"weight\" : 1}, {\"name\" : \"NodeAffinityPriority\", \"weight\" : 1}, 
{\"name\" : \"TaintTolerationPriority\", \"weight\" : 1}, {\"name\" : \"ImageLocalityPriority\", \"weight\" : 1}, {\"name\" : \"SelectorSpreadPriority\", \"weight\" : 1}, {\"name\" : \"InterPodAffinityPriority\", \"weight\" : 1}, {\"name\" : \"EqualPriority\", \"weight\" : 1} ] }", "oc create configmap -n openshift-config --from-file=policy.cfg <configmap-name> 1", "oc create configmap -n openshift-config --from-file=policy.cfg scheduler-policy", "configmap/scheduler-policy created", "kind: ConfigMap apiVersion: v1 metadata: name: scheduler-policy namespace: openshift-config data: policy.cfg: | { \"kind\": \"Policy\", \"apiVersion\": \"v1\", \"predicates\": [ { \"name\": \"RequireRegion\", \"argument\": { \"labelPreference\": {\"label\": \"region\"}, {\"presence\": true} } } ], \"priorities\": [ { \"name\":\"ZonePreferred\", \"weight\" : 1, \"argument\": { \"labelPreference\": {\"label\": \"zone\"}, {\"presence\": true} } } ] }", "{\"name\" : \"NoVolumeZoneConflict\"}", "{\"name\" : \"MaxEBSVolumeCount\"}", "{\"name\" : \"MaxAzureDiskVolumeCount\"}", "{\"name\" : \"PodToleratesNodeTaints\"}", "{\"name\" : \"CheckNodeUnschedulable\"}", "{\"name\" : \"CheckVolumeBinding\"}", "{\"name\" : \"NoDiskConflict\"}", "{\"name\" : \"MaxGCEPDVolumeCount\"}", "{\"name\" : \"MaxCSIVolumeCountPred\"}", "{\"name\" : \"MatchInterPodAffinity\"}", "{\"name\" : \"CheckNodeCondition\"}", "{\"name\" : \"CheckNodeLabelPresence\"}", "{\"name\" : \"checkServiceAffinity\"}", "{\"name\" : \"PodToleratesNodeNoExecuteTaints\"}", "{\"name\" : \"PodFitsResources\"}", "{\"name\" : \"PodFitsHostPorts\"}", "{\"name\" : \"HostName\"}", "{\"name\" : \"MatchNodeSelector\"}", "{\"name\" : \"NodeAffinityPriority\", \"weight\" : 1}", "{\"name\" : \"TaintTolerationPriority\", \"weight\" : 1}", "{\"name\" : \"ImageLocalityPriority\", \"weight\" : 1}", "{\"name\" : \"SelectorSpreadPriority\", \"weight\" : 1}", "{\"name\" : \"InterPodAffinityPriority\", \"weight\" : 1}", "{\"name\" : \"LeastRequestedPriority\", \"weight\" : 1}", "{\"name\" : \"BalancedResourceAllocation\", \"weight\" : 1}", "{\"name\" : \"NodePreferAvoidPodsPriority\", \"weight\" : 10000}", "{\"name\" : \"EqualPriority\", \"weight\" : 1}", "{\"name\" : \"MostRequestedPriority\", \"weight\" : 1}", "{\"name\" : \"ServiceSpreadingPriority\", \"weight\" : 1}", "{ \"kind\": \"Policy\", \"apiVersion\": \"v1\", \"priorities\":[ { \"name\":\"<name>\", 1 \"weight\" : 1 2 \"argument\":{ \"serviceAntiAffinity\":{ \"label\": \"<label>\" 3 } } } ] }", "{ \"kind\": \"Policy\", \"apiVersion\": \"v1\", \"priorities\": [ { \"name\":\"RackSpread\", \"weight\" : 1, \"argument\": { \"serviceAntiAffinity\": { \"label\": \"rack\" } } } ] }", "{ \"kind\": \"Policy\", \"apiVersion\": \"v1\", \"priorities\":[ { \"name\":\"<name>\", 1 \"weight\" : 1 2 \"argument\":{ \"labelPreference\":{ \"label\": \"<label>\", 3 \"presence\": true 4 } } } ] }", "{ \"kind\": \"Policy\", \"apiVersion\": \"v1\", \"predicates\": [ { \"name\": \"RegionZoneAffinity\", 1 \"argument\": { \"serviceAffinity\": { 2 \"labels\": [\"region, zone\"] 3 } } } ], \"priorities\": [ { \"name\":\"RackSpread\", 4 \"weight\" : 1, \"argument\": { \"serviceAntiAffinity\": { 5 \"label\": \"rack\" 6 } } } ] }", "{ \"kind\": \"Policy\", \"apiVersion\": \"v1\", \"predicates\": [ { \"name\": \"RegionZoneAffinity\", \"argument\": { \"serviceAffinity\": { \"labels\": [\"region, zone\"] } } } ], \"priorities\": [ { \"name\":\"RackSpread\", \"weight\" : 1, \"argument\": { \"serviceAntiAffinity\": { \"label\": \"rack\" } } } ] }", "{ 
\"kind\": \"Policy\", \"apiVersion\": \"v1\", \"predicates\": [ { \"name\": \"CityAffinity\", \"argument\": { \"serviceAffinity\": { \"label\": \"city\" } } } ], \"priorities\": [ { \"name\":\"BuildingSpread\", \"weight\" : 1, \"argument\": { \"serviceAntiAffinity\": { \"label\": \"building\" } } }, { \"name\":\"RoomSpread\", \"weight\" : 1, \"argument\": { \"serviceAntiAffinity\": { \"label\": \"room\" } } } ] }", "{ \"kind\": \"Policy\", \"apiVersion\": \"v1\", \"predicates\": [ { \"name\": \"RequireRegion\", \"argument\": { \"labelPreference\": { \"labels\": [\"region\"], \"presence\": true } } } ], \"priorities\": [ { \"name\":\"ZonePreferred\", \"weight\" : 1, \"argument\": { \"labelPreference\": { \"label\": \"zone\", \"presence\": true } } } ] }", "{ \"kind\": \"Policy\", \"apiVersion\": \"v1\", \"predicates\": [ { \"name\": \"RegionAffinity\", \"argument\": { \"serviceAffinity\": { \"labels\": [\"region\"] } } }, { \"name\": \"RequireRegion\", \"argument\": { \"labelsPresence\": { \"labels\": [\"region\"], \"presence\": true } } }, { \"name\": \"BuildingNodesAvoid\", \"argument\": { \"labelsPresence\": { \"label\": \"building\", \"presence\": false } } }, {\"name\" : \"PodFitsPorts\"}, {\"name\" : \"MatchNodeSelector\"} ], \"priorities\": [ { \"name\": \"ZoneSpread\", \"weight\" : 2, \"argument\": { \"serviceAntiAffinity\":{ \"label\": \"zone\" } } }, { \"name\":\"ZonePreferred\", \"weight\" : 1, \"argument\": { \"labelPreference\":{ \"label\": \"zone\", \"presence\": true } } }, {\"name\" : \"ServiceSpreadingPriority\", \"weight\" : 1} ] }", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster resourceVersion: \"601\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: b351d6d0-d06f-4a99-a26b-87af62e79f59 spec: mastersSchedulable: false policy: name: \"\" profile: HighNodeUtilization 1", "apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: failure-domain.beta.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod", "apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod", "cat team4.yaml apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: containers: - name: security-s1 image: docker.io/ocpqe/hello-pod", "podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - S1 topologyKey: failure-domain.beta.kubernetes.io/zone", "oc create -f <pod-spec>.yaml", "cat team4.yaml apiVersion: v1 kind: Pod metadata: name: security-s2 labels: security: S2 spec: containers: - name: security-s2 image: docker.io/ocpqe/hello-pod", "podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: security operator: In values: - S2 topologyKey: kubernetes.io/hostname", "oc create -f <pod-spec>.yaml", "cat team4.yaml apiVersion: v1 kind: Pod metadata: name: team4 labels: team: \"4\" spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod", 
"cat pod-team4a.yaml apiVersion: v1 kind: Pod metadata: name: team4a spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - \"4\" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod", "cat pod-s1.yaml apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod", "cat pod-s2.yaml apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod", "cat pod-s1.yaml apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod", "cat pod-s2.yaml apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod", "NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none>", "apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod", "apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod", "oc label node node1 e2e-az-name=e2e-az1", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1", "spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: e2e-az-name operator: In values: - e2e-az1 - e2e-az2", "oc create -f e2e-az2.yaml", "oc label node node1 e2e-az-name=e2e-az3", "spec: affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: e2e-az-name operator: In values: - e2e-az3", "oc create -f e2e-az3.yaml", "oc label node node1 zone=us", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us", "cat pod-s1.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us", "oc get pod -o wide", "NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1", "oc label node node1 zone=emea", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea", "cat pod-s1.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us", "oc describe pod pod-s1", 
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).", "sysctl -a |grep commit", "vm.overcommit_memory = 1", "sysctl -a |grep panic", "vm.panic_on_oom = 0", "spec: taints: - effect: NoExecute key: key1 value: value1 .", "spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 .", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master", "spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600", "oc adm taint nodes node1 key1=value1:NoSchedule", "oc adm taint nodes node1 key1=value1:NoExecute", "oc adm taint nodes node1 key2=value2:NoSchedule", "spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\"", "spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300", "spec: tolerations: - operator: \"Exists\"", "spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2", "spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 key1=value1:NoExecute", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master", "spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2", "spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600", "oc edit machineset <machineset>", "spec: . template: . 
spec: taints: - effect: NoExecute key: key1 value: value1 .", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc adm taint nodes node1 dedicated=groupName:NoSchedule", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: dedicated value: groupName effect: NoSchedule", "kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"<key_name>\"} 3 ]", "oc apply -f project.yaml", "spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600", "oc adm taint nodes <node-name> disktype=ssd:NoSchedule", "oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: disktype value: ssd effect: PreferNoSchedule", "oc adm taint nodes <node-name> <key>-", "oc adm taint nodes ip-10-0-132-248.ec2.internal key1-", "node/ip-10-0-132-248.ec2.internal untainted", "spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600", "kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux failure-domain.beta.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' failure-domain.beta.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos beta.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 beta.kubernetes.io/arch: amd64 region: east 1 type: user-node", "apiVersion: v1 kind: Pod . spec: nodeSelector: 1 region: east type: user-node", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east", "apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 labels: region: east type: user-node", "apiVersion: v1 kind: Pod spec: nodeSelector: region: east", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>", "apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\"", "apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 labels: region: east type: user-node", "apiVersion: v1 kind: Pod metadata: namespace: east-region spec: nodeSelector: region: east type: user-node", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>", "apiVersion: v1 kind: Pod spec: nodeSelector: region: west .", "oc describe pod router-default-66d5cf9464-7pwkc Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress . 
Controlled By: ReplicaSet/router-default-66d5cf9464", "ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api", "oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet . spec: template: metadata: spec: metadata: labels: region: east type: user-node .", "oc label nodes <name> <key>=<value>", "oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.22.1", "kind: ReplicaSet . spec: . template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1", "apiVersion: v1 kind: Pod . spec: nodeSelector: region: east type: user-node", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false policy: name: \"\"", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.22.1", "oc label nodes <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>,<key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION 
ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.22.1", "Error from server (Forbidden): error when creating \"pod.yaml\": pods \"pod-4\" is forbidden: pod node label selector conflicts with its project node label selector", "oc edit namespace <name>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"type=user-node,region=east\" 1 openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: \"2021-05-10T12:35:04Z\" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: \"145537\" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.22.1", "oc label <resource> <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.22.1", "apiVersion: v1 kind: Pod metadata: name: my-pod labels: foo: bar spec: topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: foo: bar 5 containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod", "oc create -f pod-spec.yaml", "kind: Pod apiVersion: v1 metadata: name: my-pod labels: foo: bar spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: foo: bar containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod", "kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: foo: bar spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: foo: bar - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: foo: bar containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod", "apiVersion: v1 kind: ConfigMap metadata: name: scheduler-config namespace: kube-system 1 data: 
scheduler-config.yaml: | 2 apiVersion: kubescheduler.config.k8s.io/v1beta2 kind: KubeSchedulerConfiguration profiles: - schedulerName: custom-scheduler 3 leaderElection: leaderElect: false", "oc create -f scheduler-config-map.yaml", "apiVersion: v1 kind: ServiceAccount metadata: name: custom-scheduler namespace: kube-system 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: custom-scheduler-as-kube-scheduler subjects: - kind: ServiceAccount name: custom-scheduler namespace: kube-system 2 roleRef: kind: ClusterRole name: system:kube-scheduler apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: custom-scheduler-as-volume-scheduler subjects: - kind: ServiceAccount name: custom-scheduler namespace: kube-system 3 roleRef: kind: ClusterRole name: system:volume-scheduler apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: Deployment metadata: labels: component: scheduler tier: control-plane name: custom-scheduler namespace: kube-system 4 spec: selector: matchLabels: component: scheduler tier: control-plane replicas: 1 template: metadata: labels: component: scheduler tier: control-plane version: second spec: serviceAccountName: custom-scheduler containers: - command: - /usr/local/bin/kube-scheduler - --config=/etc/config/scheduler-config.yaml 5 image: \"<namespace>/<image_name>:<tag>\" 6 livenessProbe: httpGet: path: /healthz port: 10259 scheme: HTTPS initialDelaySeconds: 15 name: kube-second-scheduler readinessProbe: httpGet: path: /healthz port: 10259 scheme: HTTPS resources: requests: cpu: '0.1' securityContext: privileged: false volumeMounts: - name: config-volume mountPath: /etc/config hostNetwork: false hostPID: false volumes: - name: config-volume configMap: name: scheduler-config", "oc create -f custom-scheduler.yaml", "oc get pods -n kube-system", "NAME READY STATUS RESTARTS AGE custom-scheduler-6cd7c4b8bc-854zb 1/1 Running 0 2m", "oc edit clusterrole system:kube-scheduler", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" creationTimestamp: \"2021-07-07T10:19:14Z\" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-scheduler resourceVersion: \"125\" uid: 53896c70-b332-420a-b2a4-f72c822313f2 rules: - apiGroups: - coordination.k8s.io resources: - leases verbs: - create - apiGroups: - coordination.k8s.io resourceNames: - kube-scheduler - custom-scheduler 1 resources: - leases verbs: - get - update - apiGroups: - \"\" resources: - endpoints verbs: - create - apiGroups: - \"\" resourceNames: - kube-scheduler - custom-scheduler 2 resources: - endpoints verbs: - get - update", "apiVersion: v1 kind: Pod metadata: name: custom-scheduler-example labels: name: custom-scheduler-example spec: schedulerName: custom-scheduler 1 containers: - name: pod-with-second-annotation-container image: docker.io/ocpqe/hello-pod", "oc create -f custom-scheduler-example.yaml", "oc get pod custom-scheduler-example", "NAME READY STATUS RESTARTS AGE custom-scheduler-example 1/1 Running 0 4m", "oc describe pod custom-scheduler-example", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> custom-scheduler Successfully assigned default/custom-scheduler-example to <node_name>", "oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster 
namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal profileCustomizations: podLifetime: 48h 1 profiles: 2 - AffinityAndTaints - TopologyAndDuplicates 3 - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC", "oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/nodes/controlling-pod-placement-onto-nodes-scheduling
Chapter 16. Desktop
Chapter 16. Desktop 16.1. GNOME 3 Red Hat Enterprise Linux 7 features the next major version of the GNOME Desktop, GNOME 3. The user experience of GNOME 3 is largely defined by GNOME Shell, which replaces the GNOME 2 desktop shell. Apart from window management, GNOME Shell provides the top bar on the screen, which hosts the "system status" area in the top right, a clock, and a hot corner that switches to Activities Overview , which provides easy access to applications and windows. The default GNOME Shell interface in Red Hat Enterprise Linux 7 is GNOME Classic , which features a window list at the bottom of the screen and traditional Applications and Places menus. For more information about GNOME 3, consult GNOME help. To access it, press the Super ( Windows ) key to enter the Activities Overview , type help , and then press Enter . For more information about GNOME 3 Desktop deployment, configuration and administration, see the Desktop Migration and Administration Guide . GTK+ 3 GNOME 3 uses the GTK+ 3 library, which can be installed in parallel with GTK+ 2. Both GTK+ 2 and GTK+ 3 are available in Red Hat Enterprise Linux 7. Existing GTK+ 2 applications will continue to work in GNOME 3. GNOME Boxes Red Hat Enterprise Linux 7 introduces a lightweight graphical desktop virtualization tool used to view and access virtual machines and remote systems. GNOME Boxes provides a way to test different operating systems and applications from the desktop with minimal configuration.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-desktop
Chapter 48. JAXB
Chapter 48. JAXB Since Camel 1.0 JAXB is a Data Format which uses the JAXB XML marshalling standard to unmarshal an XML payload into Java objects or to marshal Java objects into an XML payload. 48.1. Dependencies When using jaxb with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jaxb-starter</artifactId> </dependency> 48.2. Options The JAXB dataformat supports 20 options, which are listed below. Name Default Java Type Description contextPath String Required Package name where your JAXB classes are located. contextPathIsClassName false Boolean This can be set to true to mark that the contextPath is referring to a classname and not a package name. schema String To validate against an existing schema. Your can use the prefix classpath:, file: or http: to specify how the resource should be resolved. You can separate multiple schema files by using the ',' character. schemaSeverityLevel 0 Enum Sets the schema severity level to use when validating against a schema. This level determines the minimum severity error that triggers JAXB to stop continue parsing. The default value of 0 (warning) means that any error (warning, error or fatal error) will trigger JAXB to stop. There are the following three levels: 0=warning, 1=error, 2=fatal error. Enum values: 0 1 2 prettyPrint false Boolean To enable pretty printing output nicely formatted. Is by default false. objectFactory false Boolean Whether to allow using ObjectFactory classes to create the POJO classes during marshalling. This only applies to POJO classes that has not been annotated with JAXB and providing jaxb.index descriptor files. ignoreJAXBElement false Boolean Whether to ignore JAXBElement elements - only needed to be set to false in very special use-cases. mustBeJAXBElement false Boolean Whether marhsalling must be java objects with JAXB annotations. And if not then it fails. This option can be set to false to relax that, such as when the data is already in XML format. filterNonXmlChars false Boolean To ignore non xml characheters and replace them with an empty space. encoding String To overrule and use a specific encoding. fragment false Boolean To turn on marshalling XML fragment trees. By default JAXB looks for XmlRootElement annotation on given class to operate on whole XML tree. This is useful but not always - sometimes generated code does not have XmlRootElement annotation, sometimes you need unmarshall only part of tree. In that case you can use partial unmarshalling. To enable this behaviours you need set property partClass. Camel will pass this class to JAXB's unmarshaler. partClass String Name of class used for fragment parsing. See more details at the fragment option. partNamespace String XML namespace to use for fragment parsing. See more details at the fragment option. namespacePrefixRef String When marshalling using JAXB or SOAP then the JAXB implementation will automatic assign namespace prefixes, such as ns2, ns3, ns4 etc. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. xmlStreamWriterWrapper String To use a custom xml stream writer. schemaLocation String To define the location of the schema. noNamespaceSchemaLocation String To define the location of the namespaceless schema. 
jaxbProviderProperties String Refers to a custom java.util.Map to lookup in the registry containing custom JAXB provider properties to be used with the JAXB marshaller. contentTypeHeader true Boolean Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. accessExternalSchemaProtocols false String Only in use if schema validation has been enabled. Restrict access to the protocols specified for external reference set by the schemaLocation attribute, Import and Include element. Examples of protocols are file, http, jar:file. false or none to deny all access to external references; a specific protocol, such as file, to give permission to only the protocol; the keyword all to grant permission to all protocols. 48.3. Using the Java DSL The following example uses a named DataFormat of jaxb which is configured with a Java package name to initialize the JAXBContext . DataFormat jaxb = new JaxbDataFormat("com.acme.model"); from("activemq:My.Queue"). unmarshal(jaxb). to("mqseries:Another.Queue"); You can use a named reference to a data format which can then be defined in your Registry such as via your Spring XML file. from("activemq:My.Queue"). unmarshal("myJaxbDataType"). to("mqseries:Another.Queue"); 48.4. Using Spring XML The following example shows how to configure the JaxbDataFormat and use it in multiple routes. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <bean id="myJaxb" class="org.apache.camel.converter.jaxb.JaxbDataFormat"> <property name="contextPath" value="org.apache.camel.example"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <marshal><custom ref="myJaxb"/></marshal> <to uri="direct:marshalled"/> </route> <route> <from uri="direct:marshalled"/> <unmarshal><custom ref="myJaxb"/></unmarshal> <to uri="mock:result"/> </route> </camelContext> </beans> 48.5. Multiple context paths It is possible to use this data format with more than one context path. You can specify multiple context paths using : as a separator, for example com.mycompany:com.mycompany2 . 48.6. Partial marshalling / unmarshalling JAXB 2 supports marshalling and unmarshalling XML tree fragments. By default JAXB looks for the @XmlRootElement annotation on a given class to operate on whole XML tree. Sometimes the generated code does not have the @XmlRootElement annotation and sometimes you need to unmarshall only part of the tree. In that case you can use partial unmarshalling. To enable this behaviour you need set property partClass on the JaxbDataFormat . Camel will pass this class to the JAXB unmarshaller. If JaxbConstants.JAXB_PART_CLASS is set as one of the exchange headers, its value is used to override the partClass property on the JaxbDataFormat . For marshalling you have to add the partNamespace attribute with the QName of the destination namespace. If JaxbConstants.JAXB_PART_NAMESPACE is set as one of the exchange headers, its value is used to override the partNamespace property on the JaxbDataFormat . 
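To make partial unmarshalling concrete before looking at the partNamespace note below, here is a minimal, hedged Java DSL sketch that selects the fragment class through the JaxbConstants.JAXB_PART_CLASS header; the package org.apache.camel.example and the Address class are placeholders for your own generated model classes, and the snippet is assumed to sit inside a RouteBuilder's configure() method:
// Placeholder context path pointing at the package that holds the generated classes.
DataFormat jaxb = new JaxbDataFormat("org.apache.camel.example");

from("direct:addressFragment")
    // Per-message override of the partClass property: the header value is the fully
    // qualified name of the class that represents the fragment root (hypothetical Address).
    .setHeader(JaxbConstants.JAXB_PART_CLASS, constant("org.apache.camel.example.Address"))
    .unmarshal(jaxb)   // the body is unmarshalled into an Address instance
    .to("mock:result");
Setting partClass directly on the data format works the same way; the header form is shown only because it lets a single shared data format handle different fragment types per message.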
While setting partNamespace through JaxbConstants.JAXB_PART_NAMESPACE , please note that you need to specify its value in the format {namespaceUri}localPart , as per the example below. .setHeader(JaxbConstants.JAXB_PART_NAMESPACE, constant("{http://www.camel.apache.org/jaxb/example/address/1}address")); 48.7. Fragment JaxbDataFormat has a property named fragment which can set the Marshaller.JAXB_FRAGMENT property on the JAXB Marshaller. If you don't want the JAXB Marshaller to generate the XML declaration, you can set this option to be true . The default value of this property is false . 48.8. Ignoring Non-XML Characters JaxbDataFormat supports ignoring Non-XML Characters . Set the filterNonXmlChars property to true . The JaxbDataFormat will replace any non-XML character with a space character ( " " ) during message marshalling or unmarshalling. You can also set the Exchange property Exchange.FILTER_NON_XML_CHARS . JDK 1.5 JDK 1.6+ Filtering in use StAX API and implementation No Filtering not in use StAX API only No This feature has been tested with Woodstox 3.2.9 and the Sun JDK 1.6 StAX implementation. JaxbDataFormat also allows you to customize the XMLStreamWriter used to marshal the stream to XML. Using this configuration, you can add your own stream writer to completely remove, escape, or replace non-XML characters. JaxbDataFormat customWriterFormat = new JaxbDataFormat("org.apache.camel.foo.bar"); customWriterFormat.setXmlStreamWriterWrapper(new TestXmlStreamWriter()); The following example shows using the Spring DSL and also enabling Camel's non-XML filtering: <bean id="testXmlStreamWriterWrapper" class="org.apache.camel.jaxb.TestXmlStreamWriter"/> <jaxb filterNonXmlChars="true" contextPath="org.apache.camel.foo.bar" xmlStreamWriterWrapper="#testXmlStreamWriterWrapper" /> 48.9. Working with the ObjectFactory If you use XJC to create the Java classes from the schema, you will get an ObjectFactory for your JAXB context. Since the ObjectFactory uses the JAXBElement to hold the reference of the schema and element instance value, JaxbDataFormat will ignore the JAXBElement by default and you will get the element instance value instead of the JAXBElement object from the unmarshaled message body. If you want to get the JAXBElement object from the unmarshaled message body, you need to set the JaxbDataFormat ignoreJAXBElement property to false . 48.10. Setting the encoding You can set the encoding option on the JaxbDataFormat to configure the Marshaller.JAXB_ENCODING encoding property on the JAXB Marshaller. You can set up which encoding to use when you declare the JaxbDataFormat . You can also provide the encoding in the Exchange property Exchange.CHARSET_NAME . This property will override the encoding set on the JaxbDataFormat . 48.11. Controlling namespace prefix mapping When marshalling using JAXB or SOAP, the JAXB implementation will automatically assign namespace prefixes, such as ns2, ns3, ns4 and so on. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. For example, in Spring XML we can define a Map with the mapping. In the mapping file below, we map SOAP to use soap as a prefix, while our custom namespace http://www.mycompany.com/foo/2 does not use any prefix.
<util:map id="myMap"> <entry key="http://www.w3.org/2003/05/soap-envelope" value="soap"/> <!-- we don't want any prefix for our namespace --> <entry key="http://www.mycompany.com/foo/2" value=""/> </util:map> To use this in JAXB or SOAP data formats you refer to this map, using the namespacePrefixRef attribute as shown below. Then Camel will lookup in the Registry a java.util.Map with the id myMap , which was what we defined above. <marshal> <soap version="1.2" contextPath="com.mycompany.foo" namespacePrefixRef="myMap"/> </marshal> 48.12. Schema validation The JaxbDataFormat supports validation by marshalling and unmarshalling from / to XML. You can use the prefix classpath: , file: or http: to specify how the resource should be resolved. You can separate multiple schema files by using the , character. Note If the XSD schema files import/access other files, then you need to enable file protocol (or others to allow access). Using the Java DSL, you can configure it in the following way: JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchema("classpath:person.xsd,classpath:address.xsd"); jaxbDataFormat.setAccessExternalSchemaProtocols("file"); You can do the same using the XML DSL: <marshal> <jaxb id="jaxb" schema="classpath:person.xsd,classpath:address.xsd" accessExternalSchemaProtocols="file"/> </marshal> 48.13. Schema Location The JaxbDataFormat supports to specify the SchemaLocation when marshalling the XML. Using the Java DSL, you can configure it in the following way: JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchemaLocation("schema/person.xsd"); You can do the same using the XML DSL: <marshal> <jaxb id="jaxb" schemaLocation="schema/person.xsd"/> </marshal> 48.14. Marshal data that is already XML The JAXB marshaller requires that the message body is JAXB compatible, e.g it is a JAXBElement , a java instance that has JAXB annotations, or extends JAXBElement . There can be situations where the message body is already in XML, e.g from a String type. JaxbDataFormat has an option named mustBeJAXBElement which you can set to false to relax this check and have the JAXB marshaller only attempt marshalling on JAXBElement ( javax.xml.bind.JAXBIntrospector#isElement returns true ). In those situations the marshaller will fallback to marshal the message body as-is. 48.15. Spring Boot Auto-Configuration The component supports 21 options, which are listed below. Name Description Default Type camel.dataformat.jaxb.access-external-schema-protocols Only in use if schema validation has been enabled. Restrict access to the protocols specified for external reference set by the schemaLocation attribute, Import and Include element. Examples of protocols are file, http, jar:file. false or none to deny all access to external references; a specific protocol, such as file, to give permission to only the protocol; the keyword all to grant permission to all protocols. false String camel.dataformat.jaxb.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.jaxb.context-path Package name where your JAXB classes are located. 
String camel.dataformat.jaxb.context-path-is-class-name This can be set to true to mark that the contextPath is referring to a classname and not a package name. false Boolean camel.dataformat.jaxb.enabled Whether to enable auto configuration of the jaxb data format. This is enabled by default. Boolean camel.dataformat.jaxb.encoding To overrule and use a specific encoding. String camel.dataformat.jaxb.filter-non-xml-chars To ignore non xml characheters and replace them with an empty space. false Boolean camel.dataformat.jaxb.fragment To turn on marshalling XML fragment trees. By default JAXB looks for XmlRootElement annotation on given class to operate on whole XML tree. This is useful but not always - sometimes generated code does not have XmlRootElement annotation, sometimes you need unmarshall only part of tree. In that case you can use partial unmarshalling. To enable this behaviours you need set property partClass. Camel will pass this class to JAXB's unmarshaler. false Boolean camel.dataformat.jaxb.ignore-j-a-x-b-element Whether to ignore JAXBElement elements - only needed to be set to false in very special use-cases. false Boolean camel.dataformat.jaxb.jaxb-provider-properties Refers to a custom java.util.Map to lookup in the registry containing custom JAXB provider properties to be used with the JAXB marshaller. String camel.dataformat.jaxb.must-be-j-a-x-b-element Whether marhsalling must be java objects with JAXB annotations. And if not then it fails. This option can be set to false to relax that, such as when the data is already in XML format. false Boolean camel.dataformat.jaxb.namespace-prefix-ref When marshalling using JAXB or SOAP then the JAXB implementation will automatic assign namespace prefixes, such as ns2, ns3, ns4 etc. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. String camel.dataformat.jaxb.no-namespace-schema-location To define the location of the namespaceless schema. String camel.dataformat.jaxb.object-factory Whether to allow using ObjectFactory classes to create the POJO classes during marshalling. This only applies to POJO classes that has not been annotated with JAXB and providing jaxb.index descriptor files. false Boolean camel.dataformat.jaxb.part-class Name of class used for fragment parsing. See more details at the fragment option. String camel.dataformat.jaxb.part-namespace XML namespace to use for fragment parsing. See more details at the fragment option. String camel.dataformat.jaxb.pretty-print To enable pretty printing output nicely formatted. Is by default false. false Boolean camel.dataformat.jaxb.schema To validate against an existing schema. Your can use the prefix classpath:, file: or http: to specify how the resource should be resolved. You can separate multiple schema files by using the ',' character. String camel.dataformat.jaxb.schema-location To define the location of the schema. String camel.dataformat.jaxb.schema-severity-level Sets the schema severity level to use when validating against a schema. This level determines the minimum severity error that triggers JAXB to stop continue parsing. The default value of 0 (warning) means that any error (warning, error or fatal error) will trigger JAXB to stop. There are the following three levels: 0=warning, 1=error, 2=fatal error. 0 Integer camel.dataformat.jaxb.xml-stream-writer-wrapper To use a custom xml stream writer. String
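As an illustration of the auto-configuration options listed above, the following is a hedged application.properties sketch for a Camel Spring Boot application; the package com.example.model is a placeholder for your own generated classes, and only a handful of the 21 options are shown:
# Placeholder package containing the JAXB-annotated classes (or ObjectFactory)
camel.dataformat.jaxb.context-path=com.example.model
# Produce nicely formatted XML when marshalling
camel.dataformat.jaxb.pretty-print=true
# Optionally validate against a schema and allow file-protocol access for imports
camel.dataformat.jaxb.schema=classpath:person.xsd
camel.dataformat.jaxb.access-external-schema-protocols=file
# Replace non-XML characters with a space during marshalling and unmarshalling
camel.dataformat.jaxb.filter-non-xml-chars=true
These properties configure the default jaxb data format created by the starter; a route can still configure its own JaxbDataFormat instance programmatically or in XML, as in the earlier examples.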
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jaxb-starter</artifactId> </dependency>", "DataFormat jaxb = new JaxbDataFormat(\"com.acme.model\"); from(\"activemq:My.Queue\"). unmarshal(jaxb). to(\"mqseries:Another.Queue\");", "from(\"activemq:My.Queue\"). unmarshal(\"myJaxbDataType\"). to(\"mqseries:Another.Queue\");", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <bean id=\"myJaxb\" class=\"org.apache.camel.converter.jaxb.JaxbDataFormat\"> <property name=\"contextPath\" value=\"org.apache.camel.example\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <marshal><custom ref=\"myJaxb\"/></marshal> <to uri=\"direct:marshalled\"/> </route> <route> <from uri=\"direct:marshalled\"/> <unmarshal><custom ref=\"myJaxb\"/></unmarshal> <to uri=\"mock:result\"/> </route> </camelContext> </beans>", ".setHeader(JaxbConstants.JAXB_PART_NAMESPACE, constant(\"{http://www.camel.apache.org/jaxb/example/address/1}address\"));", "JaxbDataFormat customWriterFormat = new JaxbDataFormat(\"org.apache.camel.foo.bar\"); customWriterFormat.setXmlStreamWriterWrapper(new TestXmlStreamWriter());", "<bean id=\"testXmlStreamWriterWrapper\" class=\"org.apache.camel.jaxb.TestXmlStreamWriter\"/> <jaxb filterNonXmlChars=\"true\" contextPath=\"org.apache.camel.foo.bar\" xmlStreamWriterWrapper=\"#testXmlStreamWriterWrapper\" />", "<util:map id=\"myMap\"> <entry key=\"http://www.w3.org/2003/05/soap-envelope\" value=\"soap\"/> <!-- we don't want any prefix for our namespace --> <entry key=\"http://www.mycompany.com/foo/2\" value=\"\"/> </util:map>", "<marshal> <soap version=\"1.2\" contextPath=\"com.mycompany.foo\" namespacePrefixRef=\"myMap\"/> </marshal>", "JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchema(\"classpath:person.xsd,classpath:address.xsd\"); jaxbDataFormat.setAccessExternalSchemaProtocols(\"file\");", "<marshal> <jaxb id=\"jaxb\" schema=\"classpath:person.xsd,classpath:address.xsd\" accessExternalSchemaProtocols=\"file\"/> </marshal>", "JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchemaLocation(\"schema/person.xsd\");", "<marshal> <jaxb id=\"jaxb\" schemaLocation=\"schema/person.xsd\"/> </marshal>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jaxb-dataformat-component-starter
Chapter 7. Exposing the RHACS portal over HTTP
Chapter 7. Exposing the RHACS portal over HTTP Enable an unencrypted HTTP server to expose the RHACS portal through ingress controllers, Layer 7 load balancers, Istio, or other solutions. If you use an ingress controller, Istio, or a Layer 7 load balancer that prefers unencrypted HTTP back ends, you can configure Red Hat Advanced Cluster Security for Kubernetes to expose the RHACS portal over HTTP. Doing this makes the RHACS portal available over a plaintext back end. Important To expose the RHACS portal over HTTP, you must be using an ingress controller, a Layer 7 load balancer, or Istio to encrypt external traffic with HTTPS. It is insecure to expose the RHACS portal directly to external clients by using plain HTTP. You can expose the RHACS portal over HTTP during installation or on an existing deployment. 7.1. Prerequisites To specify an HTTP endpoint you must use an <endpoints_spec> . It is a comma-separated list of single endpoint specifications in the form of <type>@<addr>:<port> , where: type is grpc or http . Using http as type works in most use cases. For advanced use cases, you can either use grpc or omit its value. If you omit the value for type , you can configure two endpoints in your proxy, one for gRPC and the other for HTTP. Both these endpoints point to the same exposed HTTP port on Central. However, most proxies do not support carrying both gRPC and HTTP traffic on the same external port. addr is the IP address to expose Central on. You can omit this, or use localhost or 127.0.0.1 if you need an HTTP endpoint which is only accessible by using port-forwarding. port is the port to expose Central on. The following are several valid <endpoints_spec> values: 8080 http@8080 :8081 grpc@:8081 localhost:8080 http@localhost:8080 http@8080,grpc@8081 8080, grpc@:8081, [email protected]:8082 7.2. Exposing the RHACS portal over HTTP during the installation If you are installing Red Hat Advanced Cluster Security for Kubernetes using the roxctl CLI, use the --plaintext-endpoints option with the roxctl central generate interactive command to enable the HTTP server during the installation. Procedure Run the following command to specify an HTTP endpoint during the interactive installation process: USD roxctl central generate interactive \ --plaintext-endpoints=<endpoints_spec> 1 1 Endpoint specifications in the form of <type>@<addr>:<port> . See the Prerequisites section for details. 7.3. Exposing the RHACS portal over HTTP for an existing deployment You can enable the HTTP server on an existing Red Hat Advanced Cluster Security for Kubernetes deployment. Procedure Create a patch and define a ROX_PLAINTEXT_ENDPOINTS environment variable: USD CENTRAL_PLAINTEXT_PATCH=' spec: template: spec: containers: - name: central env: - name: ROX_PLAINTEXT_ENDPOINTS value: <endpoints_spec> 1 ' 1 Endpoint specifications in the form of <type>@<addr>:<port> . See the Prerequisites section for details. Add the ROX_PLAINTEXT_ENDPOINTS environment variable to the Central deployment: USD oc -n stackrox patch deploy/central -p "USDCENTRAL_PLAINTEXT_PATCH"
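As a concrete, hedged variant of the patch above, the following assumes you want Central to accept plain HTTP on port 8080; "http@8080" is simply one of the valid <endpoints_spec> values listed in the Prerequisites, so adjust the type, address, and port to match your proxy setup:
CENTRAL_PLAINTEXT_PATCH='
spec:
  template:
    spec:
      containers:
      - name: central
        env:
        - name: ROX_PLAINTEXT_ENDPOINTS
          value: "http@8080"
'
oc -n stackrox patch deploy/central -p "$CENTRAL_PLAINTEXT_PATCH"
Because this endpoint carries unencrypted traffic, keep it reachable only from the ingress controller, Layer 7 load balancer, or Istio gateway that terminates TLS, as noted in the Important box above.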
[ "roxctl central generate interactive --plaintext-endpoints=<endpoints_spec> 1", "CENTRAL_PLAINTEXT_PATCH=' spec: template: spec: containers: - name: central env: - name: ROX_PLAINTEXT_ENDPOINTS value: <endpoints_spec> 1 '", "oc -n stackrox patch deploy/central -p \"USDCENTRAL_PLAINTEXT_PATCH\"" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/expose-portal-over-http
13.14. Software Selection
13.14. Software Selection To specify which packages will be installed, select Software Selection at the Installation Summary screen. The package groups are organized into Base Environments . These environments are pre-defined sets of packages with a specific purpose; for example, the Virtualization Host environment contains a set of software packages needed for running virtual machines on the system. Only one software environment can be selected at installation time. For each environment, there are additional packages available in the form of Add-ons . Add-ons are presented in the right part of the screen and the list of them is refreshed when a new environment is selected. You can select multiple add-ons for your installation environment. A horizontal line separates the list of add-ons into two areas: Add-ons listed above the horizontal line are specific to the environment you selected. If you select any add-ons in this part of the list and then select a different environment, your selection will be lost. Add-ons listed below the horizontal line are available for all environments. Selecting a different environment will not impact the selections made in this part of the list. Figure 13.15. Example of a Software Selection for a Server Installation The availability of base environments and add-ons depends on the variant of the installation ISO image which you are using as the installation source. For example, the server variant provides environments designed for servers, while the workstation variant has several choices for deployment as a developer workstation, and so on. The installation program does not show which packages are contained in the available environments. To see which packages are contained in a specific environment or add-on, see the repodata/*-comps- variant . architecture .xml file on the Red Hat Enterprise Linux Installation DVD which you are using as the installation source. This file contains a structure describing available environments (marked by the <environment> tag) and add-ons (the <group> tag). Important The pre-defined environments and add-ons allow you to customize your system, but in a manual installation, there is no way to select individual packages to install. If you are not sure what package should be installed, Red Hat recommends you to select the Minimal Install environment. Minimal install only installs a basic version of Red Hat Enterprise Linux with only a minimal amount of additional software. This will substantially reduce the chance of the system being affected by a vulnerability. After the system finishes installing and you log in for the first time, you can use the Yum package manager to install any additional software you need. For more details on Minimal install , see the Installing the Minimum Amount of Packages Required section of the Red Hat Enterprise Linux 7 Security Guide. Alternatively, automating the installation with a Kickstart file allows for a much higher degree of control over installed packages. You can specify environments, groups and individual packages in the %packages section of the Kickstart file. See Section 27.3.2, "Package Selection" for instructions on selecting packages to install in a Kickstart file, and Chapter 27, Kickstart Installations for general information about automating the installation with Kickstart. Once you have selected an environment and add-ons to be installed, click Done to return to the Installation Summary screen. 13.14.1. 
Core Network Services All Red Hat Enterprise Linux installations include the following network services: centralized logging through the rsyslog service email through SMTP (Simple Mail Transfer Protocol) network file sharing through NFS (Network File System) remote access through SSH (Secure SHell) resource advertising through mDNS (multicast DNS) Some automated processes on your Red Hat Enterprise Linux system use the email service to send reports and messages to the system administrator. By default, the email, logging, and printing services do not accept connections from other systems. You can configure your Red Hat Enterprise Linux system after installation to offer email, file sharing, logging, printing, and remote desktop access services. The SSH service is enabled by default. You can also use NFS to access files on other systems without enabling the NFS sharing service.
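To illustrate the Kickstart-based package selection mentioned in Section 13.14, here is a minimal, hypothetical %packages section; the environment and group ids shown are examples only, and the authoritative ids for your media are listed in the repodata/*-comps- variant . architecture .xml file described above:
%packages
@^minimal        # an environment id (the leading ^ marks an environment)
@core            # a package group
chrony           # individual packages are listed by name
%end
An environment, additional groups, and individual packages can all appear in this section; see Section 27.3.2, "Package Selection" for the full syntax.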
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-package-selection-ppc
Chapter 5. Installing a cluster quickly on Azure
Chapter 5. Installing a cluster quickly on Azure In OpenShift Container Platform version 4.13, you can install a cluster on Microsoft Azure that uses the default configuration options. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. 
If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
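Before running the deployment procedure that follows, it can be worth confirming that the installation program and pull secret from the previous steps are usable. The sketch below is only a suggested pre-flight check: the pull-secret.txt file name and the use of jq are assumptions, not part of this guide.

# Confirm the extracted installer runs and report its version
./openshift-install version

# Confirm the downloaded pull secret is valid JSON (jq is assumed to be installed)
jq . pull-secret.txt > /dev/null && echo "pull secret is valid JSON"

# If an Azure service principal profile already exists, review it before installing
cat ~/.azure/osServicePrincipal.json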
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file, which contains Microsoft Azure profile information, in the ~/.azure/ directory on your computer, the installer prompts you to specify the following Azure parameter values for your subscription and service principal. azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Important After you enter values for the previously listed parameters, the installation program creates a osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. These actions ensure that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. 
Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 5.9. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
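Before moving on to customization, the following sketch shows one way to confirm from the command line that the new cluster is reachable and healthy. The oc subcommands are standard, but treat the exact checks as an optional suggestion rather than part of this chapter.

# Use the kubeconfig created by the installation program
export KUBECONFIG=<installation_directory>/auth/kubeconfig

# Confirm the identity and that the API server responds
oc whoami
oc get nodes

# Confirm the cluster version and overall rollout status
oc get clusterversion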
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_azure/installing-azure-default
function::stack_unused
function::stack_unused Name function::stack_unused - Returns the amount of kernel stack currently available Synopsis Arguments None Description This function determines how many bytes are currently available in the kernel stack.
[ "stack_unused:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-stack-unused
Chapter 9. Using the web console for managing virtual machines
Chapter 9. Using the web console for managing virtual machines To manage virtual machines in a graphical interface, you can use the Virtual Machines pane in the web console . The following sections describe the web console's virtualization management capabilities and provide instructions for using them. 9.1. Overview of virtual machine management using the web console The web console is a web-based interface for system administration. With the installation of a web console plug-in, the web console can be used to manage virtual machines (VMs) on the servers to which the web console can connect. It provides a graphical view of VMs on a host system to which the web console can connect, and allows monitoring system resources and adjusting configuration with ease. Using the web console for VM management, you can do the following: Create and delete VMs Install operating systems on VMs Run and shut down VMs View information about VMs Create and attach disks to VMs Configure virtual CPU settings for VMs Manage virtual network interfaces Interact with VMs using VM consoles 9.2. Setting up the web console to manage virtual machines Before using the web console to manage VMs, you must install the web console virtual machine plug-in. Prerequisites Ensure that the web console is installed on your machine. Procedure Install the cockpit-machines plug-in. If the installation is successful, Virtual Machines appears in the web console side menu. 9.3. Creating virtual machines and installing guest operating systems using the web console The following sections provide information on how to use the web console to create virtual machines (VMs) and install operating systems on VMs. 9.3.1. Creating virtual machines using the web console To create a VM on the host machine to which the web console is connected, follow the instructions below. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Before creating VMs, consider the amount of system resources you need to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values may vary significantly depending on the intended tasks and workload of the VMs. A locally available operating system (OS) installation source, which can be one of the following: An ISO image of an installation medium A disk image of an existing guest installation Procedure Click Create VM in the Virtual Machines interface of the web console. The Create New Virtual Machine dialog appears. Enter the basic configuration of the virtual machine you want to create. Connection - The connection to the host to be used by the virtual machine. Name - The name of the virtual machine. Installation Source Type - The type of the installation source: Filesystem, URL Installation Source - The path or URL that points to the installation source. OS Vendor - The vendor of the virtual machine's operating system. Operating System - The virtual machine's operating system. Memory - The amount of memory with which to configure the virtual machine. Storage Size - The amount of storage space with which to configure the virtual machine. Immediately Start VM - Whether or not the virtual machine will start immediately after it is created. Click Create . The virtual machine is created. If the Immediately Start VM checkbox is selected, the VM will immediately start and begin installing the guest operating system.
You must install the operating system the first time the virtual machine is run. Additional resources For information on installing an operating system on a virtual machine, see Section 9.3.2, "Installing operating systems using the the web console" . 9.3.2. Installing operating systems using the the web console The first time a virtual machine loads, you must install an operating system on the virtual machine. Prerequisites Before using the the web console to manage virtual machines, you must install the web console virtual machine plug-in. A VM on which to install an operating system. Procedure Click Install . The installation routine of the operating system runs in the virtual machine console. Note If the Immediately Start VM checkbox in the Create New Virtual Machine dialog is checked, the installation routine of the operating system starts automatically when the virtual machine is created. Note If the installation routine fails, the virtual machine must be deleted and recreated. 9.4. Deleting virtual machines using the the web console You can delete a virtual machine and its associated storage files from the host to which the the web console is connected. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure In the Virtual Machines interface of the the web console, click the name of the VM you want to delete. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Delete . A confirmation dialog appears. [Optional] To delete all or some of the storage files associated with the virtual machine, select the checkboxes to the storage files you want to delete. Click Delete . The virtual machine and any selected associated storage files are deleted. 9.5. Powering up and powering down virtual machines using the the web console Using the the web console, you can run , shut down , and restart virtual machines. You can also send a non-maskable interrupt to a virtual machine that is unresponsive. 9.5.1. Powering up virtual machines in the the web console If a VM is in the shut off state, you can start it using the the web console. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine you want to start. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Run . The virtual machine starts. Additional resources For information on shutting down a virtual machine, see Section 9.5.2, "Powering down virtual machines in the the web console" . For information on restarting a virtual machine, see Section 9.5.3, "Restarting virtual machines using the the web console" . For information on sending a non-maskable interrupt to a virtual machine, see Section 9.5.4, "Sending non-maskable interrupts to VMs using the the web console" . 9.5.2. Powering down virtual machines in the the web console If a virtual machine is in the running state, you can shut it down using the the web console. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine you want to shut down. 
The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Shut Down . The virtual machine shuts down. Note If the virtual machine does not shut down, click the arrow to the Shut Down button and select Force Shut Down . Additional resources For information on starting a virtual machine, see Section 9.5.1, "Powering up virtual machines in the the web console" . For information on restarting a virtual machine, see Section 9.5.3, "Restarting virtual machines using the the web console" . For information on sending a non-maskable interrupt to a virtual machine, see Section 9.5.4, "Sending non-maskable interrupts to VMs using the the web console" . 9.5.3. Restarting virtual machines using the the web console If a virtual machine is in the running state, you can restart it using the the web console. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine you want to restart. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Restart . The virtual machine shuts down and restarts. Note If the virtual machine does not restart, click the arrow to the Restart button and select Force Restart . Additional resources For information on starting a virtual machine, see Section 9.5.1, "Powering up virtual machines in the the web console" . For information on shutting down a virtual machine, see Section 9.5.2, "Powering down virtual machines in the the web console" . For information on sending a non-maskable interrupt to a virtual machine, see Section 9.5.4, "Sending non-maskable interrupts to VMs using the the web console" . 9.5.4. Sending non-maskable interrupts to VMs using the the web console Sending a non-maskable interrupt (NMI) may cause an unresponsive running VM to respond or shut down. For example, you can send the Ctrl + Alt + Del NMI to a VM that is not responsive. Prerequisites Before using the the web console to manage VMs, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine to which you want to send an NMI. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click the arrow to the Shut Down button and select Send Non-Maskable Interrupt . An NMI is sent to the virtual machine. Additional resources For information on starting a virtual machine, see Section 9.5.1, "Powering up virtual machines in the the web console" . For information on restarting a virtual machine, see Section 9.5.3, "Restarting virtual machines using the the web console" . For information on shutting down a virtual machine, see Section 9.5.2, "Powering down virtual machines in the the web console" . 9.6. Viewing virtual machine information using the the web console Using the the web console, you can view information about the virtual storage and VMs to which the web console is connected. 9.6.1. Viewing a virtualization overview in the the web console The following describes how to view an overview of the available virtual storage and the VMs to which the web console session is connected. 
Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view information about the available storage and the virtual machines to which the web console is attached. Click Virtual Machines in the web console's side menu. Information about the available storage and the virtual machines to which the web console session is connected appears. The information includes the following: Storage Pools - The number of storage pools that can be accessed by the web console and their state. Networks - The number of networks that can be accessed by the web console and their state. Name - The name of the virtual machine. Connection - The type of libvirt connection, system or session. State - The state of the virtual machine. Additional resources For information on viewing detailed information about the storage pools the web console session can access, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.6.2. Viewing storage pool information using the the web console The following describes how to view detailed storage pool information about the storage pools that the web console session can access. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view storage pool information: Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears showing a list of configured storage pools. The information includes the following: Name - The name of the storage pool. Size - The size of the storage pool. Connection - The connection used to access the storage pool. State - The state of the storage pool. Click a row with the name of the storage whose information you want to see. The row expands to reveal the Overview pane with following information about the selected storage pool: Path - The path to the storage pool. Persistent - Whether or not the storage pool is persistent. Autostart - Whether or not the storage pool starts automatically. Type - The storage pool type. To view a list of storage volumes created from the storage pool, click Storage Volumes . The Storage Volumes pane appears showing a list of configured storage volumes with their sizes and the amount of space used. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . 
For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.6.3. Viewing basic virtual machine information in the the web console The following describes how to view basic information about a selected virtual machine to which the web console session is connected. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view basic information about a selected virtual machine. Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Note If another tab is selected, click Overview . The information includes the following: Memory - The amount of memory assigned to the virtual machine. Emulated Machine - The machine type emulated by the virtual machine. vCPUs - The number of virtual CPUs configured for the virtual machine. Note To see more detailed virtual CPU information and configure the virtual CPUs configured for a virtual machine, see Section 9.7, "Managing virtual CPUs using the the web console" . Boot Order - The boot order configured for the virtual machine. CPU Type - The architecture of the virtual CPUs configured for the virtual machine. Autostart - Whether or not autostart is enabled for the virtual machine. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.6.4. Viewing virtual machine resource usage in the the web console The following describes how to view resource usage information about a selected virtual machine to which the web console session is connected. 
Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view information about the memory and virtual CPU usage of a selected virtual machine. Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Usage . The Usage pane appears with information about the memory and virtual CPU usage of the virtual machine. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.6.5. Viewing virtual machine disk information in the the web console The following describes how to view disk information about a virtual machine to which the web console session is connected. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view disk information about a selected virtual machine. Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks . The Disks pane appears with information about the disks assigned to the virtual machine. The information includes the following: Device - The device type of the disk. Target - The controller type of the disk. Used - The amount of the disk that is used. Capacity - The size of the disk. Bus - The bus type of the disk. Readonly - Whether or not the disk is read-only. Source - The disk device or file. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . 
For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.6.6. Viewing virtual NIC information in the the web console The following describes how to view information about the virtual network interface cards (vNICs) on a selected virtual machine: Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view information about the virtual network interface cards (NICs) on a selected virtual machine. Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks . The Networks pane appears with information about the virtual NICs configured for the virtual machine. The information includes the following: Type - The type of network interface for the virtual machine. Types include direct, network, bridge, ethernet, hostdev, mcast, user, and server. Model type - The model of the virtual NIC. MAC Address - The MAC address of the virtual NIC. Source - The source of the network interface. This is dependent on the network type. State - The state of the virtual NIC. To edit the virtual network settings, Click Edit . The Virtual Network Interface Settings. Change the Network Type and Model. Click Save . The network interface is modified. Note When the virtual machine is running, changes to the virtual network interface settings only take effect after the virtual machine is stopped and restarted. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . 9.7. Managing virtual CPUs using the the web console Using the the web console, you can manage the virtual CPUs configured for the virtual machines to which the web console is connected. You can view information about the virtual machines. You can also configure the virtual CPUs for virtual machines. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . 
Procedure Click a row with the name of the virtual machine for which you want to view and configure virtual CPU parameters. The row expands to reveal the Overview pane with basic information about the selected virtual machine, including the number of virtual CPUs, and controls for shutting down and deleting the virtual machine. Click the number of vCPUs in the Overview pane. The vCPU Details dialog appears. Note The warning in the vCPU Details dialog only appears after the virtual CPU settings are changed. Configure the virtual CPUs for the selected virtual machine. vCPU Count - Enter the number of virtual CPUs for the virtual machine. Note The vCPU count cannot be greater than the vCPU Maximum. vCPU Maximum - Enter the maximum number of virtual CPUs that can be configured for the virtual machine. Sockets - Select the number of sockets to expose to the virtual machine. Cores per socket - Select the number of cores for each socket to expose to the virtual machine. Threads per core - Select the number of threads for each core to expose to the virtual machine. Click Apply . The virtual CPUs for the virtual machine are configured. Note When the virtual machine is running, changes to the virtual CPU settings only take effect after the virtual machine is stopped and restarted. 9.8. Managing virtual machine disks using the the web console Using the the web console, you can manage the disks configured for the virtual machines to which the web console is connected. You can: View information about disks. Create and attach new virtual disks to virtual machines. Attach existing virtual disks to virtual machines. Detach virtual disks from virtual machines. 9.8.1. Viewing virtual machine disk information in the the web console The following describes how to view disk information about a virtual machine to which the web console session is connected. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view disk information about a selected virtual machine. Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks . The Disks pane appears with information about the disks assigned to the virtual machine. The information includes the following: Device - The device type of the disk. Target - The controller type of the disk. Used - The amount of the disk that is used. Capacity - The size of the disk. Bus - The bus type of the disk. Readonly - Whether or not the disk is read-only. Source - The disk device or file. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . 
For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.8.2. Adding new disks to virtual machines using the the web console You can add new disks to virtual machines by creating a new disk (storage pool) and attaching it to a virtual machine using the the web console. Note You can only use directory-type storage pools when creating new disks for virtual machines using the the web console. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine for which you want to create and attach a new disk. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks . The Disks pane appears with information about the disks configured for the virtual machine. Click Add Disk . The Add Disk dialog appears. Ensure that the Create New option button is selected. Configure the new disk. Pool - Select the storage pool from which the virtual disk will be created. Target - Select a target for the virtual disk that will be created. Name - Enter a name for the virtual disk that will be created. Size - Enter the size and select the unit (MiB or GiB) of the virtual disk that will be created. Format - Select the format for the virtual disk that will be created. Supported types: qcow2, raw Persistence - Whether or not the virtual disk will be persistent. If checked, the virtual disk is persistent. If not checked, the virtual disk is not persistent. Note Transient disks can only be added to VMs that are running. Click Add . The virtual disk is created and connected to the virtual machine. Additional resources For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.8.1, "Viewing virtual machine disk information in the the web console" . For information on attaching existing disks to virtual machines, see Section 9.8.3, "Attaching existing disks to virtual machines using the the web console" . For information on detaching disks from virtual machines, see Section 9.8.4, "Detaching disks from virtual machines" . 9.8.3. Attaching existing disks to virtual machines using the the web console The following describes how to attach existing disks to a virtual machine using the the web console. Note You can only attach directory-type storage pools to virtual machines using the the web console. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine to which you want to attach an existing disk. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks . The Disks pane appears with information about the disks configured for the virtual machine. Click Add Disk . The Add Disk dialog appears. Click the Use Existing option button. 
The appropriate configuration fields appear in the Add Disk dialog. Configure the disk for the virtual machine. Pool - Select the storage pool from which the virtual disk will be attached. Target - Select a target for the virtual disk that will be attached. Volume - Select the storage volume that will be attached. Persistence - Check to make the virtual disk persistent. Clear to make the virtual disk transient. Click Add The selected virtual disk is attached to the virtual machine. Additional resources For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.8.1, "Viewing virtual machine disk information in the the web console" . For information on creating new disks and attaching them to virtual machines, see Section 9.8.2, "Adding new disks to virtual machines using the the web console" . For information on detaching disks from virtual machines, see Section 9.8.4, "Detaching disks from virtual machines" . 9.8.4. Detaching disks from virtual machines The following describes how to detach disks from virtual machines using the the web console. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine from which you want to detach an existing disk. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks . The Disks pane appears with information about the disks configured for the virtual machine. Click to the disk you want to detach from the virtual machine. The virtual disk is detached from the virtual machine. Caution There is no confirmation before detaching the disk from the virtual machine. Additional resources For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.8.1, "Viewing virtual machine disk information in the the web console" . For information on creating new disks and attaching them to virtual machines, see Section 9.8.2, "Adding new disks to virtual machines using the the web console" . For information on attaching existing disks to virtual machines, see Section 9.8.3, "Attaching existing disks to virtual machines using the the web console" . 9.9. Using the the web console for managing virtual machine vNICs Using the the web console, you can manage the virtual network interface cards (vNICs) configured for the virtual machines to which the web console is connected. You can view information about vNICs. You can also connect and disconnect vNICs from virtual machines. 9.9.1. Viewing virtual NIC information in the the web console The following describes how to view information about the virtual network interface cards (vNICs) on a selected virtual machine: Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view information about the virtual network interface cards (NICs) on a selected virtual machine. Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks . 
The Networks pane appears with information about the virtual NICs configured for the virtual machine. The information includes the following: Type - The type of network interface for the virtual machine. Types include direct, network, bridge, ethernet, hostdev, mcast, user, and server. Model type - The model of the virtual NIC. MAC Address - The MAC address of the virtual NIC. Source - The source of the network interface. This is dependent on the network type. State - The state of the virtual NIC. To edit the virtual network settings, Click Edit . The Virtual Network Interface Settings. Change the Network Type and Model. Click Save . The network interface is modified. Note When the virtual machine is running, changes to the virtual network interface settings only take effect after the virtual machine is stopped and restarted. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . 9.9.2. Connecting virtual NICs in the the web console Using the the web console, you can reconnect disconnected virtual network interface cards (NICs) configured for a selected virtual machine. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine whose virtual NIC you want to connect. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks . The Networks pane appears with information about the virtual NICs configured for the virtual machine. Click Plug in the row of the virtual NIC you want to connect. The selected virtual NIC connects to the virtual machine. 9.9.3. Disconnecting virtual NICs in the the web console Using the the web console, you can disconnect the virtual network interface cards (NICs) connected to a selected virtual machine. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine whose virtual NIC you want to disconnect. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks . The Networks pane appears with information about the virtual NICs configured for the virtual machine. Click Unplug in the row of the virtual NIC you want to disconnect. 
The selected virtual NIC disconnects from the virtual machine. 9.10. Interacting with virtual machines using the the web console To interact with a VM in the the web console, you need to connect to the VM's console. Using the the web console, you can view the virtual machine's consoles. These include both graphical and serial consoles. To interact with the VM's graphical interface in the the web console, use the graphical console in the the web console . To interact with the VM's graphical interface in a remote viewer, use the graphical console in remote viewers . To interact with the VM's CLI in the the web console, use the serial console in the the web console . 9.10.1. Viewing the virtual machine graphical console in the the web console You can view the graphical console of a selected virtual machine in the the web console. The virtual machine console shows the graphical output of the virtual machine. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Ensure that both the host and the VM support a graphical interface. Procedure Click a row with the name of the virtual machine whose graphical console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles . The graphical console appears in the web interface. You can interact with the virtual machine console using the mouse and keyboard in the same manner you interact with a real machine. The display in the virtual machine console reflects the activities being performed on the virtual machine. Note The server on which the the web console is running can intercept specific key combinations, such as Ctrl + Alt + F1 , preventing them from being sent to the virtual machine. To send such key combinations, click the Send key menu and select the key sequence to send. For example, to send the Ctrl + Alt + F1 combination to the virtual machine, click the Send key menu and select the Ctrl+Alt+F1 menu entry. Additional Resources For details on viewing the graphical console in a remote viewer, see Section 9.10.2, "Viewing virtual machine consoles in remote viewers using the the web console" . For details on viewing the serial console in the the web console, see Section 9.10.3, "Viewing the virtual machine serial console in the the web console" . 9.10.2. Viewing virtual machine consoles in remote viewers using the the web console You can view the virtual machine's consoles in a remote viewer. The connection can be made by the web console or manually. 9.10.2.1. Viewing the graphical console in a remote viewer You can view the graphical console of a selected virtual machine in a remote viewer. The virtual machine console shows the graphical output of the virtual machine. Note You can launch Virt Viewer from within the the web console. Other VNC and SPICE remote viewers can be launched manually. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Ensure that both the host and the VM support a graphical interface. Before you can view the graphical console in Virt Viewer, Virt Viewer must be installed on the machine to which the web console is connected. To view information on installing Virt Viewer, select the Graphics Console in Desktop Viewer Console Type and click More Information in the Consoles window. 
Note Some browser extensions and plug-ins do not allow the web console to open Virt Viewer. Procedure Click a row with the name of the virtual machine whose graphical console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles . The graphical console appears in the web interface. Select the Graphics Console in Desktop Viewer Console Type. Click Launch Remote Viewer . The graphical console appears in Virt Viewer. You can interact with the virtual machine console using the mouse and keyboard in the same manner you interact with a real machine. The display in the virtual machine console reflects the activities being performed on the virtual machine. Note The server on which the web console is running can intercept specific key combinations, such as Ctrl + Alt + F1 , preventing them from being sent to the virtual machine. To send such key combinations, click the Send key menu and select the key sequence to send. For example, to send the Ctrl + Alt + F1 combination to the virtual machine, click the Send key menu and select the Ctrl+Alt+F1 menu entry. Additional Resources For details on viewing the graphical console in a remote viewer using a manual connection, see Section 9.10.2.2, "Viewing the graphical console in a remote viewer connecting manually" . For details on viewing the graphical console in the web console, see Section 9.10.1, "Viewing the virtual machine graphical console in the web console" . For details on viewing the serial console in the web console, see Section 9.10.3, "Viewing the virtual machine serial console in the web console" . 9.10.2.2. Viewing the graphical console in a remote viewer connecting manually You can view the graphical console of a selected virtual machine in a remote viewer. The virtual machine console shows the graphical output of the virtual machine. The web interface provides the information necessary to launch any SPICE or VNC viewer to view the virtual machine console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Before you can view the graphical console in a remote viewer, ensure that a SPICE or VNC viewer application is installed on the machine to which the web console is connected. To view information on installing Virt Viewer, select the Graphics Console in Desktop Viewer Console Type and click More Information in the Consoles window. Procedure You can view the virtual machine graphics console in any SPICE or VNC viewer application. Click a row with the name of the virtual machine whose graphical console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles . The graphical console appears in the web interface. Select the Graphics Console in Desktop Viewer Console Type. The Manual Connection information appears on the right side of the pane. Enter the information in the SPICE or VNC viewer. For more information, see the documentation for the SPICE or VNC viewer. Additional Resources For details on viewing the graphical console in a remote viewer using the web console to make the connection, see Section 9.10.2.1, "Viewing the graphical console in a remote viewer" .
For details on viewing the graphical console in the web console, see Section 9.10.1, "Viewing the virtual machine graphical console in the web console" . For details on viewing the serial console in the web console, see Section 9.10.3, "Viewing the virtual machine serial console in the web console" . 9.10.3. Viewing the virtual machine serial console in the web console You can view the serial console of a selected virtual machine in the web console. This is useful when the host machine or the VM is not configured with a graphical interface. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine whose serial console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles . The graphical console appears in the web interface. Select the Serial Console Console Type. The serial console appears in the web interface. You can disconnect and reconnect the serial console from the virtual machine. To disconnect the serial console from the virtual machine, click Disconnect . To reconnect the serial console to the virtual machine, click Reconnect . Additional Resources For details on viewing the graphical console in the web console, see Section 9.10.1, "Viewing the virtual machine graphical console in the web console" . For details on viewing the graphical console in a remote viewer, see Section 9.10.2, "Viewing virtual machine consoles in remote viewers using the web console" . 9.11. Creating storage pools using the web console You can create storage pools using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. If the web console plug-in is not installed, see Section 9.2, "Setting up the web console to manage virtual machines" for information about installing the web console virtual machine plug-in. Procedure Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears showing a list of configured storage pools. Click Create Storage Pool . The Create Storage Pool dialog appears. Enter the following information in the Create Storage Pool dialog: Connection - The connection to the host to be used by the storage pool. Name - The name of the storage pool. Type - The type of the storage pool: Filesystem Directory, Network File System Target Path - The storage pool path on the host's file system. Startup - Whether or not the storage pool starts when the host boots. Click Create . The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools. Related information For information on viewing information about storage pools using the web console, see Section 9.6.2, "Viewing storage pool information using the web console" .
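The same tasks can be approximated from the host's command line with virsh, which is useful when the web console is unavailable. The following is only a rough sketch under that assumption; the VM name, pool name, and target path are placeholders, and the web console procedures above remain the documented workflow.

# Connect to a virtual machine's serial console from the host shell (press Ctrl+] to exit)
virsh console example-vm

# Create, start, and autostart a directory-backed storage pool
virsh pool-define-as example-pool dir --target /var/lib/libvirt/example-pool
virsh pool-build example-pool
virsh pool-start example-pool
virsh pool-autostart example-pool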
[ "yum info cockpit Installed Packages Name : cockpit [...]", "yum install cockpit-machines" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/managing_systems_using_the_rhel_7_web_console/using-the-rhel-8-web-console-for-managing-vms_system-management-using-the-RHEL-7-web-console
4.2. Which Log File is Used
4.2. Which Log File is Used In Red Hat Enterprise Linux, the dbus and audit packages are installed by default, unless they are removed from the default package selection. The setroubleshoot-server package must be installed using Yum (use the yum install setroubleshoot-server command). If the auditd daemon is running, an SELinux denial message, such as the following, is written to /var/log/audit/audit.log by default: In addition, a message similar to the one below is written to the /var/log/messages file: In Red Hat Enterprise Linux 7, setroubleshootd no longer constantly runs as a service. However, it is still used to analyze the AVC messages. Two new programs act as a method to start setroubleshoot when needed: The sedispatch utility runs as a part of the audit subsystem. When an AVC denial message is returned, sedispatch sends a message using dbus . These messages go straight to setroubleshootd if it is already running. If it is not running, sedispatch starts it automatically. The seapplet utility runs in the system toolbar, waiting for dbus messages in setroubleshootd . It launches the notification bubble, allowing the user to review AVC messages. Procedure 4.1. Starting Daemons Automatically To configure the auditd and rsyslog daemons to automatically start at boot, enter the following commands as the root user: To ensure that the daemons are enabled, type the following commands at the shell prompt: Alternatively, use the systemctl status service-name .service command and search for the keyword enabled in the command output, for example: To learn more about how the systemd daemon manages system services, see the Managing System Services chapter in the System Administrator's Guide.
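As a quick way to work with these logs (a sketch, assuming the audit and setroubleshoot-server packages are installed), you can query recent AVC denials directly from the audit log and have setroubleshoot explain a specific denial by the ID shown in /var/log/messages:

# List AVC denial messages recorded by auditd since yesterday
ausearch -m AVC -ts yesterday

# Analyze the denial reported in the example message above by its ID
sealert -l de7e30d6-5488-466d-a606-92c9f40d316d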
[ "type=AVC msg=audit(1223024155.684:49): avc: denied { getattr } for pid=2000 comm=\"httpd\" path=\"/var/www/html/file1\" dev=dm-0 ino=399185 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=system_u:object_r:samba_share_t:s0 tclass=file", "May 7 18:55:56 localhost setroubleshoot: SELinux is preventing httpd (httpd_t) \"getattr\" to /var/www/html/file1 (samba_share_t). For complete SELinux messages. run sealert -l de7e30d6-5488-466d-a606-92c9f40d316d", "~]# systemctl enable auditd.service", "~]# systemctl enable rsyslog.service", "~]USD systemctl is-enabled auditd enabled", "~]USD systemctl is-enabled rsyslog enabled", "~]USD systemctl status auditd.service | grep enabled auditd.service - Security Auditing Service Loaded: loaded (/usr/lib/systemd/system/auditd.service; enabled )" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-which_log_file_is_used
Release Notes for AMQ Streams 2.1 on RHEL
Release Notes for AMQ Streams 2.1 on RHEL Red Hat AMQ Streams 2.1 Highlights of what's new and what's changed with this release of AMQ Streams on Red Hat Enterprise Linux
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/release_notes_for_amq_streams_2.1_on_rhel/index
probe::vm.pagefault.return
probe::vm.pagefault.return Name probe::vm.pagefault.return - Indicates what type of fault occurred Synopsis vm.pagefault.return Values name name of the probe point fault_type returns either 0 (VM_FAULT_OOM) for out of memory faults, 2 (VM_FAULT_MINOR) for minor faults, 3 (VM_FAULT_MAJOR) for major faults, or 1 (VM_FAULT_SIGBUS) if the fault was neither OOM, minor fault, nor major fault.
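A minimal SystemTap one-liner that uses this probe (a sketch, assuming the systemtap package and matching kernel debuginfo are installed) prints the probe name and the fault type for each page fault return:

stap -e 'probe vm.pagefault.return { printf("%s fault_type=%d\n", name, fault_type) }'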
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-pagefault-return
Chapter 3. Eclipse Temurin features
Chapter 3. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 11 release of Eclipse Temurin includes, see OpenJDK 11.0.20 Released . New features and enhancements Review the following release notes to understand new features and feature enhancements included with the Eclipse Temurin 11.0.20 release: Reduced risk of JVM crash when using GregorianCalendar.computeTime() In OpenJDK 11.0.19, a virtual machine crash could occur when using the GregorianCalendar.computeTime() method ( JDK-8307683 ). Even though an old issue is the root cause of this JVM crash, a recent fix for a rare issue in the C2 compiler ( JDK-8297951 ) significantly increased the probability of the JVM crash. To mitigate risk, the OpenJDK 11.0.20 release excludes the fix for the C2 compiler. Once the root cause of the JVM crash is resolved ( JDK-8307683 ), OpenJDK will reintroduce the fix for the C2 compiler ( JDK-8297951 ). See JDK-8308884 (JDK Bug System) . Additional characters for GB18030-2022 support allowed To support "Implementation Level 1" of the GB18030-2022 standard, OpenJDK must support the use of five additional characters that are beyond the scope of Unicode 10, which OpenJDK 11 is based on. Maintenance Release 2 of the Java SE 11 specification adds support for these additional characters, which OpenJDK 11.0.20 implements. The additional characters are as follows: 0x82359632 U+9FEB 0x82359633 U+9FEC 0x82359634 U+9FED 0x82359635 U+9FEE 0x82359636 U+9FEF See JDK-8301401 (JDK Bug System) . Support for GB18030-2022 The Chinese Electronics Standardization Institute (CESI) recently published GB18030-2022 as an update to the GB18030 standard, synchronizing the character set with Unicode 11.0. The GB18030-2022 standard is now the default GB18030 character set that OpenJDK 11.0.20 uses. However, this updated character set contains incompatible changes compared with GB18030-2000, which earlier releases of OpenJDK 11 used. From OpenJDK 11.0.20 onward, if you want to use the previous GB18030-2000 version of the character set, ensure that the new system property jdk.charset.GB18030 is set to 2000 . See JDK-8301119 (JDK Bug System) . Enhanced ZIP performance The OpenJDK 11.0.20 release includes enhanced checks on the ZIP64 fields of .zip files. If these checks cause failures on trusted .zip files, you can disable these checks by setting the new system property jdk.util.zip.disableZip64ExtraFieldValidation to true . JDK bug system reference ID: JDK-8302483. Enhanced validation of JAR signature You can now configure the maximum number of bytes that are allowed for the signature-related files in a Java archive (JAR) file by setting a new system property, jdk.jar.maxSignatureFileSize . By default, the jdk.jar.maxSignatureFileSize property is set to 8000000 bytes (8 MB). JDK bug system reference ID: JDK-8300596. Legal headers for generated files The javadoc tool now supports the inclusion of legal files, which pertain to the licensing of files that the standard doclet generates. You can use the new --legal-notices command-line option to configure this feature. See JDK-8259530 (JDK Bug System) .
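Because the switches described above are ordinary JVM system properties, they can be set on the java command line when you start an application. The following is only an illustrative sketch; app.jar and the values shown are placeholders, not recommended settings.

# Revert to the GB18030-2000 mapping, relax ZIP64 extra-field validation,
# and raise the JAR signature file size limit for one application run
java \
  -Djdk.charset.GB18030=2000 \
  -Djdk.util.zip.disableZip64ExtraFieldValidation=true \
  -Djdk.jar.maxSignatureFileSize=16000000 \
  -jar app.jar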
GTS root certificate authority (CA) certificates added In the OpenJDK 11.0.20 release, the cacerts truststore includes four Google Trust Services (GTS) root certificates: Certificate 1 Name: Google Trust Services LLC Alias name: gtsrootcar1 Distinguished name: CN=GTS Root R1, O=Google Trust Services LLC, C=US Certificate 2 Name: Google Trust Services LLC Alias name: gtsrootcar2 Distinguished name: CN=GTS Root R2, O=Google Trust Services LLC, C=US Certificate 3 Name: Google Trust Services LLC Alias name: gtsrootcar3 Distinguished name: CN=GTS Root R3, O=Google Trust Services LLC, C=US Certificate 4 Name: Google Trust Services LLC Alias name: gtsrootcar4 Distinguished name: CN=GTS Root R4, O=Google Trust Services LLC, C=US See JDK-8307134 (JDK Bug System) . Microsoft Corporation root CA certificates added In the OpenJDK 11.0.20 release, the cacerts truststore includes two Microsoft Corporation root certificates: Certificate 1 Name: Microsoft Corporation Alias name: microsoftecc2017 Distinguished name: CN=Microsoft ECC Root Certificate Authority 2017, O=Microsoft Corporation, C=US Certificate 2 Name: Microsoft Corporation Alias name: microsoftrsa2017 Distinguished name: CN=Microsoft RSA Root Certificate Authority 2017, O=Microsoft Corporation, C=US See JDK-8304760 (JDK Bug System) . TWCA root CA certificate added In the OpenJDK 11.0.20 release, the cacerts truststore includes the Taiwan Certificate Authority (TWCA) root certificate: Name: TWCA Alias name: twcaglobalrootca Distinguished name: CN=TWCA Global Root CA, OU=Root CA, O=TAIWAN-CA, C=TW See JDK-8305975 (JDK Bug System) . Enhanced contents (trusted certificate entries) of macOS KeychainStore Recent changes to the macOS KeychainStore implementation were incomplete and considered certificates within the user domain only. In the OpenJDK 11.0.20 release, the macOS KeychainStore implementation exposes certificates from both the user domain and the administrator domain. The macOS KeychainStore implementation also now excludes certificates that include a deny entry in the trust settings. See JDK-8303465 (JDK Bug System) . Revised on 2024-05-09 16:48:18 UTC
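To confirm that one of these root certificates is present in your JDK's default cacerts truststore, you can query it by its alias with keytool. This is a quick verification sketch; keytool ships with the JDK, and the alias is taken from the listing above.

# Show the GTS Root R1 entry from the default cacerts truststore
keytool -list -cacerts -alias gtsrootcar1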
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.20/openjdk-temurin-features-11-0-20_openjdk
1.2. Comparing Static to Dynamic IP Addressing
1.2. Comparing Static to Dynamic IP Addressing Static IP addressing When a device is assigned a static IP address, the address does not change over time unless changed manually. It is recommended to use static IP addressing if you want: To ensure network address consistency for servers such as DNS and authentication servers. To use out-of-band management devices that work independently of other network infrastructure. All the configuration tools listed in Section 3.1, "Selecting Network Configuration Methods" allow assigning static IP addresses manually. The nmcli tool is also suitable, described in Section 3.3.8, "Adding and Configuring a Static Ethernet Connection with nmcli" . For more information on automated configuration and management, see the OpenLMI chapter in the Red Hat Enterprise Linux 7 System Administrators Guide . The Red Hat Enterprise Linux 7 Installation Guide documents the use of a Kickstart file which can also be used for automating the assignment of network settings. Dynamic IP addressing When a device is assigned a dynamic IP address, the address changes over time. For this reason, it is recommended for devices that connect to the network occasionally, because the IP address might change after the machine reboots. Dynamic IP addresses are more flexible and easier to set up and administer. The Dynamic Host Configuration Protocol ( DHCP ) is a traditional method of dynamically assigning network configurations to hosts. See Section 14.1, "Why Use DHCP?" for more information. You can also use the nmcli tool, described in Section 3.3.7, "Adding and Configuring a Dynamic Ethernet Connection with nmcli" . Note There is no strict rule defining when to use a static or a dynamic IP address. It depends on the user's needs, preferences, and the network environment. By default, NetworkManager calls the DHCP client, dhclient .
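As a brief illustration of both approaches with nmcli (a sketch only; the connection names, interface name, and addresses below are placeholders, and the sections referenced above give the complete procedures):

# Static addressing: a fixed IPv4 address, gateway, and DNS server
nmcli connection add type ethernet con-name static-eth0 ifname eth0 \
    ip4 192.0.2.10/24 gw4 192.0.2.1
nmcli connection modify static-eth0 ipv4.dns 192.0.2.1

# Dynamic addressing: the address is obtained from a DHCP server
nmcli connection add type ethernet con-name dhcp-eth0 ifname eth0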
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-comparing_static_to_dynamic_ip_addressing
Chapter 6. Deploying the overcloud
Chapter 6. Deploying the overcloud Prerequisites You are using a separate base environment file, or set of files, for all other Ceph settings, for instance, /home/stack/templates/storage-config.yaml . For more information, see Customizing the Storage Service and Sample Environment File: Creating a Ceph Cluster . You have defined the number of nodes you are assigning to each role in the base environment file. For more information, see Assigning Nodes and Flavors to Roles . During undercloud installation, you set generate_service_certificate=false in the undercloud.conf file. Otherwise, you must inject a trust anchor when you deploy the overcloud, as described in Enabling SSL/TLS on Overcloud Public Endpoints . Important Do not enable Instance HA when deploying a RHOSP HCI environment. Contact your Red Hat representative if you want to use Instance HA with hyperconverged RHOSP deployments with Ceph. Procedure Run the following command to deploy your HCI overcloud: Where: Argument Description --templates Creates the overcloud from the default heat template collection: /usr/share/openstack-tripleo-heat-templates/ ). -p /usr/share/openstack-tripleo-heat-templates/plan-samples/plan-environment-derived-params.yaml Specifies that the derived parameters workflow should be run during the deployment to calculate how much memory and CPU should be reserved for a hyperconverged deployment. -r /home/stack/templates/roles_data.yaml Specifies the customized roles definition file created in the Preparing the overcloud role for hyperconverged nodes procedure, which includes the ComputeHCI role. -e /home/stack/templates/ports.yaml Adds the environment file created in the Preparing the overcloud role for hyperconverged nodes procedure, which configures the ports for the ComputeHCI role. -e /home/stack/templates/environment-rhel-registration.yaml Adds an environment file that registers overcloud nodes, as described in Registering the overcloud with the rhsm composable service in the Advanced Overcloud Customization guide. -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml Adds the base environment file that deploys a containerized Red Hat Ceph cluster, with all default settings. For more information, see the Deploying an Overcloud with Containerized Red Hat Ceph guide. -e /home/stack/templates/storage-config.yaml Adds a custom environment file that defines all other Ceph settings. For a detailed example of this, see Sample Environment File: Creating a Ceph Cluster in the Deploying an Overcloud with Containerized Red Hat Ceph guide. This sample environment file also specifies the flavors to use, and how many nodes to assign per role. For more information on this, see Assigning Nodes and Flavors to Roles in the Deploying an Overcloud with Containerized Red Hat Ceph guide. -e /home/stack/templates/storage-container-config.yaml Reserves CPU and memory for each Ceph OSD storage container, as described in Reserving CPU and memory resources for Ceph . -e /home/stack/templates/network.yaml Adds the environment file created in the Mapping storage management network ports to NICs procedure. -e /home/stack/templates/ceph-backfill-recovery.yaml (Optional) Adds the environment file from Reduce Ceph Backfill and Recovery Operations . -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml (Optional) Adds the environment file for Single-Root Input/Output Virtualization (SR-IOV). 
-e /home/stack/templates/network-environment.yaml (Optional) Adds the environment file that applies your SR-IOV network preferences. -e <environment file> (Optional) Adds any additional environment files for your planned overcloud deployment. --ntp-server pool.ntp.org Sets the NTP server. Note Currently, SR-IOV is the only Network Function Virtualization (NFV) implementation supported with HCI. For a full list of deployment options, run the following command: For more details on deployment options, see Creating the Overcloud with the CLI Tools in the Director Installation and Usage guide. Tip You can also use an answers file to specify which environment files to include in your deployment. For more information, see Including Environment Files in Overcloud Creation in the Director Installation and Usage guide.
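As a rough illustration of the answers file approach (a sketch only; the file name and the environment files listed are placeholders, so adapt them to the files used in your deployment and confirm the format against the referenced guide):

# Hypothetical answers file listing the template directory and environment files to include
cat > /home/stack/answers.yaml <<'EOF'
templates: /usr/share/openstack-tripleo-heat-templates/
environments:
  - /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
  - /home/stack/templates/storage-config.yaml
EOF

# Deploy using the answers file instead of repeating -e options
openstack overcloud deploy --answers-file /home/stack/answers.yaml \
    -r /home/stack/templates/roles_data.yaml --ntp-server pool.ntp.org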
[ "openstack overcloud deploy --templates -p /usr/share/openstack-tripleo-heat-templates/plan-samples/plan-environment-derived-params.yaml -r /home/stack/templates/roles_data.yaml -e /home/stack/templates/ports.yaml -e /home/stack/templates/environment-rhel-registration.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /home/stack/templates/storage-config.yaml -e /home/stack/templates/storage-container-config.yaml -e /home/stack/templates/network.yaml [-e /home/stack/templates/ceph-backfill-recovery.yaml \\ ] [-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml \\] [-e /home/stack/templates/network-environment.yaml \\ ] [-e <additional environment files for your planned overcloud deployment> \\ ] --ntp-server pool.ntp.org", "openstack help overcloud deploy" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/hyperconverged_infrastructure_guide/deploy-hci-overcloud
Monitoring APIs
Monitoring APIs OpenShift Container Platform 4.13 Reference guide for monitoring APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/monitoring_apis/index
Chapter 17. Uninstalling Logging
Chapter 17. Uninstalling Logging You can remove logging from your OpenShift Container Platform cluster by removing installed Operators and related custom resources (CRs). 17.1. Uninstalling the logging You can stop aggregating logs by deleting the Red Hat OpenShift Logging Operator and the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. Procedure Go to the Administration Custom Resource Definitions page, and click ClusterLogging . On the Custom Resource Definition Details page, click Instances . Click the options menu to the instance, and click Delete ClusterLogging . Go to the Administration Custom Resource Definitions page. Click the options menu to ClusterLogging , and select Delete Custom Resource Definition . Warning Deleting the ClusterLogging CR does not remove the persistent volume claims (PVCs). To delete the remaining PVCs, persistent volumes (PVs), and associated data, you must take further action. Releasing or deleting PVCs can delete PVs and cause data loss. If you have created a ClusterLogForwarder CR, click the options menu to ClusterLogForwarder , and then click Delete Custom Resource Definition . Go to the Operators Installed Operators page. Click the options menu to the Red Hat OpenShift Logging Operator, and then click Uninstall Operator . Optional: Delete the openshift-logging project. Warning Deleting the openshift-logging project deletes everything in that namespace, including any persistent volume claims (PVCs). If you want to preserve logging data, do not delete the openshift-logging project. Go to the Home Projects page. Click the options menu to the openshift-logging project, and then click Delete Project . Confirm the deletion by typing openshift-logging in the dialog box, and then click Delete . 17.2. Deleting logging PVCs To keep persistent volume claims (PVCs) for reuse with other pods, keep the labels or PVC names that you need to reclaim the PVCs. If you do not want to keep the PVCs, you can delete them. If you want to recover storage space, you can also delete the persistent volumes (PVs). Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. Procedure Go to the Storage Persistent Volume Claims page. Click the options menu to each PVC, and select Delete Persistent Volume Claim . 17.3. Uninstalling Loki Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you have removed references to LokiStack from the ClusterLogging custom resource. Procedure Go to the Administration Custom Resource Definitions page, and click LokiStack . On the Custom Resource Definition Details page, click Instances . Click the options menu to the instance, and then click Delete LokiStack . Go to the Administration Custom Resource Definitions page. Click the options menu to LokiStack , and select Delete Custom Resource Definition . Delete the object storage secret. Go to the Operators Installed Operators page. Click the options menu to the Loki Operator, and then click Uninstall Operator . Optional: Delete the openshift-operators-redhat project. Important Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace. 
Go to the Home Projects page. Click the options menu to the openshift-operators-redhat project, and then click Delete Project . Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete . 17.4. Uninstalling Elasticsearch Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you must remove references to Elasticsearch from the ClusterLogging custom resource. Procedure Go to the Administration Custom Resource Definitions page, and click Elasticsearch . On the Custom Resource Definition Details page, click Instances . Click the options menu to the instance, and then click Delete Elasticsearch . Go to the Administration Custom Resource Definitions page. Click the options menu to Elasticsearch , and select Delete Custom Resource Definition . Delete the object storage secret. Go to the Operators Installed Operators page. Click the options menu to the OpenShift Elasticsearch Operator, and then click Uninstall Operator . Optional: Delete the openshift-operators-redhat project. Important Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace. Go to the Home Projects page. Click the options menu to the openshift-operators-redhat project, and then click Delete Project . Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete . 17.5. Deleting Operators from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift CLI ( oc ) is installed on your workstation. Procedure Ensure the latest version of the subscribed operator (for example, serverless-operator ) is identified in the currentCSV field. USD oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV Example output currentCSV: serverless-operator.v1.28.0 Delete the subscription (for example, serverless-operator ): USD oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless Example output subscription.operators.coreos.com "serverless-operator" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the step: USD oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless Example output clusterserviceversion.operators.coreos.com "serverless-operator.v1.28.0" deleted Additional resources Reclaiming a persistent volume manually
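If you prefer the CLI for the persistent volume claim cleanup described in this chapter, a rough equivalent is shown below (a sketch; <pvc-name> is a placeholder, and deleting a claim permanently removes its data, so verify the names first):

# Inspect the persistent volume claims left behind in the logging namespace
oc get pvc -n openshift-logging

# Delete a specific claim once its data is no longer needed
oc delete pvc <pvc-name> -n openshift-logging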
[ "oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV", "currentCSV: serverless-operator.v1.28.0", "oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless", "subscription.operators.coreos.com \"serverless-operator\" deleted", "oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless", "clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/cluster-logging-uninstall
Chapter 11. Node metrics dashboard
Chapter 11. Node metrics dashboard The node metrics dashboard is a visual analytics dashboard that helps you identify potential pod scaling issues. 11.1. About the node metrics dashboard The node metrics dashboard enables administrative and support team members to monitor metrics related to pod scaling, including scaling limits used to diagnose and troubleshoot scaling issues. Particularly, you can use the visual analytics displayed through the dashboard to monitor workload distributions across nodes. Insights gained from these analytics help you determine the health of your CRI-O and Kubelet system components as well as identify potential sources of excessive or imbalanced resource consumption and system instability. The dashboard displays visual analytics widgets organized into the following categories: Critical Includes visualizations that can help you identify node issues that could result in system instability and inefficiency Outliers Includes histograms that visualize processes with runtime durations that fall outside of the 95th percentile Average durations Helps you track change in the time that system components take to process operations Number of operations Displays visualizations that help you identify changes in the number of operations being run, which in turn helps you determine the load balance and efficiency of your system 11.2. Accessing the node metrics dashboard You can access the node metrics dashboard from the Administrator perspective. Procedure Expand the Observe menu option and select Dashboards . Under the Dashboard filter, select Node cluster . Note If no data appears in the visualizations under the Critical category, no critical anomalies were detected. The dashboard is working as intended. 11.3. Identify metrics for indicating optimal node resource usage The node metrics dashboard is organized into four categories: Critical , Outliers , Average durations , and Number of Operations . The metrics in the Critical category help you indicate optimal node resource usage. These metrics include: Top 3 containers with the most OOM kills in the last day Failure rate for image pulls in the last hour Nodes with system reserved memory utilization > 80% Nodes with Kubelet system reserved memory utilization > 50% Nodes with CRI-O system reserved memory utilization > 50% Nodes with system reserved CPU utilization > 80% Nodes with Kubelet system reserved CPU utilization > 50% Nodes with CRI-O system reserved CPU utilization > 50% 11.3.1. Top 3 containers with the most OOM kills in the last day The Top 3 containers with the most OOM kills in the last day query fetches details regarding the top three containers that have experienced the most Out-Of-Memory (OOM) kills in the day. Example default query OOM kills force the system to terminate some processes due to low memory. Frequent OOM kills can hinder the functionality of the node and even the entire Kubernetes ecosystem. Containers experiencing frequent OOM kills might be consuming more memory than they should, which causes system instability. Use this metric to identify containers that are experiencing frequent OOM kills and investigate why these containers are consuming an excessive amount of memory. Adjust the resource allocation if necessary and consider resizing the containers based on their memory usage. You can also review the metrics under the Outliers , Average durations , and Number of operations categories to gain further insights into the health and stability of your nodes. 11.3.2. 
Failure rate for image pulls in the last hour The Failure rate for image pulls in the last hour query divides the total number of failed image pulls by the sum of successful and failed image pulls to provide a ratio of failures. Example default query Understanding the failure rate of image pulls is crucial for maintaining the health of the node. A high failure rate might indicate networking issues, storage problems, misconfigurations, or other issues that could disrupt pod density and the deployment of new containers. If the outcome of this query is high, investigate possible causes such as network connections, the availability of remote repositories, node storage, and the accuracy of image references. You can also review the metrics under the Outliers , Average durations , and Number of operations categories to gain further insights. 11.3.3. Nodes with system reserved memory utilization > 80% The Nodes with system reserved memory utilization > 80% query calculates the percentage of system reserved memory that is utilized for each node. The calculation divides the total resident set size (RSS) by the total memory capacity of the node subtracted from the allocatable memory. RSS is the portion of the system's memory occupied by a process that is held in main memory (RAM). Nodes are flagged if their resulting value equals or exceeds an 80% threshold. Example default query System reserved memory is crucial for a Kubernetes node as it is utilized to run system daemons and Kubernetes system daemons. System reserved memory utilization that exceeds 80% indicates that the system and Kubernetes daemons are consuming too much memory and can suggest node instability that could affect the performance of running pods. Excessive memory consumption can cause Out-of-Memory (OOM) killers that can terminate critical system processes to free up memory. If a node is flagged by this metric, identify which system or Kubernetes processes are consuming excessive memory and take appropriate actions to mitigate the situation. These actions may include scaling back non-critical processes, optimizing program configurations to reduce memory usage, or upgrading node systems to hardware with greater memory capacity. You can also review the metrics under the Outliers , Average durations , and Number of operations categories to gain further insights into node performance. 11.3.4. Nodes with Kubelet system reserved memory utilization > 50% The Nodes with Kubelet system reserved memory utilization > 50% query indicates nodes where the Kubelet's system reserved memory utilization exceeds 50%. The query examines the memory that the Kubelet process itself is consuming on a node. Example default query This query helps you identify any possible memory pressure situations in your nodes that could affect the stability and efficiency of node operations. Kubelet memory utilization that consistently exceeds 50% of the system reserved memory, indicate that the system reserved settings are not configured properly and that there is a high risk of the node becoming unstable. If this metric is highlighted, review your configuration policy and consider adjusting the system reserved settings or the resource limits settings for the Kubelet. Additionally, if your Kubelet memory utilization consistently exceeds half of your total reserved system memory, examine metrics under the Outliers , Average durations , and Number of operations categories to gain further insights for more precise diagnostics. 11.3.5. 
Nodes with CRI-O system reserved memory utilization > 50% The Nodes with CRI-O system reserved memory utilization > 50% query calculates all nodes where the percentage of used memory reserved for the CRI-O system is greater than or equal to 50%. In this case, memory usage is defined by the resident set size (RSS), which is the portion of the CRI-O system's memory held in RAM. Example default query This query helps you monitor the status of memory reserved for the CRI-O system on each node. High utilization could indicate a lack of available resources and potential performance issues. If the memory reserved for the CRI-O system exceeds the advised limit of 50%, it indicates that half of the system reserved memory is being used by CRI-O on a node. Check memory allocation and usage and assess whether memory resources need to be shifted or increased to prevent possible node instability. You can also examine the metrics under the Outliers , Average durations , and Number of operations categories to gain further insights. 11.3.6. Nodes with System Reserved CPU Utilization > 80% The Nodes with system reserved CPU utilization > 80% query identifies nodes where the system-reserved CPU utilization is more than 80%. The query focuses on the system-reserved capacity to calculate the rate of CPU usage in the last 5 minutes and compares that to the CPU resources available on the nodes. If the ratio exceeds 80%, the node's result is displayed in the metric. Example default query This query indicates a critical level of system-reserved CPU usage, which can lead to resource exhaustion. High system-reserved CPU usage can result in the inability of the system processes (including the Kubelet and CRI-O) to adequately manage resources on the node. This query can indicate excessive system processes or misconfigured CPU allocation. Potential corrective measures include rebalancing workloads to other nodes or increasing the CPU resources allocated to the nodes. Investigate the cause of the high system CPU utilization and review the corresponding metrics in the Outliers , Average durations , and Number of operations categories for additional insights into the node's behavior. 11.3.7. Nodes with Kubelet system reserved CPU utilization > 50% The Nodes with Kubelet system reserved CPU utilization > 50% query calculates the percentage of the CPU that the Kubelet system is currently using from system reserved. Example default query The Kubelet uses the system reserved CPU for its own operations and for running critical system services. For the node's health, it is important to ensure that system reserve CPU usage does not exceed the 50% threshold. Exceeding this limit could indicate heavy utilization or load on the Kubelet, which affects node stability and potentially the performance of the entire Kubernetes cluster. If any node is displayed in this metric, the Kubelet and the system overall are under heavy load. You can reduce overload on a particular node by balancing the load across other nodes in the cluster. Check other query metrics under the Outliers , Average durations , and Number of operations categories to gain further insights and take necessary corrective action. 11.3.8. Nodes with CRI-O system reserved CPU utilization > 50% The Nodes with CRI-O system reserved CPU utilization > 50% query identifies nodes where the CRI-O system reserved CPU utilization has exceeded 50% in the last 5 minutes. The query monitors CPU resource consumption by CRI-O, your container runtime, on a per-node basis. 
Example default query This query allows for quick identification of abnormal start times that could negatively impact pod performance. If this query returns a high value, your pod start times are slower than usual, which suggests potential issues with the kubelet, pod configuration, or resources. Investigate further by checking your pod configurations and allocated resources. Make sure that they align with your system capabilities. If you still see high start times, explore metrics panels from other categories on the dashboard to determine the state of your system components. 11.4. Customizing dashboard queries You can customize the default queries used to build the node metrics dashboard. Procedure Choose a metric and click Inspect to navigate into the data. This page displays the metric in detail, including an expanded visualization of the results of the query, the Prometheus query used to analyze the data, and the data subset used in the query. Make any required changes to the query parameters. Optional: Click Add query to run additional queries against the data. Click Run query to rerun the query using your specified parameters.
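Because the dashboard panels are built from standard Prometheus queries, you can also run them outside the console, for example against the Thanos Querier route. The following is a sketch only; it assumes you are logged in with oc as a user permitted to query cluster metrics, and it reuses the OOM-kill query shown above.

# Hypothetical example: run a dashboard query against the Thanos Querier API
THANOS_HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
curl -sk -G -H "Authorization: Bearer $(oc whoami -t)" \
    --data-urlencode 'query=topk(3, sum(increase(container_runtime_crio_containers_oom_count_total[1d])) by (name))' \
    "https://${THANOS_HOST}/api/v1/query"

The response is JSON; pipe it through jq to read the result vector.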
[ "topk(3, sum(increase(container_runtime_crio_containers_oom_count_total[1d])) by (name))", "rate(container_runtime_crio_image_pulls_failure_total[1h]) / (rate(container_runtime_crio_image_pulls_success_total[1h]) + rate(container_runtime_crio_image_pulls_failure_total[1h]))", "sum by (node) (container_memory_rss{id=\"/system.slice\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 80", "sum by (node) (container_memory_rss{id=\"/system.slice/kubelet.service\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 50", "sum by (node) (container_memory_rss{id=\"/system.slice/crio.service\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 50", "sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 80", "sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice/kubelet.service\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 50", "sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice/crio.service\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 50" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/nodes/nodes-dashboard-using
function::ctime
function::ctime Name function::ctime - Convert seconds since epoch into human readable date/time string Synopsis Arguments epochsecs Number of seconds since epoch (as returned by gettimeofday_s ) Description Takes an argument of seconds since the epoch as returned by gettimeofday_s . Returns a string of the form " Wed Jun 30 21:49:08 1993 " . The string will always be exactly 24 characters. If the time would be unreasonably far in the past (before what can be represented with a 32 bit offset in seconds from the epoch) an error will occur (which can be avoided with try/catch). If the time would be unreasonably far in the future, an error will also occur. Note that the epoch (zero) corresponds to " Thu Jan 1 00:00:00 1970 " . The earliest full date given by ctime, corresponding to epochsecs -2147483648 is " Fri Dec 13 20:45:52 1901 " . The latest full date given by ctime, corresponding to epochsecs 2147483647 is " Tue Jan 19 03:14:07 2038 " . The abbreviations for the days of the week are 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', and 'Sat'. The abbreviations for the months are 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', and 'Dec'. Note that the real C library ctime function puts a newline ('\n') character at the end of the string, which this function does not. Also note that since the kernel has no concept of timezones, the returned time is always in GMT.
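A quick way to see the output format from the command line (a sketch, assuming SystemTap is installed) is to print the current time and the epoch from a trivial probe:

stap -e 'probe begin { println(ctime(gettimeofday_s())); println(ctime(0)); exit() }'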
[ "ctime:string(epochsecs:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ctime
2.3.2. Disabling ACPI Soft-Off with the BIOS
2.3.2. Disabling ACPI Soft-Off with the BIOS The preferred method of disabling ACPI Soft-Off is with chkconfig management ( Section 2.3.1, "Disabling ACPI Soft-Off with chkconfig Management" ). However, if the preferred method is not effective for your cluster, follow the procedure in this section. Note Disabling ACPI Soft-Off with the BIOS may not be possible with some computers. You can disable ACPI Soft-Off by configuring the BIOS of each cluster node as follows: Reboot the node and start the BIOS CMOS Setup Utility program. Navigate to the Power menu (or equivalent power management menu). At the Power menu, set the Soft-Off by PWR-BTTN function (or equivalent) to Instant-Off (or the equivalent setting that turns off the node via the power button without delay). Example 2.11, " BIOS CMOS Setup Utility : Soft-Off by PWR-BTTN set to Instant-Off " shows a Power menu with ACPI Function set to Enabled and Soft-Off by PWR-BTTN set to Instant-Off . Note The equivalents to ACPI Function , Soft-Off by PWR-BTTN , and Instant-Off may vary among computers. However, the objective of this procedure is to configure the BIOS so that the computer is turned off via the power button without delay. Exit the BIOS CMOS Setup Utility program, saving the BIOS configuration. When the cluster is configured and running, verify that the node turns off immediately when fenced. Note You can fence the node with the fence_node command or Conga . Example 2.11. BIOS CMOS Setup Utility : Soft-Off by PWR-BTTN set to Instant-Off This example shows ACPI Function set to Enabled , and Soft-Off by PWR-BTTN set to Instant-Off .
[ "+-------------------------------------------------|------------------------+ | ACPI Function [Enabled] | Item Help | | ACPI Suspend Type [S1(POS)] |------------------------| | x Run VGABIOS if S3 Resume Auto | Menu Level * | | Suspend Mode [Disabled] | | | HDD Power Down [Disabled] | | | Soft-Off by PWR-BTTN [Instant-Off] | | | CPU THRM-Throttling [50.0%] | | | Wake-Up by PCI card [Enabled] | | | Power On by Ring [Enabled] | | | Wake Up On LAN [Enabled] | | | x USB KB Wake-Up From S3 Disabled | | | Resume by Alarm [Disabled] | | | x Date(of Month) Alarm 0 | | | x Time(hh:mm:ss) Alarm 0 : 0 : 0 | | | POWER ON Function [BUTTON ONLY] | | | x KB Power ON Password Enter | | | x Hot Key Power ON Ctrl-F1 | | | | | | | | +-------------------------------------------------|------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-bios-setting-ca
Chapter 14. Accessing the CUPS documentation
Chapter 14. Accessing the CUPS documentation CUPS provides browser-based access to the service's documentation that is installed on the CUPS server. This documentation includes: Administration documentation, such as for command-line printer administration and accounting Man pages Programming documentation, such as the administration API References Specifications Prerequisites CUPS is installed and running . The IP address of the client you want to use has permissions to access the web interface. Procedure Use a browser, and access http:// <hostname_or_ip_address> :631/help/ : Expand the entries in Online Help Documents , and select the documentation you want to read.
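If the help pages are not reachable from the client, remote access to the CUPS web interface may still be disabled on the server. One way to open it up is with cupsctl, as sketched below; review the security implications before allowing remote administration, and substitute your own hostname.

# On the CUPS server: allow remote hosts to reach the web interface and administration pages
cupsctl --remote-admin --remote-any

# From the client: verify that the help pages respond
curl -I http://<hostname_or_ip_address>:631/help/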
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_a_cups_printing_server/accessing-the-cups-documentation_configuring-printing
Chapter 1. Using the Red Hat Quay API
Chapter 1. Using the Red Hat Quay API Red Hat Quay provides a full OAuth 2 , RESTful API that: Is available from endpoints of each Red Hat Quay instance from the URL https://<yourquayhost>/api/v1 Lets you connect to endpoints, via a browser, to get, delete, post, and put Red Hat Quay settings by enabling the Swagger UI Can be accessed by applications that make API calls and use OAuth tokens Sends and receives data as JSON The following text describes how to access the Red Hat Quay API and use it to view and modify settings in your Red Hat Quay cluster. This section also lists and describes the API endpoints. 1.1. Accessing the Quay API from Quay.io If you don't have your own Red Hat Quay cluster running yet, you can explore the Red Hat Quay API available from Quay.io from your web browser: The API Explorer that appears shows Quay.io API endpoints. You will not see superuser API endpoints or endpoints for Red Hat Quay features that are not enabled on Quay.io (such as Repository Mirroring). From API Explorer, you can get, and sometimes change, information on: Billing, subscriptions, and plans Repository builds and build triggers Error messages and global messages Repository images, manifests, permissions, notifications, vulnerabilities, and image signing Usage logs Organizations, members and OAuth applications User and robot accounts and more... Select an endpoint to view the Model Schema for each part of the endpoint. Open an endpoint, enter any required parameters (such as a repository name or image), then select the Try it out! button to query or change settings associated with a Quay.io endpoint. 1.2. Create OAuth access token To create an OAuth access token so you can access the API for your organization: Log in to Red Hat Quay and select your Organization (or create a new one). Select the Applications icon from the left navigation. Select Create New Application and give the new application a name when prompted. Select the new application. Select Generate Token from the left navigation. Select the checkboxes to set the scope of the token and select Generate Access Token. Review the permissions you are allowing and select Authorize Application to approve it. Copy the newly generated token to use to access the API. 1.3. Accessing your Quay API from a web browser By enabling Swagger, you can access the API for your own Red Hat Quay instance through a web browser. The Red Hat Quay API explorer is exposed via the Swagger UI at the following URL: That way of accessing the API does not include superuser endpoints that are available on Red Hat Quay installations. Here is an example of accessing a Red Hat Quay API interface running on the local system by running the swagger-ui container image: With the swagger-ui container running, open your web browser to localhost port 8888 to view API endpoints via the swagger-ui container. To avoid errors in the log such as "API calls must be invoked with an X-Requested-With header if called from a browser," add the following line to the config.yaml on all nodes in the cluster and restart Red Hat Quay: 1.4. Accessing the Red Hat Quay API from the command line You can use the curl command to GET, PUT, POST, or DELETE settings via the API for your Red Hat Quay cluster. Replace <token> with the OAuth access token you created earlier to get or change settings in the following examples. 1.4.1.
Get superuser information For example: USD curl -X GET -H "Authorization: Bearer mFCdgS7SAIoMcnTsHCGx23vcNsTgziAa4CmmHIsg" http://quay-server:8080/api/v1/superuser/users/ | jq { "users": [ { "kind": "user", "name": "quayadmin", "username": "quayadmin", "email": "[email protected]", "verified": true, "avatar": { "name": "quayadmin", "hash": "357a20e8c56e69d6f9734d23ef9517e8", "color": "#5254a3", "kind": "user" }, "super_user": true, "enabled": true } ] } 1.4.2. Creating a superuser using the API Configure a superuser name, as described in the Deploy Quay book: Use the configuration editor UI or Edit the config.yaml file directly, with the option of using the configuration API to validate (and download) the updated configuration bundle Create the user account for the superuser name: Obtain an authorization token as detailed above, and use curl to create the user: The returned content includes a generated password for the new user account: { "username": "quaysuper", "email": "[email protected]", "password": "EH67NB3Y6PTBED8H0HC6UVHGGGA3ODSE", "encrypted_password": "fn37AZAUQH0PTsU+vlO9lS0QxPW9A/boXL4ovZjIFtlUPrBz9i4j9UDOqMjuxQ/0HTfy38goKEpG8zYXVeQh3lOFzuOjSvKic2Vq7xdtQsU=" } Now, when you request the list of users , it will show quaysuper as a superuser: USD curl -X GET -H "Authorization: Bearer mFCdgS7SAIoMcnTsHCGx23vcNsTgziAa4CmmHIsg" http://quay-server:8080/api/v1/superuser/users/ | jq { "users": [ { "kind": "user", "name": "quayadmin", "username": "quayadmin", "email": "[email protected]", "verified": true, "avatar": { "name": "quayadmin", "hash": "357a20e8c56e69d6f9734d23ef9517e8", "color": "#5254a3", "kind": "user" }, "super_user": true, "enabled": true }, { "kind": "user", "name": "quaysuper", "username": "quaysuper", "email": "[email protected]", "verified": true, "avatar": { "name": "quaysuper", "hash": "c0e0f155afcef68e58a42243b153df08", "color": "#969696", "kind": "user" }, "super_user": true, "enabled": true } ] } 1.4.3. List usage logs An internal API, /api/v1/superuser/logs , is available to list the usage logs for the current system. The results are paginated, so in the following example, more than 20 repos were created to show how to use multiple invocations to access the entire result set. 1.4.3.1.
Example for pagination First invocation USD curl -X GET -k -H "Authorization: Bearer qz9NZ2Np1f55CSZ3RVOvxjeUdkzYuCp0pKggABCD" https://example-registry-quay-quay-enterprise.apps.example.com/api/v1/superuser/logs | jq Initial output { "start_time": "Sun, 12 Dec 2021 11:41:55 -0000", "end_time": "Tue, 14 Dec 2021 11:41:55 -0000", "logs": [ { "kind": "create_repo", "metadata": { "repo": "t21", "namespace": "namespace1" }, "ip": "10.131.0.13", "datetime": "Mon, 13 Dec 2021 11:41:16 -0000", "performer": { "kind": "user", "name": "user1", "is_robot": false, "avatar": { "name": "user1", "hash": "5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73", "color": "#ad494a", "kind": "user" } }, "namespace": { "kind": "org", "name": "namespace1", "avatar": { "name": "namespace1", "hash": "6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f", "color": "#e377c2", "kind": "org" } } }, { "kind": "create_repo", "metadata": { "repo": "t20", "namespace": "namespace1" }, "ip": "10.131.0.13", "datetime": "Mon, 13 Dec 2021 11:41:05 -0000", "performer": { "kind": "user", "name": "user1", "is_robot": false, "avatar": { "name": "user1", "hash": "5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73", "color": "#ad494a", "kind": "user" } }, "namespace": { "kind": "org", "name": "namespace1", "avatar": { "name": "namespace1", "hash": "6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f", "color": "#e377c2", "kind": "org" } } }, ... { "kind": "create_repo", "metadata": { "repo": "t2", "namespace": "namespace1" }, "ip": "10.131.0.13", "datetime": "Mon, 13 Dec 2021 11:25:17 -0000", "performer": { "kind": "user", "name": "user1", "is_robot": false, "avatar": { "name": "user1", "hash": "5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73", "color": "#ad494a", "kind": "user" } }, "namespace": { "kind": "org", "name": "namespace1", "avatar": { "name": "namespace1", "hash": "6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f", "color": "#e377c2", "kind": "org" } } } ], "next_page": "gAAAAABhtzGDsH38x7pjWhD8MJq1_2FAgqUw2X9S2LoCLNPH65QJqB4XAU2qAxYb6QqtlcWj9eI6DUiMN_q3e3I0agCvB2VPQ8rY75WeaiUzM3rQlMc4i6ElR78t8oUxVfNp1RMPIRQYYZyXP9h6E8LZZhqTMs0S-SedaQJ3kVFtkxZqJwHVjgt23Ts2DonVoYwtKgI3bCC5" } Second invocation using next_page USD curl -X GET -k -H "Authorization: Bearer qz9NZ2Np1f55CSZ3RVOvxjeUdkzYuCp0pKggABCD" https://example-registry-quay-quay-enterprise.apps.example.com/api/v1/superuser/logs?next_page=gAAAAABhtzGDsH38x7pjWhD8MJq1_2FAgqUw2X9S2LoCLNPH65QJqB4XAU2qAxYb6QqtlcWj9eI6DUiMN_q3e3I0agCvB2VPQ8rY75WeaiUzM3rQlMc4i6ElR78t8oUxVfNp1RMPIRQYYZyXP9h6E8LZZhqTMs0S-SedaQJ3kVFtkxZqJwHVjgt23Ts2DonVoYwtKgI3bCC5 | jq Output from second invocation { "start_time": "Sun, 12 Dec 2021 11:42:46 -0000", "end_time": "Tue, 14 Dec 2021 11:42:46 -0000", "logs": [ { "kind": "create_repo", "metadata": { "repo": "t1", "namespace": "namespace1" }, "ip": "10.131.0.13", "datetime": "Mon, 13 Dec 2021 11:25:07 -0000", "performer": { "kind": "user", "name": "user1", "is_robot": false, "avatar": { "name": "user1", "hash": "5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73", "color": "#ad494a", "kind": "user" } }, "namespace": { "kind": "org", "name": "namespace1", "avatar": { "name": "namespace1", "hash": "6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f", "color": "#e377c2", "kind": "org" } } }, ... ] } 1.4.4. 
Directory synchronization To enable directory synchronization for the team newteam in organization testadminorg , where the corresponding group name in LDAP is ldapgroup : To disable synchronization for the same team: 1.4.5. Create a repository build via API In order to build a repository from the specified input and tag the build with custom tags, users can use the requestRepoBuild endpoint. It takes the following data: The archive_url parameter should point to a tar or zip archive that includes the Dockerfile and other required files for the build. The file_id parameter was part of an older build system and can no longer be used. If the Dockerfile is in a sub-directory, it needs to be specified as well. The archive should be publicly accessible. The OAuth application should have the "Administer Organization" scope because only organization admins have access to the robots' account tokens. Otherwise, someone could get robot permissions by simply granting a build access to a robot (without having access themselves), and use it to grab the image contents. In case of errors, check the json block returned and ensure the archive location, pull robot, and other parameters are being passed correctly. Click "Download logs" on the top-right of the individual build's page to check the logs for more verbose messaging. 1.4.6. Create an org robot 1.4.7. Trigger a build Python with requests 1.4.8. Create a private repository 1.4.9. Create a mirrored repository Minimal configuration Extended configuration USD curl -X POST -H "Authorization: Bearer USD{bearer_token}" -H "Content-Type: application/json" --data '{"is_enabled": true, "external_reference": "quay.io/minio/mc", "external_registry_username": "username", "external_registry_password": "password", "external_registry_config": {"unsigned_images":true, "verify_tls": false, "proxy": {"http_proxy": "http://proxy.tld", "https_proxy": "https://proxy.tld", "no_proxy": "domain"}}, "sync_interval": 600, "sync_start_date": "2021-08-06T11:11:39Z", "root_rule": {"rule_kind": "tag_glob_csv", "rule_value": [ "*" ]}, "robot_username": "orga+robot"}' https://USD{quay_registry}/api/v1/repository/USD{orga}/USD{repo}/mirror | jq
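Putting the requestRepoBuild fields together, a build request could look like the following. This is only a sketch: the archive URL, tag, robot account, host, and repository names are placeholders that you must replace with real values.

# Request a build from a publicly accessible archive and tag the result as "latest"
curl -X POST -H "Authorization: Bearer <token>" -H "Content-Type: application/json" \
    --data '{"archive_url": "https://example.com/build-context.tar.gz", "docker_tags": ["latest"], "pull_robot": "orgname+builder", "subdirectory": ""}' \
    https://<yourquayhost>/api/v1/repository/<orgname>/<reponame>/build/ | jq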
[ "https://docs.quay.io/api/swagger/", "https://<yourquayhost>/api/v1/discovery.", "export SERVER_HOSTNAME=<yourhostname> sudo podman run -p 8888:8080 -e API_URL=https://USDSERVER_HOSTNAME:8443/api/v1/discovery docker.io/swaggerapi/swagger-ui", "BROWSER_API_CALLS_XHR_ONLY: false", "curl -X GET -H \"Authorization: Bearer <token_here>\" \"https://<yourquayhost>/api/v1/superuser/users/\"", "curl -X GET -H \"Authorization: Bearer mFCdgS7SAIoMcnTsHCGx23vcNsTgziAa4CmmHIsg\" http://quay-server:8080/api/v1/superuser/users/ | jq { \"users\": [ { \"kind\": \"user\", \"name\": \"quayadmin\", \"username\": \"quayadmin\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": { \"name\": \"quayadmin\", \"hash\": \"357a20e8c56e69d6f9734d23ef9517e8\", \"color\": \"#5254a3\", \"kind\": \"user\" }, \"super_user\": true, \"enabled\": true } ] }", "curl -H \"Content-Type: application/json\" -H \"Authorization: Bearer Fava2kV9C92p1eXnMawBZx9vTqVnksvwNm0ckFKZ\" -X POST --data '{ \"username\": \"quaysuper\", \"email\": \"[email protected]\" }' http://quay-server:8080/api/v1/superuser/users/ | jq", "{ \"username\": \"quaysuper\", \"email\": \"[email protected]\", \"password\": \"EH67NB3Y6PTBED8H0HC6UVHGGGA3ODSE\", \"encrypted_password\": \"fn37AZAUQH0PTsU+vlO9lS0QxPW9A/boXL4ovZjIFtlUPrBz9i4j9UDOqMjuxQ/0HTfy38goKEpG8zYXVeQh3lOFzuOjSvKic2Vq7xdtQsU=\" }", "curl -X GET -H \"Authorization: Bearer mFCdgS7SAIoMcnTsHCGx23vcNsTgziAa4CmmHIsg\" http://quay-server:8080/api/v1/superuser/users/ | jq { \"users\": [ { \"kind\": \"user\", \"name\": \"quayadmin\", \"username\": \"quayadmin\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": { \"name\": \"quayadmin\", \"hash\": \"357a20e8c56e69d6f9734d23ef9517e8\", \"color\": \"#5254a3\", \"kind\": \"user\" }, \"super_user\": true, \"enabled\": true }, { \"kind\": \"user\", \"name\": \"quaysuper\", \"username\": \"quaysuper\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": { \"name\": \"quaysuper\", \"hash\": \"c0e0f155afcef68e58a42243b153df08\", \"color\": \"#969696\", \"kind\": \"user\" }, \"super_user\": true, \"enabled\": true } ] }", "curl -X GET -k -H \"Authorization: Bearer qz9NZ2Np1f55CSZ3RVOvxjeUdkzYuCp0pKggABCD\" https://example-registry-quay-quay-enterprise.apps.example.com/api/v1/superuser/logs | jq", "{ \"start_time\": \"Sun, 12 Dec 2021 11:41:55 -0000\", \"end_time\": \"Tue, 14 Dec 2021 11:41:55 -0000\", \"logs\": [ { \"kind\": \"create_repo\", \"metadata\": { \"repo\": \"t21\", \"namespace\": \"namespace1\" }, \"ip\": \"10.131.0.13\", \"datetime\": \"Mon, 13 Dec 2021 11:41:16 -0000\", \"performer\": { \"kind\": \"user\", \"name\": \"user1\", \"is_robot\": false, \"avatar\": { \"name\": \"user1\", \"hash\": \"5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73\", \"color\": \"#ad494a\", \"kind\": \"user\" } }, \"namespace\": { \"kind\": \"org\", \"name\": \"namespace1\", \"avatar\": { \"name\": \"namespace1\", \"hash\": \"6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f\", \"color\": \"#e377c2\", \"kind\": \"org\" } } }, { \"kind\": \"create_repo\", \"metadata\": { \"repo\": \"t20\", \"namespace\": \"namespace1\" }, \"ip\": \"10.131.0.13\", \"datetime\": \"Mon, 13 Dec 2021 11:41:05 -0000\", \"performer\": { \"kind\": \"user\", \"name\": \"user1\", \"is_robot\": false, \"avatar\": { \"name\": \"user1\", \"hash\": \"5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73\", \"color\": \"#ad494a\", \"kind\": \"user\" } }, \"namespace\": { \"kind\": \"org\", \"name\": \"namespace1\", 
\"avatar\": { \"name\": \"namespace1\", \"hash\": \"6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f\", \"color\": \"#e377c2\", \"kind\": \"org\" } } }, { \"kind\": \"create_repo\", \"metadata\": { \"repo\": \"t2\", \"namespace\": \"namespace1\" }, \"ip\": \"10.131.0.13\", \"datetime\": \"Mon, 13 Dec 2021 11:25:17 -0000\", \"performer\": { \"kind\": \"user\", \"name\": \"user1\", \"is_robot\": false, \"avatar\": { \"name\": \"user1\", \"hash\": \"5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73\", \"color\": \"#ad494a\", \"kind\": \"user\" } }, \"namespace\": { \"kind\": \"org\", \"name\": \"namespace1\", \"avatar\": { \"name\": \"namespace1\", \"hash\": \"6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f\", \"color\": \"#e377c2\", \"kind\": \"org\" } } } ], \"next_page\": \"gAAAAABhtzGDsH38x7pjWhD8MJq1_2FAgqUw2X9S2LoCLNPH65QJqB4XAU2qAxYb6QqtlcWj9eI6DUiMN_q3e3I0agCvB2VPQ8rY75WeaiUzM3rQlMc4i6ElR78t8oUxVfNp1RMPIRQYYZyXP9h6E8LZZhqTMs0S-SedaQJ3kVFtkxZqJwHVjgt23Ts2DonVoYwtKgI3bCC5\" }", "curl -X GET -k -H \"Authorization: Bearer qz9NZ2Np1f55CSZ3RVOvxjeUdkzYuCp0pKggABCD\" https://example-registry-quay-quay-enterprise.apps.example.com/api/v1/superuser/logs?next_page=gAAAAABhtzGDsH38x7pjWhD8MJq1_2FAgqUw2X9S2LoCLNPH65QJqB4XAU2qAxYb6QqtlcWj9eI6DUiMN_q3e3I0agCvB2VPQ8rY75WeaiUzM3rQlMc4i6ElR78t8oUxVfNp1RMPIRQYYZyXP9h6E8LZZhqTMs0S-SedaQJ3kVFtkxZqJwHVjgt23Ts2DonVoYwtKgI3bCC5 | jq", "{ \"start_time\": \"Sun, 12 Dec 2021 11:42:46 -0000\", \"end_time\": \"Tue, 14 Dec 2021 11:42:46 -0000\", \"logs\": [ { \"kind\": \"create_repo\", \"metadata\": { \"repo\": \"t1\", \"namespace\": \"namespace1\" }, \"ip\": \"10.131.0.13\", \"datetime\": \"Mon, 13 Dec 2021 11:25:07 -0000\", \"performer\": { \"kind\": \"user\", \"name\": \"user1\", \"is_robot\": false, \"avatar\": { \"name\": \"user1\", \"hash\": \"5d40b245471708144de9760f2f18113d75aa2488ec82e12435b9de34a6565f73\", \"color\": \"#ad494a\", \"kind\": \"user\" } }, \"namespace\": { \"kind\": \"org\", \"name\": \"namespace1\", \"avatar\": { \"name\": \"namespace1\", \"hash\": \"6cf18b5c19217bfc6df0e7d788746ff7e8201a68cba333fca0437e42379b984f\", \"color\": \"#e377c2\", \"kind\": \"org\" } } }, ] }", "curl -X POST -H \"Authorization: Bearer 9rJYBR3v3pXcj5XqIA2XX6Thkwk4gld4TCYLLWDF\" -H \"Content-type: application/json\" -d '{\"group_dn\": \"cn=ldapgroup,ou=Users\"}' http://quay1-server:8080/api/v1/organization/testadminorg/team/newteam/syncing", "curl -X DELETE -H \"Authorization: Bearer 9rJYBR3v3pXcj5XqIA2XX6Thkwk4gld4TCYLLWDF\" http://quay1-server:8080/api/v1/organization/testadminorg/team/newteam/syncing", "{ \"docker_tags\": [ \"string\" ], \"pull_robot\": \"string\", \"subdirectory\": \"string\", \"archive_url\": \"string\" }", "curl -X PUT https://quay.io/api/v1/organization/{orgname}/robots/{robot shortname} -H 'Authorization: Bearer <token>''", "curl -X POST https://quay.io/api/v1/repository/YOURORGNAME/YOURREPONAME/build/ -H 'Authorization: Bearer <token>'", "import requests r = requests.post('https://quay.io/api/v1/repository/example/example/image', headers={'content-type': 'application/json', 'Authorization': 'Bearer <redacted>'}, data={[<request-body-contents>}) print(r.text)", "curl -X POST https://quay.io/api/v1/repository -H 'Authorization: Bearer {token}' -H 'Content-Type: application/json' -d '{\"namespace\":\"yournamespace\", \"repository\":\"yourreponame\", \"description\":\"descriptionofyourrepo\", \"visibility\": \"private\"}' | jq", "curl -X POST -H \"Authorization: Bearer USD{bearer_token}\" -H 
\"Content-Type: application/json\" --data '{\"external_reference\": \"quay.io/minio/mc\", \"external_registry_username\": \"\", \"sync_interval\": 600, \"sync_start_date\": \"2021-08-06T11:11:39Z\", \"root_rule\": {\"rule_kind\": \"tag_glob_csv\", \"rule_value\": [ \"latest\" ]}, \"robot_username\": \"orga+robot\"}' https://USD{quay_registry}/api/v1/repository/USD{orga}/USD{repo}/mirror | jq", "curl -X POST -H \"Authorization: Bearer USD{bearer_token}\" -H \"Content-Type: application/json\" --data '{\"is_enabled\": true, \"external_reference\": \"quay.io/minio/mc\", \"external_registry_username\": \"username\", \"external_registry_password\": \"password\", \"external_registry_config\": {\"unsigned_images\":true, \"verify_tls\": false, \"proxy\": {\"http_proxy\": \"http://proxy.tld\", \"https_proxy\": \"https://proxy.tld\", \"no_proxy\": \"domain\"}}, \"sync_interval\": 600, \"sync_start_date\": \"2021-08-06T11:11:39Z\", \"root_rule\": {\"rule_kind\": \"tag_glob_csv\", \"rule_value\": [ \"*\" ]}, \"robot_username\": \"orga+robot\"}' https://USD{quay_registry}/api/v1/repository/USD{orga}/USD{repo}/mirror | jq" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_api_guide/using_the_red_hat_quay_api
Chapter 8. Managing users and roles
Chapter 8. Managing users and roles A User defines a set of details for individuals who use the system. Users can be associated with organizations and environments, so that when they create new entities, the default settings are automatically used. Users can also have one or more roles attached, which grants them rights to view and manage organizations and environments. See Section 8.1, "Managing users" for more information on working with users. You can manage permissions of several users at once by organizing them into user groups. User groups themselves can be further grouped to create a hierarchy of permissions. For more information on creating user groups, see Section 8.4, "Creating and managing user groups" . Roles define a set of permissions and access levels. Each role contains one on more permission filters that specify the actions allowed for the role. Actions are grouped according to the Resource type . Once a role has been created, users and user groups can be associated with that role. This way, you can assign the same set of permissions to large groups of users. Satellite provides a set of predefined roles and also enables creating custom roles and permission filters as described in Section 8.5, "Creating and managing roles" . 8.1. Managing users As an administrator, you can create, modify and remove Satellite users. You can also configure access permissions for a user or a group of users by assigning them different roles . 8.1.1. Creating a user Use this procedure to create a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Click Create User . Enter the account details for the new user. Click Submit to create the user. The user account details that you can specify include the following: On the User tab, select an authentication source from the Authorized by list: INTERNAL : to manage the user inside Satellite Server. EXTERNAL : to manage the user with external authentication. For more information, see Configuring authentication for Red Hat Satellite users . On the Organizations tab, select an organization for the user. Specify the default organization Satellite selects for the user after login from the Default on login list. Important If a user is not assigned to an organization, their access is limited. CLI procedure Create a user: The --auth-source-id 1 setting means that the user is authenticated internally, you can specify an external authentication source as an alternative. Add the --admin option to grant administrator privileges to the user. Specifying organization IDs is not required. You can modify the user details later by using the hammer user update command. Additional resources For more information about creating user accounts by using Hammer, enter hammer user create --help . 8.1.2. Assigning roles to a user Use this procedure to assign roles to a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Click the username of the user to be assigned one or more roles. Note If a user account is not listed, check that you are currently viewing the correct organization. To list all the users in Satellite, click Default Organization and then Any Organization . Click the Locations tab, and select a location if none is assigned. Click the Organizations tab, and check that an organization is assigned. Click the Roles tab to display the list of available roles. 
Select the roles to assign from the Roles list. To grant all the available permissions, select the Administrator checkbox. Click Submit . To view the roles assigned to a user, click the Roles tab; the assigned roles are listed under Selected items . To remove an assigned role, click the role name in Selected items . CLI procedure To assign roles to a user, enter the following command: 8.1.3. Impersonating a different user account Administrators can impersonate other authenticated users for testing and troubleshooting purposes by temporarily logging on to the Satellite web UI as a different user. When impersonating another user, the administrator has permissions to access exactly what the impersonated user can access in the system, including the same menus. Audits are created to record the actions that the administrator performs while impersonating another user. However, all actions that an administrator performs while impersonating another user are recorded as having been performed by the impersonated user. Prerequisites Ensure that you are logged on to the Satellite web UI as a user with administrator privileges for Satellite. Procedure In the Satellite web UI, navigate to Administer > Users . To the right of the user that you want to impersonate, from the list in the Actions column, select Impersonate . When you want to stop the impersonation session, in the upper right of the main menu, click the impersonation icon. 8.1.4. Creating an API-only user You can create users that can interact only with the Satellite API. Prerequisites You have created a user and assigned roles to them. Note that this user must be authorized internally. For more information, see the following sections: Section 8.1.1, "Creating a user" Section 8.1.2, "Assigning roles to a user" Procedure Log in to your Satellite as admin. Navigate to Administer > Users and select a user. On the User tab, set a password. Do not save or communicate this password with others. You can create pseudo-random strings on your console: Create a Personal Access Token for the user. For more information, see Section 8.3.1, "Creating a Personal Access Token" . 8.2. Managing SSH keys Adding SSH keys to a user allows deployment of SSH keys during provisioning. For information on deploying SSH keys during provisioning, see Deploying SSH Keys during Provisioning in Provisioning hosts . For information on SSH keys and SSH key creation, see Using SSH-based Authentication in Red Hat Enterprise Linux 8 Configuring basic system settings . 8.2.1. Managing SSH keys for a user Use this procedure to add or remove SSH keys for a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that you are logged in to the Satellite web UI as an Admin user of Red Hat Satellite or a user with the create_ssh_key permission enabled for adding SSH key and destroy_ssh_key permission for removing a key. Procedure In the Satellite web UI, navigate to Administer > Users . From the Username column, click on the username of the required user. Click on the SSH Keys tab. To Add SSH key Prepare the content of the public SSH key in a clipboard. Click Add SSH Key . In the Key field, paste the public SSH key content from the clipboard. In the Name field, enter a name for the SSH key. Click Submit . To Remove SSH key Click Delete on the row of the SSH key to be deleted. Click OK in the confirmation prompt. 
CLI procedure To add an SSH key to a user, you must specify either the path to the public SSH key file, or the content of the public SSH key copied to the clipboard. If you have the public SSH key file, enter the following command: If you have the content of the public SSH key, enter the following command: To delete an SSH key from a user, enter the following command: To view an SSH key attached to a user, enter the following command: To list SSH keys attached to a user, enter the following command: 8.3. Managing Personal Access Tokens Personal Access Tokens allow you to authenticate API requests without using your password. You can set an expiration date for your Personal Access Token and you can revoke it if you decide it should expire before the expiration date. 8.3.1. Creating a Personal Access Token Use this procedure to create a Personal Access Token. Procedure In the Satellite web UI, navigate to Administer > Users . Select a user for which you want to create a Personal Access Token. On the Personal Access Tokens tab, click Add Personal Access Token . Enter a Name for you Personal Access Token. Optional: Select the Expires date to set an expiration date. If you do not set an expiration date, your Personal Access Token will never expire unless revoked. Click Submit. You now have the Personal Access Token available to you on the Personal Access Tokens tab. Important Ensure to store your Personal Access Token as you will not be able to access it again after you leave the page or create a new Personal Access Token. You can click Copy to clipboard to copy your Personal Access Token. Verification Make an API request to your Satellite Server and authenticate with your Personal Access Token: You should receive a response with status 200 , for example: If you go back to Personal Access Tokens tab, you can see the updated Last Used time to your Personal Access Token. 8.3.2. Revoking a Personal Access Token Use this procedure to revoke a Personal Access Token before its expiration date. Procedure In the Satellite web UI, navigate to Administer > Users . Select a user for which you want to revoke the Personal Access Token. On the Personal Access Tokens tab, locate the Personal Access Token you want to revoke. Click Revoke in the Actions column to the Personal Access Token you want to revoke. Verification Make an API request to your Satellite Server and try to authenticate with the revoked Personal Access Token: You receive the following error message: 8.4. Creating and managing user groups 8.4.1. User groups With Satellite, you can assign permissions to groups of users. You can also create user groups as collections of other user groups. If you use an external authentication source, you can map Satellite user groups to external user groups as described in Configuring External User Groups in Installing Satellite Server in a connected network environment . User groups are defined in an organizational context, meaning that you must select an organization before you can access user groups. 8.4.2. Creating a user group Use this procedure to create a user group. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Create User group . On the User Group tab, specify the name of the new user group and select group members: Select the previously created user groups from the User Groups list. Select users from the Users list. On the Roles tab, select the roles you want to assign to the user group. Alternatively, select the Admin checkbox to assign all available permissions. 
Click Submit . CLI procedure To create a user group, enter the following command: 8.4.3. Removing a user group Use the following procedure to remove a user group from Satellite. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Delete to the right of the user group you want to delete. Click Confirm to delete the user group. 8.5. Creating and managing roles Satellite provides a set of predefined roles with permissions sufficient for standard tasks, as listed in Section 8.6, "Predefined roles available in Satellite" . It is also possible to configure custom roles, and assign one or more permission filters to them. Permission filters define the actions allowed for a certain resource type. Certain Satellite plugins create roles automatically. 8.5.1. Creating a role Use this procedure to create a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Click Create Role . Provide a Name for the role. Click Submit to save your new role. CLI procedure To create a role, enter the following command: To serve its purpose, a role must contain permissions. After creating a role, proceed to Section 8.5.3, "Adding permissions to a role" . 8.5.2. Cloning a role Use the Satellite web UI to clone a role. Procedure In the Satellite web UI, navigate to Administer > Roles and select Clone from the drop-down menu to the right of the required role. Provide a Name for the role. Click Submit to clone the role. Click the name of the cloned role and navigate to Filters . Edit the permissions as required. Click Submit to save your new role. 8.5.3. Adding permissions to a role Use this procedure to add permissions to a role. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Roles . Select Add Filter from the drop-down list to the right of the required role. Select the Resource type from the drop-down list. The (Miscellaneous) group gathers permissions that are not associated with any resource group. Click the permissions you want to select from the Permission list. Depending on the Resource type selected, you can select or deselect the Unlimited and Override checkbox. The Unlimited checkbox is selected by default, which means that the permission is applied on all resources of the selected type. When you disable the Unlimited checkbox, the Search field activates. In this field you can specify further filtering with use of the Satellite search syntax. For more information, see Section 8.7, "Granular permission filtering" . When you enable the Override checkbox, you can add additional locations and organizations to allow the role to access the resource type in the additional locations and organizations; you can also remove an already associated location and organization from the resource type to restrict access. Click . Click Submit to save changes. CLI procedure List all available permissions: Add permissions to a role: For more information about roles and permissions parameters, enter the hammer role --help and hammer filter --help commands. 8.5.4. Viewing permissions of a role Use the Satellite web UI to view the permissions of a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Click Filters to the right of the required role to get to the Filters page. The Filters page contains a table of permissions assigned to a role grouped by the resource type. It is also possible to generate a complete table of permissions and actions that you can use on your Satellite system. 
For more information, see Section 8.5.5, "Creating a complete permission table" . 8.5.5. Creating a complete permission table Use the Satellite CLI to create a permission table. Procedure Start the Satellite console with the following command: Insert the following code into the console: The above syntax creates a table of permissions and saves it to the /tmp/table.html file. Press Ctrl + D to exit the Satellite console. Insert the following text at the first line of /tmp/table.html : Append the following text at the end of /tmp/table.html : Open /tmp/table.html in a web browser to view the table. 8.5.6. Removing a role Use the following procedure to remove a role from Satellite. Procedure In the Satellite web UI, navigate to Administer > Roles . Select Delete from the drop-down list to the right of the role to be deleted. Click Confirm to delete the role. 8.6. Predefined roles available in Satellite The following table provides an overview of permissions that predefined roles in Satellite grant to a user. For a complete set of predefined roles and the permissions they grant, log in to Satellite web UI as the privileged user and navigate to Administer > Roles . For more information, see Section 8.5.4, "Viewing permissions of a role" . Predefined role Permissions the role provides Additional information Auditor View the Audit log. Default role View tasks and jobs invocations. Satellite automatically assigns this role to every user in the system. Manager View and edit global settings. Organization admin All permissions except permissions for managing organizations. An administrator role defined per organization. The role has no visibility into resources in other organizations. By cloning this role and assigning an organization, you can delegate administration of that organization to a user. Site manager View permissions for various items. Permissions to manage hosts in the infrastructure. A restrained version of the Manager role. System admin Edit global settings in Administer > Settings . View, create, edit, and destroy users, user groups, and roles. View, create, edit, destroy, and assign organizations and locations but not view resources within them. Users with this role can create users and assign all roles to them. Give this role only to trusted users. Viewer View the configuration of every element of the Satellite structure, logs, reports, and statistics. 8.7. Granular permission filtering As mentioned in Section 8.5.3, "Adding permissions to a role" , Red Hat Satellite provides the ability to limit the configured user permissions to selected instances of a resource type. These granular filters are queries to the Satellite database and are supported by the majority of resource types. 8.7.1. Creating a granular permission filter Use this procedure to create a granular filter. To use the CLI instead of the Satellite web UI, see the CLI procedure . Satellite does not apply search conditions to create actions. For example, limiting the create_locations action with name = "Default Location" expression in the search field does not prevent the user from assigning a custom name to the newly created location. Procedure Specify a query in the Search field on the Edit Filter page. Deselect the Unlimited checkbox for the field to be active. Queries have the following form: field_name marks the field to be queried. The range of available field names depends on the resource type. For example, the Partition Table resource type offers family , layout , and name as query parameters. 
operator specifies the type of comparison between field_name and value . See Section 8.7.3, "Supported operators for granular search" for an overview of applicable operators. value is the value used for filtering. This can be for example a name of an organization. Two types of wildcard characters are supported: underscore (_) provides single character replacement, while percent sign (%) replaces zero or more characters. For most resource types, the Search field provides a drop-down list suggesting the available parameters. This list appears after placing the cursor in the search field. For many resource types, you can combine queries using logical operators such as and , not and has operators. CLI procedure To create a granular filter, enter the hammer filter create command with the --search option to limit permission filters, for example: This command adds to the qa-user role a permission to view, create, edit, and destroy content views that only applies to content views with name starting with ccv . 8.7.2. Examples of using granular permission filters As an administrator, you can allow selected users to make changes in a certain part of the environment path. The following filter allows you to work with content while it is in the development stage of the application lifecycle, but the content becomes inaccessible once is pushed to production. 8.7.2.1. Applying permissions for the host resource type The following query applies any permissions specified for the Host resource type only to hosts in the group named host-editors. The following query returns records where the name matches XXXX , Yyyy , or zzzz example strings: You can also limit permissions to a selected environment. To do so, specify the environment name in the Search field, for example: You can limit user permissions to a certain organization or location with the use of the granular permission filter in the Search field. However, some resource types provide a GUI alternative, an Override checkbox that provides the Locations and Organizations tabs. On these tabs, you can select from the list of available organizations and locations. For more information, see Section 8.7.2.2, "Creating an organization-specific manager role" . 8.7.2.2. Creating an organization-specific manager role Use the Satellite web UI to create an administrative role restricted to a single organization named org-1 . Procedure In the Satellite web UI, navigate to Administer > Roles . Clone the existing Organization admin role. Select Clone from the drop-down list to the Filters button. You are then prompted to insert a name for the cloned role, for example org-1 admin . Click the desired locations and organizations to associate them with the role. Click Submit to create the role. Click org-1 admin , and click Filters to view all associated filters. The default filters work for most use cases. However, you can optionally click Edit to change the properties for each filter. For some filters, you can enable the Override option if you want the role to be able to access resources in additional locations and organizations. For example, by selecting the Domain resource type, the Override option, and then additional locations and organizations using the Locations and Organizations tabs, you allow this role to access domains in the additional locations and organizations that is not associated with this role. You can also click New filter to associate new filters with this role. 8.7.3. Supported operators for granular search Table 8.1. 
Logical operators Operator Description and Combines search criteria. not Negates an expression. has Object must have a specified property. Table 8.2. Symbolic operators Operator Description = Is equal to . An equality comparison that is case-sensitive for text fields. != Is not equal to . An inversion of the = operator. ~ Like . A case-insensitive occurrence search for text fields. !~ Not like . An inversion of the ~ operator. ^ In . A case-sensitive equality comparison for text fields. This generates a different SQL query from the Is equal to comparison and is more efficient for comparing multiple values. !^ Not in . An inversion of the ^ operator. >, >= Greater than , greater than or equal to . Supported for numerical fields only. <, <= Less than , less than or equal to . Supported for numerical fields only.
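As a worked illustration of Section 8.5 and Section 8.7, the following Hammer sketch creates a custom role, attaches a granular permission filter, and assigns the role to a user. The role name, permission IDs, user ID, and host group are example assumptions; look up the real permission IDs on your own Satellite with hammer filter available-permissions.

# Sketch only: the role name, permission IDs, user ID, and host group are example values.
hammer role create --name "host-editors-role"

# Find the IDs of the permissions you want to grant (for example, host-related permissions).
hammer filter available-permissions | grep host

# Attach a filter limited to hosts in one host group, as described in Section 8.7.1.
hammer filter create --role "host-editors-role" \
  --permission-ids 107,109 \
  --search "hostgroup = host-editors"

# Assign the new role to an existing user.
hammer user add-role --id <user_id> --role "host-editors-role"

Because the filter uses a search expression, the role only grants the selected permissions on hosts that match the query, which is the behavior described in Section 8.7, "Granular permission filtering".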
[ "hammer user create --auth-source-id My_Authentication_Source --login My_User_Name --mail My_User_Mail --organization-ids My_Organization_ID_1 , My_Organization_ID_2 --password My_User_Password", "hammer user add-role --id user_id --role role_name", "openssl rand -hex 32", "hammer user ssh-keys add --user-id user_id --name key_name --key-file ~/.ssh/id_rsa.pub", "hammer user ssh-keys add --user-id user_id --name key_name --key ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNtYAAABBBHHS2KmNyIYa27Qaa7EHp+2l99ucGStx4P77e03ZvE3yVRJEFikpoP3MJtYYfIe8k 1/46MTIZo9CPTX4CYUHeN8= host@user", "hammer user ssh-keys delete --id key_id --user-id user_id", "hammer user ssh-keys info --id key_id --user-id user_id", "hammer user ssh-keys list --user-id user_id", "curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token", "{\"satellite_version\":\"6.16.0\",\"result\":\"ok\",\"status\":200,\"version\":\"3.5.1.10\",\"api_version\":2}", "curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token", "{ \"error\": {\"message\":\"Unable to authenticate user My_Username \"} }", "hammer user-group create --name My_User_Group_Name --role-ids My_Role_ID_1 , My_Role_ID_2 --user-ids My_User_ID_1 , My_User_ID_2", "hammer role create --name My_Role_Name", "hammer filter available-permissions", "hammer filter create --permission-ids My_Permission_ID_1 , My_Permission_ID_2 --role My_Role_Name", "foreman-rake console", "f = File.open('/tmp/table.html', 'w') result = Foreman::AccessControl.permissions {|a,b| a.security_block <=> b.security_block}.collect do |p| actions = p.actions.collect { |a| \"<li>#{a}</li>\" } \"<tr><td>#{p.name}</td><td><ul>#{actions.join('')}</ul></td><td>#{p.resource_type}</td></tr>\" end.join(\"\\n\") f.write(result)", "<table border=\"1\"><tr><td>Permission name</td><td>Actions</td><td>Resource type</td></tr>", "</table>", "field_name operator value", "hammer filter create --permission-ids 91 --search \"name ~ ccv*\" --role qa-user", "hostgroup = host-editors", "name ^ (XXXX, Yyyy, zzzz)", "Dev" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/Managing_Users_and_Roles_admin
Chapter 2. Managing secrets securely using Secrets Store CSI driver with GitOps
Chapter 2. Managing secrets securely using Secrets Store CSI driver with GitOps This guide walks you through the process of integrating the Secrets Store Container Storage Interface (SSCSI) driver with the GitOps Operator in OpenShift Container Platform 4.14 and later. 2.1. Overview of managing secrets using Secrets Store CSI driver with GitOps Some applications need sensitive information, such as passwords and usernames which must be concealed as good security practice. If sensitive information is exposed because role-based access control (RBAC) is not configured properly on your cluster, anyone with API or etcd access can retrieve or modify a secret. Important Anyone who is authorized to create a pod in a namespace can use that RBAC to read any secret in that namespace. With the SSCSI Driver Operator, you can use an external secrets store to store and provide sensitive information to pods securely. The process of integrating the OpenShift Container Platform SSCSI driver with the GitOps Operator consists of the following procedures: Storing AWS Secrets Manager resources in GitOps repository Configuring SSCSI driver to mount secrets from AWS Secrets Manager Configuring GitOps managed resources to use mounted secrets 2.1.1. Benefits Integrating the SSCSI driver with the GitOps Operator provides the following benefits: Enhance the security and efficiency of your GitOps workflows Facilitate the secure attachment of secrets into deployment pods as a volume Ensure that sensitive information is accessed securely and efficiently 2.1.2. Secrets store providers The following secrets store providers are available for use with the Secrets Store CSI Driver Operator: AWS Secrets Manager AWS Systems Manager Parameter Store Microsoft Azure Key Vault As an example, consider that you are using AWS Secrets Manager as your secrets store provider with the SSCSI Driver Operator. The following example shows the directory structure in GitOps repository that is ready to use the secrets from AWS Secrets Manager: Example directory structure in GitOps repository 2 Directory that stores the aws-provider.yaml file. 3 Configuration file that installs the AWS Secrets Manager provider and deploys resources for it. 1 Configuration file that creates an application and deploys resources for AWS Secrets Manager. 4 Directory that stores the deployment pod and credential requests. 5 Directory that stores the SecretProviderClass resources to define your secrets store provider. 6 Folder that stores the credentialsrequest.yaml file. This file contains the configuration for the credentials request to mount a secret to the deployment pod. 2.2. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have extracted and prepared the ccoctl binary. You have installed the jq CLI tool. Your cluster is installed on AWS and uses AWS Security Token Service (STS). You have configured AWS Secrets Manager to store the required secrets. SSCSI Driver Operator is installed on your cluster . Red Hat OpenShift GitOps Operator is installed on your cluster. You have a GitOps repository ready to use the secrets. You are logged in to the Argo CD instance by using the Argo CD admin account. 2.3. 
Storing AWS Secrets Manager resources in GitOps repository This guide provides instructions with examples to help you use GitOps workflows with the Secrets Store Container Storage Interface (SSCSI) Driver Operator to mount secrets from AWS Secrets Manager to a CSI volume in OpenShift Container Platform. Important Using the SSCSI Driver Operator with AWS Secrets Manager is not supported in a hosted control plane cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have extracted and prepared the ccoctl binary. You have installed the jq CLI tool. Your cluster is installed on AWS and uses AWS Security Token Service (STS). You have configured AWS Secrets Manager to store the required secrets. SSCSI Driver Operator is installed on your cluster . Red Hat OpenShift GitOps Operator is installed on your cluster. You have a GitOps repository ready to use the secrets. You are logged in to the Argo CD instance by using the Argo CD admin account. Procedure Install the AWS Secrets Manager provider and add resources: In your GitOps repository, create a directory and add aws-provider.yaml file in it with the following configuration to deploy resources for the AWS Secrets Manager provider: Important The AWS Secrets Manager provider for the SSCSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. Example aws-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: 
"/etc/kubernetes/secrets-store-csi-providers" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux Add a secret-provider-app.yaml file in your GitOps repository to create an application and deploy resources for AWS Secrets Manager: Example secret-provider-app.yaml file apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: secret-provider-app namespace: openshift-gitops spec: destination: namespace: openshift-cluster-csi-drivers server: https://kubernetes.default.svc project: default source: path: path/to/aws-provider/resources repoURL: https://github.com/<my-domain>/<gitops>.git 1 syncPolicy: automated: prune: true selfHeal: true 1 Update the value of the repoURL field to point to your GitOps repository. Synchronize resources with the default Argo CD instance to deploy them in the cluster: Add a label to the openshift-cluster-csi-drivers namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it: USD oc label namespace openshift-cluster-csi-drivers argocd.argoproj.io/managed-by=openshift-gitops Apply the resources in your GitOps repository to your cluster, including the aws-provider.yaml file you just pushed: Example output application.argoproj.io/argo-app created application.argoproj.io/secret-provider-app created ... In the Argo CD UI, you can observe that the csi-secrets-store-provider-aws daemonset continues to synchronize resources. To resolve this issue, you must configure the SSCSI driver to mount secrets from the AWS Secrets Manager. 2.4. Configuring SSCSI driver to mount secrets from AWS Secrets Manager To store and manage your secrets securely, use GitOps workflows and configure the Secrets Store Container Storage Interface (SSCSI) Driver Operator to mount secrets from AWS Secrets Manager to a CSI volume in OpenShift Container Platform. For example, consider that you want to mount a secret to a deployment pod under the dev namespace which is in the /environments/dev/ directory. Prerequisites You have the AWS Secrets Manager resources stored in your GitOps repository. Procedure Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers Example output clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "csi-secrets-store-provider-aws" Grant permission to allow the service account to read the AWS secret object: Create a credentialsrequest-dir-aws folder under a namespace-scoped directory in your GitOps repository because the credentials request is namespace-scoped. 
For example, create a credentialsrequest-dir-aws folder under the dev namespace which is in the /environments/dev/ directory by running the following command: USD mkdir credentialsrequest-dir-aws Create a YAML file with the following configuration for the credentials request in the /environments/dev/credentialsrequest-dir-aws/ path to mount a secret to the deployment pod in the dev namespace: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "secretsmanager:GetSecretValue" - "secretsmanager:DescribeSecret" effect: Allow resource: "<aws_secret_arn>" 1 secretRef: name: aws-creds namespace: dev 2 serviceAccountNames: - default 2 The namespace for the secret reference. Update the value of this namespace field according to your project deployment setup. 1 The ARN of your secret in the region where your cluster is on. The <aws_region> of <aws_secret_arn> has to match the cluster region. If it does not match, create a replication of your secret in the region where your cluster is on. Tip To find your cluster region, run the command: USD oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}' Example output us-west-2 Retrieve the OIDC provider by running the following command: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' Example output https://<oidc_provider_name> Copy the OIDC provider name <oidc_provider_name> from the output to use in the step. Use the ccoctl tool to process the credentials request by running the following command: USD ccoctl aws create-iam-roles \ --name my-role --region=<aws_region> \ --credentials-requests-dir=credentialsrequest-dir-aws \ --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output Example output 2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds . Check the role policy on AWS to confirm the <aws_region> of "Resource" in the role policy matches the cluster region: Example role policy { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": "arn:aws:secretsmanager:<aws_region>:<aws_account_id>:secret:my-secret-xxxxxx" } ] } Bind the service account with the role ARN by running the following command: USD oc annotate -n <namespace> sa/<app_service_account> eks.amazonaws.com/role-arn="<aws_role_arn>" Example command USD oc annotate -n dev sa/default eks.amazonaws.com/role-arn="<aws_role_arn>" Example output serviceaccount/default annotated Create a namespace-scoped SecretProviderClass resource to define your secrets store provider. For example, you create a SecretProviderClass resource in /environments/dev/apps/app-taxi/services/taxi/base/config directory of your GitOps repository. 
Create a secret-provider-class-aws.yaml file in the same directory where the target deployment is located in your GitOps repository: Example secret-provider-class-aws.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: dev 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: "testSecret" 5 objectType: "secretsmanager" 1 Name of the secret provider class. 2 Namespace for the secret provider class. The namespace must match the namespace of the resource which will use the secret. 3 Name of the secret store provider. 4 Specifies the provider-specific configuration parameters. 5 The secret name you created in AWS. Verify that after pushing this YAML file to your GitOps repository, the namespace-scoped SecretProviderClass resource is populated in the target application page in the Argo CD UI. Note If the Sync Policy of your application is not set to Auto , you can manually sync the SecretProviderClass resource by clicking Sync in the Argo CD UI. 2.5. Configuring GitOps managed resources to use mounted secrets You must configure the GitOps managed resources by adding volume mounts configuration to a deployment and configuring the container pod to use the mounted secret. Prerequisites You have the AWS Secrets Manager resources stored in your GitOps repository. You have the Secrets Store Container Storage Interface (SSCSI) driver configured to mount secrets from AWS Secrets Manager. Procedure Configure the GitOps managed resources. For example, consider that you want to add volume mounts configuration to the deployment of app-taxi application and the 100-deployment.yaml file is in the /environments/dev/apps/app-taxi/services/taxi/base/config/ directory. Add the volume mounting to the deployment YAML file and configure the container pod to use the secret provider class resources and mounted secret: Example YAML file apiVersion: apps/v1 kind: Deployment metadata: name: taxi namespace: dev 1 spec: replicas: 1 template: metadata: # ... spec: containers: - image: nginxinc/nginx-unprivileged:latest imagePullPolicy: Always name: taxi ports: - containerPort: 8080 volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" 2 readOnly: true resources: {} serviceAccountName: default volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-aws-provider" 3 status: {} # ... 1 Namespace for the deployment. This must be the same namespace as the secret provider class. 2 The path to mount secrets in the volume mount. 3 Name of the secret provider class. Push the updated resource YAML file to your GitOps repository. In the Argo CD UI, click REFRESH on the target application page to apply the updated deployment manifest. Verify that all the resources are successfully synchronized on the target application page. Verify that you can you can access the secrets from AWS Secrets manager in the pod volume mount: List the secrets in the pod mount: USD oc exec <deployment_name>-<hash> -n <namespace> -- ls /mnt/secrets-store/ Example command USD oc exec taxi-5959644f9-t847m -n dev -- ls /mnt/secrets-store/ Example output <secret_name> View a secret in the pod mount: USD oc exec <deployment_name>-<hash> -n <namespace> -- cat /mnt/secrets-store/<secret_name> Example command USD oc exec taxi-5959644f9-t847m -n dev -- cat /mnt/secrets-store/testSecret Example output <secret_value> 2.6. 
Additional resources Obtaining the ccoctl tool About the Cloud Credential Operator Determining the Cloud Credential Operator mode Configure your AWS cluster to use AWS STS Configuring AWS Secrets Manager to store the required secrets About the Secrets Store CSI Driver Operator Mounting secrets from an external secrets store to a CSI volume
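To tie the preceding sections together, the following shell sketch repeats the key binding and verification steps for a deployment that consumes a mounted secret. It assumes the dev/taxi example used throughout this chapter; the namespace, service account, role ARN, pod name prefix, and secret name are placeholders to adapt to your own setup.

# Sketch only: "dev", "taxi", "testSecret", and the role ARN follow the example in this chapter.
# Bind the application service account to the AWS role created with ccoctl.
oc annotate -n dev sa/default eks.amazonaws.com/role-arn="<aws_role_arn>"

# Find a running pod for the deployment (assumes the pod name starts with "taxi-").
POD=$(oc get pods -n dev -o name | grep '^pod/taxi-' | head -n 1)

# List and read the secrets mounted by the SSCSI driver.
oc exec -n dev "${POD}" -- ls /mnt/secrets-store/
oc exec -n dev "${POD}" -- cat /mnt/secrets-store/testSecret

If the mount path is empty, re-check that the secretProviderClass value in the deployment's volumeAttributes matches the SecretProviderClass created in Section 2.4 and that both resources are in the same namespace.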
[ "├── config │ ├── argocd │ │ ├── argo-app.yaml │ │ ├── secret-provider-app.yaml 1 │ │ ├── │ └── sscsid 2 │ └── aws-provider.yaml 3 ├── environments │ ├── dev 4 │ │ ├── apps │ │ │ └── app-taxi 5 │ │ │ ├── │ │ ├── credentialsrequest-dir-aws 6 │ │ └── env │ │ ├── │ ├── new-env │ │ ├──", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux", "apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: secret-provider-app namespace: openshift-gitops spec: destination: namespace: openshift-cluster-csi-drivers server: https://kubernetes.default.svc project: default source: path: path/to/aws-provider/resources repoURL: https://github.com/<my-domain>/<gitops>.git 1 syncPolicy: automated: prune: true selfHeal: true", "oc label namespace openshift-cluster-csi-drivers argocd.argoproj.io/managed-by=openshift-gitops", "application.argoproj.io/argo-app created application.argoproj.io/secret-provider-app created", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers", "clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: \"csi-secrets-store-provider-aws\"", "mkdir credentialsrequest-dir-aws", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"secretsmanager:GetSecretValue\" - 
\"secretsmanager:DescribeSecret\" effect: Allow resource: \"<aws_secret_arn>\" 1 secretRef: name: aws-creds namespace: dev 2 serviceAccountNames: - default", "oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}'", "us-west-2", "oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'", "https://<oidc_provider_name>", "ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output", "2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"secretsmanager:GetSecretValue\", \"secretsmanager:DescribeSecret\" ], \"Resource\": \"arn:aws:secretsmanager:<aws_region>:<aws_account_id>:secret:my-secret-xxxxxx\" } ] }", "oc annotate -n <namespace> sa/<app_service_account> eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "oc annotate -n dev sa/default eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "serviceaccount/default annotated", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: dev 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testSecret\" 5 objectType: \"secretsmanager\"", "apiVersion: apps/v1 kind: Deployment metadata: name: taxi namespace: dev 1 spec: replicas: 1 template: metadata: spec: containers: - image: nginxinc/nginx-unprivileged:latest imagePullPolicy: Always name: taxi ports: - containerPort: 8080 volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" 2 readOnly: true resources: {} serviceAccountName: default volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3 status: {}", "oc exec <deployment_name>-<hash> -n <namespace> -- ls /mnt/secrets-store/", "oc exec taxi-5959644f9-t847m -n dev -- ls /mnt/secrets-store/", "<secret_name>", "oc exec <deployment_name>-<hash> -n <namespace> -- cat /mnt/secrets-store/<secret_name>", "oc exec taxi-5959644f9-t847m -n dev -- cat /mnt/secrets-store/testSecret", "<secret_value>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/security/managing-secrets-securely-using-sscsid-with-gitops
Chapter 1. Downloading the Product
Chapter 1. Downloading the Product 1.1. Back Up Your Data Warning Red Hat recommends that you back up your system settings and data before undertaking any of the configuration tasks mentioned in this book.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/ch01
15.3. XML Representation of Additional OVF Data for a Virtual Machine
15.3. XML Representation of Additional OVF Data for a Virtual Machine Use a GET request for a virtual machine with the All-Content: true header to include additional OVF data with the representation of the virtual machine. The Accept header defaults to application/xml if left blank, and the data is represented with HTML entities so as not to interfere with the XML tags. Specifying the Accept: application/json header will return the data in standard XML tagging. This example representation has been formatted from its standard block format to improve legibility. Example 15.2. XML representation of additional ovf data for a virtual machine
[ "GET /ovirt-engine/api/vms/70b4d9a7-4f73-4def-89ca-24fc5f60e01a HTTP/1.1 All-Content: true <?xml version='1.0' encoding='UTF-8'?> <ovf:Envelope xmlns:ovf=\"http://schemas.dmtf.org/ovf/envelope/1/\" xmlns:rasd=\"http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData\" xmlns:vssd=\"http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" ovf:version=\"3.5.0.0\"> <References/> <Section xsi:type=\"ovf:NetworkSection_Type\"> <Info>List of networks</Info> <Network ovf:name=\"Network 1\"/> </Section> <Section xsi:type=\"ovf:DiskSection_Type\"> <Info>List of Virtual Disks</Info> </Section> <Content ovf:id=\"out\" xsi:type=\"ovf:VirtualSystem_Type\"> <CreationDate>2014/12/03 04:25:45</CreationDate> <ExportDate>2015/02/09 14:12:24</ExportDate> <DeleteProtected>false</DeleteProtected> <SsoMethod>guest_agent</SsoMethod> <IsSmartcardEnabled>false</IsSmartcardEnabled> <TimeZone>Etc/GMT</TimeZone> <default_boot_sequence>0</default_boot_sequence> <Generation>1</Generation> <VmType>1</VmType> <MinAllocatedMem>1024</MinAllocatedMem> <IsStateless>false</IsStateless> <IsRunAndPause>false</IsRunAndPause> <AutoStartup>false</AutoStartup> <Priority>1</Priority> <CreatedByUserId>fdfc627c-d875-11e0-90f0-83df133b58cc</CreatedByUserId> <IsBootMenuEnabled>false</IsBootMenuEnabled> <IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled> <IsSpiceCopyPasteEnabled>true</IsSpiceCopyPasteEnabled> <Name>VM_export</Name> <TemplateId>00000000-0000-0000-0000-000000000000</TemplateId> <TemplateName>Blank</TemplateName> <IsInitilized>false</IsInitilized> <Origin>3</Origin> <DefaultDisplayType>1</DefaultDisplayType> <TrustedService>false</TrustedService> <OriginalTemplateId>00000000-0000-0000-0000-000000000000</OriginalTemplateId> <OriginalTemplateName>Blank</OriginalTemplateName> <UseLatestVersion>false</UseLatestVersion> <Section ovf:id=\"70b4d9a7-4f73-4def-89ca-24fc5f60e01a\" ovf:required=\"false\" xsi:type=\"ovf:OperatingSystemSection_Type\"> <Info>Guest Operating System</Info> <Description>other</Description> </Section> <Section xsi:type=\"ovf:VirtualHardwareSection_Type\"> <Info>1 CPU, 1024 Memeory</Info> <System> <vssd:VirtualSystemType>ENGINE 3.5.0.0</vssd:VirtualSystemType> </System> <Item> <rasd:Caption>1 virtual cpu</rasd:Caption> <rasd:Description>Number of virtual CPU</rasd:Description> <rasd:InstanceId>1</rasd:InstanceId> <rasd:ResourceType>3</rasd:ResourceType> <rasd:num_of_sockets>1</rasd:num_of_sockets> <rasd:cpu_per_socket>1</rasd:cpu_per_socket> </Item> <Item> <rasd:Caption>1024 MB of memory</rasd:Caption> <rasd:Description>Memory Size</rasd:Description> <rasd:InstanceId>2</rasd:InstanceId> <rasd:ResourceType>4</rasd:ResourceType> <rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits> <rasd:VirtualQuantity>1024</rasd:VirtualQuantity> </Item> <Item> <rasd:Caption>USB Controller</rasd:Caption> <rasd:InstanceId>3</rasd:InstanceId> <rasd:ResourceType>23</rasd:ResourceType> <rasd:UsbPolicy>DISABLED</rasd:UsbPolicy> </Item> </Section> </Content> </ovf:Envelope>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_additional_ovf_data_for_a_virtual_machine
14.8. Additional Resources
14.8. Additional Resources dhcpd(8) man page - Describes how the DHCP daemon works. dhcpd.conf(5) man page - Explains how to configure the DHCP configuration file; includes some examples. dhcpd.leases(5) man page - Describes a persistent database of leases. dhcp-options(5) man page - Explains the syntax for declaring DHCP options in dhcpd.conf ; includes some examples. dhcrelay(8) man page - Explains the DHCP Relay Agent and its configuration options. /usr/share/doc/dhcp- version / - Contains example files, README files, and release notes for current versions of the DHCP service.
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-dhcp-additional-resources
Service Mesh
Service Mesh
OpenShift Container Platform 4.17
Service Mesh installation, usage, and release notes
Red Hat OpenShift Documentation Team
[ "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: runtime: components: pilot: container: env: ENABLE_NATIVE_SIDECARS: \"true\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"false\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true", "spec: meshConfig discoverySelectors: - matchLabels: env: prod region: us-east1 - matchExpressions: - key: app operator: In values: - cassandra - spark", "spec: meshConfig: extensionProviders: - name: prometheus prometheus: {} --- apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics spec: metrics: - providers: - name: prometheus", "spec: techPreview: gatewayAPI: enabled: true", "spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: cluster-wide namespace: istio-system spec: version: v2.3 techPreview: controlPlaneMode: ClusterScoped 1", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - '*' 1", "kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.0\" | kubectl apply -f -; }", "spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" # and optionally, for the deployment controller PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"", "apiVersion: gateway.networking.k8s.io/v1alpha2 kind: Gateway metadata: name: gateway spec: addresses: - value: ingress.istio-gateways.svc.cluster.local type: Hostname", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: trust: manageNetworkPolicy: false", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: techPreview: meshConfig: defaultConfig: proxyMetadata: HTTP_STRIP_FRAGMENT_FROM_PATH_UNSAFE_IF_DISABLED: \"false\"", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]", "spec: techPreview: global: pathNormalization: <option>", "oc create -f <myEnvoyFilterFile>", "apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: ingress-case-insensitive namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY listener: filterChain: filter: name: \"envoy.filters.network.http_connection_manager\" subFilter: name: \"envoy.filters.http.router\" patch: operation: INSERT_BEFORE value: name: envoy.lua typed_config: \"@type\": \"type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua\" inlineCode: | function envoy_on_request(request_handle) local path = request_handle:headers():get(\":path\") request_handle:headers():replace(\":path\", string.lower(path)) end", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: mode: ClusterWide meshConfig: discoverySelectors: - 
matchLabels: istio-discovery: enabled gateways: ingress: enabled: true", "label namespace istio-system istio-discovery=enabled", "2023-05-02T15:20:42.541034Z error watch error in cluster Kubernetes: failed to list *v1alpha2.TLSRoute: the server could not find the requested resource (get tlsroutes.gateway.networking.k8s.io) 2023-05-02T15:20:42.616450Z info kube controller \"gateway.networking.k8s.io/v1alpha2/TCPRoute\" is syncing", "kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v0.5.1\" | kubectl apply -f -; }", "apiVersion: networking.istio.io/v1beta1 kind: ProxyConfig metadata: name: mesh-wide-concurrency namespace: <istiod-namespace> spec: concurrency: 0", "api: namespaces: exclude: - \"^istio-operator\" - \"^kube-.*\" - \"^openshift.*\" - \"^ibm.*\" - \"^kiali-operator\"", "spec: proxy: networking: trafficControl: inbound: excludedPorts: - 15020", "spec: runtime: components: pilot: container: env: APPLY_WASM_PLUGINS_TO_INBOUND_ONLY: \"true\"", "error Installer exits with open /host/etc/cni/multus/net.d/v2-2-istio-cni.kubeconfig.tmp.841118073: no such file or directory", "oc label namespace istio-system maistra.io/ignore-namespace-", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true", "An error occurred admission webhook smcp.validation.maistra.io denied the request: [support for policy.type \"Mixer\" and policy.Mixer options have been removed in v2.1, please use another alternative, support for telemetry.type \"Mixer\" and telemetry.Mixer options have been removed in v2.1, please use another alternative]\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: policy: type: Istiod telemetry: type: Istiod version: v2.6", "oc project istio-system", "oc get smcp -o yaml", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6", "oc get smcp -o yaml", "oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml #Edit the smcp-resource.yaml file. 
oc replace -f smcp-resource.yaml", "oc patch smcp.v1.maistra.io <smcp_name> --type json --patch '[{\"op\": \"replace\",\"path\":\"/spec/path/to/bad/setting\",\"value\":\"corrected-value\"}]'", "oc edit smcp.v1.maistra.io <smcp_name>", "oc project istio-system", "oc get servicemeshcontrolplanes.v1.maistra.io <smcp_name> -o yaml > <smcp_name>.v1.yaml", "oc get smcp <smcp_name> -o yaml > <smcp_name>.v2.yaml", "oc new-project istio-system-upgrade", "oc create -n istio-system-upgrade -f <smcp_name>.v2.yaml", "spec: policy: type: Mixer", "spec: telemetry: type: Mixer", "apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-disable namespace: <namespace> spec: targets: - name: productpage", "apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-disable namespace: <namespace> spec: mtls: mode: DISABLE selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage", "apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: targets: - name: productpage ports: - number: 9000 peers: - mtls: origins: - jwt: issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" jwtHeaders: - \"x-goog-iap-jwt-assertion\" triggerRules: - excludedPaths: - exact: /health_check principalBinding: USE_ORIGIN", "#require mtls for productpage:9000 apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage portLevelMtls: 9000: mode: STRICT --- #JWT authentication for productpage apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage jwtRules: - issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" fromHeaders: - name: \"x-goog-iap-jwt-assertion\" --- #Require JWT token to access product page service from #any client to all paths except /health_check apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: action: ALLOW selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage rules: - to: # require JWT token to access all other paths - operation: notPaths: - /health_check from: - source: # if using principalBinding: USE_PEER in the Policy, # then use principals, e.g. 
# principals: # - \"*\" requestPrincipals: - \"*\" - to: # no JWT token required to access health_check - operation: paths: - /health_check", "spec: tracing: sampling: 100 # 1% type: Jaeger", "spec: addons: jaeger: name: jaeger install: storage: type: Memory # or Elasticsearch for production mode memory: maxTraces: 100000 elasticsearch: # the following values only apply if storage:type:=Elasticsearch storage: # specific storageclass configuration for the Jaeger Elasticsearch (optional) size: \"100G\" storageClassName: \"storageclass\" nodeCount: 3 redundancyPolicy: SingleRedundancy runtime: components: tracing.jaeger: {} # general Jaeger specific runtime configuration (optional) tracing.jaeger.elasticsearch: #runtime configuration for Jaeger Elasticsearch deployment (optional) container: resources: requests: memory: \"1Gi\" cpu: \"500m\" limits: memory: \"1Gi\"", "spec: addons: grafana: enabled: true install: {} # customize install kiali: enabled: true name: kiali install: {} # customize install", "oc rollout restart <deployment>", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system", "oc -n istio-system edit smcp <name> 1", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80", "oc edit deployment -n <namespace> <deploymentName>", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-usernamepolicy spec: action: ALLOW rules: - when: - key: 'request.regex.headers[username]' values: - \"allowed.*\" selector: matchLabels: app: httpbin", "oc -n openshift-operators get subscriptions", "oc -n openshift-operators edit subscription <name> 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription 
metadata: labels: operators.coreos.com/servicemeshoperator.openshift-operators: \"\" name: servicemeshoperator namespace: openshift-operators spec: config: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc -n openshift-operators get po -l name=istio-operator -owide", "oc new-project istio-system", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 tracing: type: None sampling: 10000 addons: kiali: enabled: true name: kiali grafana: enabled: true", "oc create -n istio-system -f <istio_installation.yaml>", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.6.6 66m", "spec: runtime: defaults: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "spec: runtime: components: pilot: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "spec: gateways: ingress: runtime: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved egress: runtime: pod: nodeSelector: 3 node-role.kubernetes.io/infra: \"\" tolerations: 4 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc -n istio-system edit smcp <name> 1", "spec: runtime: defaults: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc -n istio-system edit smcp <name> 1", "spec: runtime: components: pilot: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "spec: gateways: ingress: runtime: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved egress: runtime: pod: nodeSelector: 3 node-role.kubernetes.io/infra: \"\" tolerations: 4 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc -n istio-system get pods -owide", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide", "oc new-project istio-system", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: 
basic namespace: istio-system spec: version: v2.6 mode: ClusterWide", "oc create -n istio-system -f <istio_installation.yaml>", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s jaeger-67c75bd6dc-jv6k6 2/2 Running 0 65m kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project <your-project>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system default", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default namespace: my-application spec: controlPlaneRef: namespace: istio-system name: basic", "oc apply -f <file-name>", "oc get smm default -n my-application", "NAME CONTROL PLANE READY AGE default istio-system/basic True 2m11s", "oc describe smmr default -n istio-system", "Name: default Namespace: istio-system Labels: <none> Status: Configured Members: default my-application Members: default my-application", "oc edit smmr default -n istio-system", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: memberSelectors: 1 - matchLabels: 2 mykey: myvalue 3 - matchLabels: 4 myotherkey: myothervalue 5", "oc new-project bookinfo", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system -o wide", "NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml", "service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml", "gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "oc apply -n bookinfo -f 
https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml", "destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m", "echo \"http://USDGATEWAY_URL/productpage\"", "oc delete project bookinfo", "oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'", "oc get deployment -n <namespace>", "get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'", "oc apply -n <namespace> -f deployment.yaml", "oc apply -n bookinfo -f deployment-ratings-v1.yaml", "oc get deployment -n <namespace> <deploymentName> -o yaml", "oc get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"", "oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'", "oc policy add-role-to-user -n istio-system --role-namespace istio-system mesh-user <user_name>", "apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default spec: controlPlaneRef: namespace: istio-system name: basic", "oc policy add-role-to-user", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: istio-system name: mesh-users roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: mesh-user subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc create configmap --from-file=<profiles-directory> smcp-templates -n openshift-operators", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - default", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: version: v2.6 security: dataPlane: mtls: true", "apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: default namespace: <namespace> spec: mtls: mode: STRICT", "oc create -n <namespace> -f <policy.yaml>", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: default namespace: <namespace> spec: host: \"*.<namespace>.svc.cluster.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL", "oc create -n <namespace> -f <destination-rule.yaml>", "kind: ServiceMeshControlPlane spec: security: controlPlane: tls: minProtocolVersion: TLSv1_2", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: DENY rules: - from: - source: 
ipBlocks: [\"1.2.3.4\"]", "oc create -n istio-system -f <filename>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-deny namespace: bookinfo spec: selector: matchLabels: app: httpbin version: v1 action: DENY rules: - from: - source: notNamespaces: [\"bookinfo\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allow-all namespace: bookinfo spec: action: ALLOW rules: - {}", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: bookinfo spec: {}", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: ALLOW rules: - from: - source: ipBlocks: [\"1.2.3.4\", \"5.6.7.0/24\"]", "apiVersion: \"security.istio.io/v1beta1\" kind: \"RequestAuthentication\" metadata: name: \"jwt-example\" namespace: bookinfo spec: selector: matchLabels: app: httpbin jwtRules: - issuer: \"http://localhost:8080/auth/realms/master\" jwksUri: \"http://keycloak.default.svc:8080/auth/realms/master/protocol/openid-connect/certs\"", "apiVersion: \"security.istio.io/v1beta1\" kind: \"AuthorizationPolicy\" metadata: name: \"frontend-ingress\" namespace: bookinfo spec: selector: matchLabels: app: httpbin action: DENY rules: - from: - source: notRequestPrincipals: [\"*\"]", "oc edit smcp <smcp-name>", "spec: security: dataPlane: mtls: true # enable mtls for data plane # JWKSResolver extra CA # PEM-encoded certificate content to trust an additional CA jwksResolverCA: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE-----", "kind: ConfigMap apiVersion: v1 data: extra.pem: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE-----", "oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true certificateAuthority: type: Istiod istiod: type: PrivateKey privateKey: rootCADir: /etc/cacerts", "oc -n istio-system delete pods -l 'app in (istiod,istio-ingressgateway, istio-egressgateway)'", "oc -n bookinfo delete pods --all", "pod \"details-v1-6cd699df8c-j54nh\" deleted pod \"productpage-v1-5ddcb4b84f-mtmf2\" deleted pod \"ratings-v1-bdbcc68bc-kmng4\" deleted pod \"reviews-v1-754ddd7b6f-lqhsv\" deleted pod \"reviews-v2-675679877f-q67r2\" deleted pod \"reviews-v3-79d7549c7-c2gjs\" deleted", "oc get pods -n bookinfo", "sleep 60 oc -n bookinfo exec \"USD(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})\" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > \"proxy-cert-\" counter \".pem\"}' < certs.pem", "openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt", "openssl x509 -in ./proxy-cert-3.pem -text -noout > /tmp/pod-root-cert.crt.txt", "diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt", "openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt", "openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt", "diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt", "openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) 
./proxy-cert-1.pem", "oc delete secret cacerts -n istio-system", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-root-issuer namespace: cert-manager spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: root-ca namespace: cert-manager spec: isCA: true duration: 21600h # 900d secretName: root-ca commonName: root-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: selfsigned-root-issuer kind: Issuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: root-ca spec: ca: secretName: root-ca", "oc apply -f cluster-issuer.yaml", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: istio-system spec: isCA: true duration: 21600h secretName: istio-ca commonName: istio-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: root-ca kind: ClusterIssuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: istio-system spec: ca: secretName: istio-ca", "oc apply -n istio-system -f istio-ca.yaml", "helm install istio-csr jetstack/cert-manager-istio-csr -n istio-system -f deploy/examples/cert-manager/istio-csr/istio-csr.yaml", "replicaCount: 2 image: repository: quay.io/jetstack/cert-manager-istio-csr tag: v0.6.0 pullSecretName: \"\" app: certmanager: namespace: istio-system issuer: group: cert-manager.io kind: Issuer name: istio-ca controller: configmapNamespaceSelector: \"maistra.io/member-of=istio-system\" leaderElectionNamespace: istio-system istio: namespace: istio-system revisions: [\"basic\"] server: maxCertificateDuration: 5m tls: certificateDNSNames: # This DNS name must be set in the SMCP spec.security.certificateAuthority.cert-manager.address - cert-manager-istio-csr.istio-system.svc", "oc apply -f mesh.yaml -n istio-system", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: grafana: enabled: false kiali: enabled: false prometheus: enabled: false proxy: accessLogging: file: name: /dev/stdout security: certificateAuthority: cert-manager: address: cert-manager-istio-csr.istio-system.svc:443 type: cert-manager dataPlane: mtls: true identity: type: ThirdParty tracing: type: None --- apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - httpbin - sleep", "oc new-project <namespace>", "oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin.yaml", "oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/sleep/sleep.yaml", "oc exec \"USD(oc get pod -l app=sleep -n <namespace> -o jsonpath={.items..metadata.name})\" -c sleep -n <namespace> -- curl http://httpbin.<namespace>:8000/ip -s -o /dev/null -w \"%{http_code}\\n\"", "200", "oc apply -n <namespace> -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin-gateway.yaml", "INGRESS_HOST=USD(oc -n istio-system get routes istio-ingressgateway -o jsonpath='{.spec.host}')", "curl -s -I http://USDINGRESS_HOST/headers -o /dev/null -w \"%{http_code}\" -s", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: 
/tmp/tls.key", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy", "apiVersion: v1 kind: Service metadata: name: istio-ingressgateway namespace: istio-ingress spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 --- apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: istio: ingressgateway sidecar.istio.io/inject: \"true\" 1 spec: containers: - name: istio-proxy image: auto 2", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: istio-ingressgateway-sds namespace: istio-ingress rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-sds namespace: istio-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: istio-ingressgateway-sds subjects: - kind: ServiceAccount name: default", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: gatewayingress namespace: istio-ingress spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: labels: istio: ingressgateway release: istio name: ingressgatewayhpa namespace: istio-ingress spec: maxReplicas: 5 metrics: - resource: name: cpu target: averageUtilization: 80 type: Utilization type: Resource minReplicas: 2 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: istio-ingressgateway", "apiVersion: policy/v1 kind: PodDisruptionBudget metadata: labels: istio: ingressgateway release: istio name: ingressgatewaypdb namespace: istio-ingress spec: minAvailable: 1 selector: matchLabels: istio: ingressgateway", "oc get svc istio-ingressgateway -n istio-system", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"", "oc apply -f gateway.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway 
http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080", "oc apply -f vs.yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')", "curl -s -I \"USDGATEWAY_URL/productpage\"", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com", "oc -n istio-system get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None", "apiVersion: maistra.io/v1alpha1 kind: ServiceMeshControlPlane metadata: namespace: istio-system spec: gateways: openshiftRoute: enabled: false", "apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3", "oc apply -f <VirtualService.yaml>", "spec: hosts:", "spec: http: - match:", "spec: http: - match: - destination:", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: manageNetworkPolicy: false", "apiVersion: networking.istio.io/v1alpha3 kind: Sidecar metadata: name: default namespace: bookinfo spec: egress: - hosts: - \"./*\" - \"istio-system/*\"", "oc apply -f sidecar.yaml", "oc get sidecar", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml", "oc get virtualservices -o yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "echo \"http://USDGATEWAY_URL/productpage\"", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml", "oc get virtualservice reviews -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway-canary namespace: istio-system 1 spec: selector: matchLabels: app: istio-ingressgateway istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: 2 app: istio-ingressgateway istio: ingressgateway sidecar.istio.io/inject: \"true\" spec: containers: - name: istio-proxy image: auto serviceAccountName: istio-ingressgateway --- apiVersion: v1 kind: ServiceAccount 
metadata: name: istio-ingressgateway namespace: istio-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: secret-reader namespace: istio-system rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-secret-reader namespace: istio-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: secret-reader subjects: - kind: ServiceAccount name: istio-ingressgateway --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy 3 metadata: name: gatewayingress namespace: istio-system spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress", "oc scale -n istio-system deployment/<new_gateway_deployment> --replicas <new_number_of_replicas>", "oc scale -n istio-system deployment/<old_gateway_deployment> --replicas <new_number_of_replicas>", "oc label service -n istio-system istio-ingressgateway app.kubernetes.io/managed-by-", "oc patch service -n istio-system istio-ingressgateway --type='json' -p='[{\"op\": \"remove\", \"path\": \"/metadata/ownerReferences\"}]'", "oc patch smcp -n istio-system <smcp_name> --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/gateways/ingress/enabled\", \"value\": false}]'", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: false", "kind: Route apiVersion: route.openshift.io/v1 metadata: name: example-gateway namespace: istio-system 1 spec: host: www.example.com to: kind: Service name: istio-ingressgateway 2 weight: 100 port: targetPort: http2 wildcardPolicy: None", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc project istio-system", "oc get routes", "NAME HOST/PORT SERVICES PORT TERMINATION bookinfo-gateway bookinfo-gateway-yourcompany.com istio-ingressgateway http2 grafana grafana-yourcompany.com grafana <all> reencrypt/Redirect istio-ingressgateway istio-ingress-yourcompany.com istio-ingressgateway 8080 jaeger jaeger-yourcompany.com jaeger-query <all> reencrypt kiali kiali-yourcompany.com kiali 20001 reencrypt/Redirect prometheus prometheus-yourcompany.com prometheus <all> reencrypt/Redirect", "curl \"http://USDGATEWAY_URL/productpage\"", "apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: bookinfo 1 spec: mode: deployment config: | receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: otlp: endpoint: \"tempo-sample-distributor.tracing-system.svc.cluster.local:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp]", "oc logs -n bookinfo -l app.kubernetes.io/name=otel-collector", "kind: ServiceMeshControlPlane apiVersion: maistra.io/v2 metadata: name: basic namespace: istio-system spec: addons: grafana: enabled: false kiali: enabled: true prometheus: enabled: true meshConfig: extensionProviders: - name: otel opentelemetry: port: 4317 service: otel-collector.bookinfo.svc.cluster.local policy: type: Istiod telemetry: type: Istiod version: v2.6", "spec: tracing: type: None", "apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: mesh-default namespace: istio-system spec: tracing: - providers: - name: otel randomSamplingPercentage: 100", "apiVersion: kiali.io/v1alpha1 kind: Kiali spec: external_services: tracing: query_timeout: 30 1 enabled: true in_cluster_url: 'http://tempo-sample-query-frontend.tracing-system.svc.cluster.local:16685' url: '[Tempo query frontend Route 
url]' use_grpc: true 2", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: otel-disable-tls spec: host: \"otel-collector.bookinfo.svc.cluster.local\" trafficPolicy: tls: mode: DISABLE", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: tempo namespace: tracing-system-mtls spec: host: \"*.tracing-system-mtls.svc.cluster.local\" trafficPolicy: tls: mode: DISABLE", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: kiali namespace: istio-system spec: host: kiali.istio-system.svc.cluster.local trafficPolicy: tls: mode: DISABLE", "spec: addons: jaeger: name: distr-tracing-production", "spec: tracing: sampling: 100", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc get route -n istio-system jaeger -o jsonpath='{.spec.host}'", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kiali-monitoring-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view subjects: - kind: ServiceAccount name: kiali-service-account namespace: istio-system", "apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: prometheus: auth: type: bearer use_kiali_token: true query_scope: mesh_id: \"basic-istio-system\" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091", "apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: istio: config_map_name: istio-<smcp-name> istio_sidecar_injector_config_map_name: istio-sidecar-injector-<smcp-name> istiod_deployment_name: istiod-<smcp-name> url_service_version: 'http://istiod-<smcp-name>.istio-system:15014/version' prometheus: auth: token: secret:thanos-querier-web-token:token type: bearer use_kiali_token: false query_scope: mesh_id: \"basic-istio-system\" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 version: v1.65", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: addons: prometheus: enabled: false 1 grafana: enabled: false 2 kiali: name: kiali-user-workload-monitoring meshConfig: extensionProviders: - name: prometheus prometheus: {}", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: user-workload-access namespace: istio-system 1 spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress", "apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics namespace: istio-system 1 spec: selector: 2 matchLabels: app: bookinfo metrics: - providers: - name: prometheus", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: istiod-monitor namespace: istio-system 1 spec: targetLabels: - app selector: matchLabels: istio: pilot endpoints: - port: http-monitoring interval: 30s relabelings: - action: replace replacement: \"basic-istio-system\" 2 targetLabel: mesh_id", "apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: istio-proxies-monitor namespace: istio-system 1 spec: selector: matchExpressions: - key: istio-prometheus-ignore operator: DoesNotExist podMetricsEndpoints: - path: /stats/prometheus interval: 30s relabelings: - action: keep sourceLabels: [__meta_kubernetes_pod_container_name] regex: \"istio-proxy\" - action: 
keep sourceLabels: [__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape] - action: replace regex: (\\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}) replacement: '[USD2]:USD1' sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: replace regex: (\\d+);((([0-9]+?)(\\.|USD)){4}) replacement: USD2:USD1 sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: labeldrop regex: \"__meta_kubernetes_pod_label_(.+)\" - sourceLabels: [__meta_kubernetes_namespace] action: replace targetLabel: namespace - sourceLabels: [__meta_kubernetes_pod_name] action: replace targetLabel: pod_name - action: replace replacement: \"basic-istio-system\" 2 targetLabel: mesh_id", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 proxy: runtime: container: resources: requests: cpu: 600m memory: 50Mi limits: {} runtime: components: pilot: container: resources: requests: cpu: 1000m memory: 1.6Gi limits: {} kiali: container: resources: limits: cpu: \"90m\" memory: \"245Mi\" requests: cpu: \"30m\" memory: \"108Mi\" global.oauthproxy: container: resources: requests: cpu: \"101m\" memory: \"256Mi\" limits: cpu: \"201m\" memory: \"512Mi\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger addons: jaeger: name: MyJaeger install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}", "oc get smcp basic -o yaml", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: version: v2.6 runtime: defaults: container: imagePullPolicy: Always gateways: additionalEgress: egress-green-mesh: enabled: true requestedNetworkView: - green-network service: metadata: labels: federation.maistra.io/egress-for: egress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here egress-blue-mesh: enabled: true requestedNetworkView: - blue-network service: metadata: labels: federation.maistra.io/egress-for: egress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here additionalIngress: ingress-green-mesh: enabled: true service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here ingress-blue-mesh: enabled: true service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here security: trust: domain: red-mesh.local", "spec: cluster: name:", "spec: cluster: network:", "spec: gateways: additionalEgress: <egress_name>:", "spec: gateways: additionalEgress: <egress_name>: enabled:", "spec: gateways: additionalEgress: <egress_name>: requestedNetworkView:", "spec: gateways: additionalEgress: <egress_name>: service: metadata: labels: federation.maistra.io/egress-for:", "spec: gateways: additionalEgress: <egress_name>: service: ports:", "spec: gateways: additionalIngress:", "spec: gateways: additionalIgress: <ingress_name>: enabled:", "spec: gateways: additionalIngress: <ingress_name>: service: type:", "spec: gateways: additionalIngress: <ingress_name>: service: type:", "spec: 
gateways: additionalIngress: <ingress_name>: service: metadata: labels: federation.maistra.io/ingress-for:", "spec: gateways: additionalIngress: <ingress_name>: service: ports:", "spec: gateways: additionalIngress: <ingress_name>: service: ports: nodePort:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: green-mesh namespace: green-mesh-system spec: gateways: additionalIngress: ingress-green-mesh: enabled: true service: type: NodePort metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 nodePort: 30510 name: tls - port: 8188 nodePort: 32359 name: https-discovery", "kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: security: trust: domain: red-mesh.local", "spec: security: trust: domain:", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc project red-mesh-system", "oc edit -n red-mesh-system smcp red-mesh", "oc get smcp -n red-mesh-system", "NAME READY STATUS PROFILES VERSION AGE red-mesh 10/10 ComponentsReady [\"default\"] 2.1.0 4m25s", "kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert", "metadata: name:", "metadata: namespace:", "spec: remote: addresses:", "spec: remote: discoveryPort:", "spec: remote: servicePort:", "spec: gateways: ingress: name:", "spec: gateways: egress: name:", "spec: security: trustDomain:", "spec: security: clientID:", "spec: security: certificateChain: kind: ConfigMap name:", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project red-mesh-system", "kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert", "oc create -n red-mesh-system -f servicemeshpeer.yaml", "oc -n red-mesh-system get servicemeshpeer green-mesh -o yaml", "status: discoveryStatus: active: - pod: istiod-red-mesh-b65457658-9wq5j remotes: - connected: true lastConnected: \"2021-10-05T13:02:25Z\" lastFullSync: \"2021-10-05T13:02:25Z\" source: 10.128.2.149 watch: connected: true lastConnected: \"2021-10-05T13:02:55Z\" lastDisconnectStatus: 503 Service Unavailable lastFullSync: \"2021-10-05T13:05:43Z\"", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: # export ratings.mesh-x-bookinfo as ratings.bookinfo - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: red-ratings alias: namespace: bookinfo name: ratings # export any service in red-mesh-bookinfo namespace with label export-service=true - type: LabelSelector labelSelector: namespace: red-mesh-bookinfo selector: matchLabels: export-service: \"true\" aliases: # export all matching services as if they were in the bookinfo namespace - namespace: \"*\" name: \"*\" alias: namespace: bookinfo", "metadata: name:", 
"metadata: namespace:", "spec: exportRules: - type:", "spec: exportRules: - type: NameSelector nameSelector: namespace: name:", "spec: exportRules: - type: NameSelector nameSelector: alias: namespace: name:", "spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue>", "spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue> aliases: - namespace: name: alias: namespace: name:", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: blue-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: \"*\" name: ratings", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: west-data-center name: \"*\"", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project red-mesh-system", "apiVersion: federation.maistra.io/v1 kind: ExportedServiceSet metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: ratings alias: namespace: bookinfo name: red-ratings - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: reviews", "oc create -n <ControlPlaneNamespace> -f <ExportedServiceSet.yaml>", "oc create -n red-mesh-system -f export-to-green-mesh.yaml", "oc get exportedserviceset <PeerMeshExportedTo> -o yaml", "oc -n red-mesh-system get exportedserviceset green-mesh -o yaml", "status: exportedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.red-mesh-bookinfo.svc.cluster.local name: ratings namespace: red-mesh-bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: reviews.red-mesh-bookinfo.svc.cluster.local name: reviews namespace: red-mesh-bookinfo", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings", "metadata: name:", "metadata: namespace:", "spec: importRules: - type:", "spec: importRules: - type: NameSelector nameSelector: namespace: name:", "spec: importRules: - type: NameSelector importAsLocal:", "spec: importRules: - type: NameSelector nameSelector: namespace: name: alias: namespace: name:", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: blue-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: west-data-center name: \"*\"", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project green-mesh-system", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: 
NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: red-ratings alias: namespace: bookinfo name: ratings", "oc create -n <ControlPlaneNamespace> -f <ImportedServiceSet.yaml>", "oc create -n green-mesh-system -f import-from-red-mesh.yaml", "oc get importedserviceset <PeerMeshImportedInto> -o yaml", "oc -n green-mesh-system get importedserviceset/red-mesh -o yaml", "status: importedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.bookinfo.svc.red-mesh-imports.local name: ratings namespace: bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: \"\" name: \"\" namespace: \"\"", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: true nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings #Locality within which imported services should be associated. locality: region: us-west", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project <smcp-system>", "oc project green-mesh-system", "oc edit -n <smcp-system> -f <ImportedServiceSet.yaml>", "oc edit -n green-mesh-system -f import-from-red-mesh.yaml", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project <smcp-system>", "oc project green-mesh-system", "apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: default-failover namespace: bookinfo spec: host: \"ratings.bookinfo.svc.cluster.local\" trafficPolicy: loadBalancer: localityLbSetting: enabled: true failover: - from: us-east to: us-west outlierDetection: consecutive5xxErrors: 3 interval: 10s baseEjectionTime: 1m", "oc create -n <application namespace> -f <DestinationRule.yaml>", "oc create -n bookinfo -f green-mesh-us-west-DestinationRule.yaml", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway url: file:///opt/filters/openid.wasm sha256: 1ef0c9a92b0420cf25f7fe5d481b231464bc88f486ca3b9c83ed5cc21d2f6210 phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "oc apply -f plugin.yaml", "schemaVersion: 1 name: <your-extension> description: <description> version: 1.0.0 phase: PreAuthZ priority: 100 module: extension.wasm", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: 
workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.1 phase: PostAuthZ priority: 100", "oc apply -f <extension>.yaml", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.2 phase: PostAuthZ priority: 100", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: header-append namespace: istio-system spec: selector: matchLabels: app: httpbin url: oci://quay.io/maistra-dev/header-append-filter:2.2 phase: STATS pluginConfig: first-header: some-value another-header: another-value", "cat <<EOM | oc apply -f - apiVersion: kiali.io/v1alpha1 kind: OSSMConsole metadata: namespace: openshift-operators name: ossmconsole EOM", "delete ossmconsoles <custom_resource_name> -n <custom_resource_namespace>", "for r in USD(oc get ossmconsoles --ignore-not-found=true --all-namespaces -o custom-columns=NS:.metadata.namespace,N:.metadata.name --no-headers | sed 's/ */:/g'); do oc delete ossmconsoles -n USD(echo USDr|cut -d: -f1) USD(echo USDr|cut -d: -f2); done", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> 1 spec: selector: 2 labels: app: <product_page> pluginConfig: <yaml_configuration> url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 phase: AUTHZ priority: 100", "oc apply -f threescale-wasm-auth-bookinfo.yaml", "apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-backend spec: hosts: - su1.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-backend spec: host: su1.3scale.net trafficPolicy: tls: mode: SIMPLE sni: su1.3scale.net", "oc apply -f service-entry-threescale-saas-backend.yml", "oc apply -f destination-rule-threescale-saas-backend.yml", "apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-system spec: hosts: - multitenant.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-system spec: host: multitenant.3scale.net trafficPolicy: tls: mode: SIMPLE sni: multitenant.3scale.net", "oc apply -f service-entry-threescale-saas-system.yml", "oc apply -f <destination-rule-threescale-saas-system.yml>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> spec: pluginConfig: api: v1", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: system: name: <saas_porta> upstream: <object> token: <my_account_token> ttl: 300", "apiVersion: maistra.io/v1 upstream: name: outbound|443||multitenant.3scale.net url: \"https://myaccount-admin.3scale.net/\" timeout: 5000", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: backend: name: backend upstream: <object>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: 
<threescale_wasm_plugin_name> spec: pluginConfig: services: - id: \"2555417834789\" token: service_token authorities: - \"*.app\" - 0.0.0.0 - \"0.0.0.0:8443\" credentials: <object> mapping_rules: <object>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: <array_of_lookup_queries> app_id: <array_of_lookup_queries> app_key: <array_of_lookup_queries>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: - <source_type>: <object> - <source_type>: <object> app_id: - <source_type>: <object> app_key: - <source_type>: <object>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: mapping_rules: - method: GET pattern: / usages: - name: hits delta: 1 - method: GET pattern: /products/ usages: - name: products delta: 1 - method: ANY pattern: /products/{id}/sold usages: - name: sales delta: 1 - name: products delta: 1", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>", "aladdin:opensesame: Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - drop: head: 1 - base64_urlsafe - split: max: 2 app_key: - header: keys: - app_key", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - reverse - glob: - Basic - drop: tail: 1 - base64_urlsafe - split: max: 2 - test: if: length: min: 2 then: - strlen: max: 63 - or: - strlen: min: 1 - drop: tail: 1 - assert: - and: - reverse - or: - strlen: min: 8 - glob: - aladdin - admin", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - filter: path: - envoy.filters.http.jwt_authn - \"0\" keys: - azp - aud ops: - take: head: 1", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - x-jwt-payload ops: - base64_urlsafe - json: - keys: - azp - aud - take: head: 1 ,,,", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 imagePullSecret: <optional_pull_secret_resource> phase: 
AUTHZ priority: 100 selector: labels: app: <product_page> pluginConfig: api: v1 system: name: <system_name> upstream: name: outbound|443||multitenant.3scale.net url: https://istiodevel-admin.3scale.net/ timeout: 5000 token: <token> backend: name: <backend_name> upstream: name: outbound|443||su1.3scale.net url: https://su1.3scale.net/ timeout: 5000 extensions: - no_body services: - id: '2555417834780' authorities: - \"*\" credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key> app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>", "apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance", "3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"", "3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"", "export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}", "export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: 
name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "oc get pods -n istio-system", "oc logs istio-system", "oc get pods -n openshift-operators", "NAME READY STATUS RESTARTS AGE istio-operator-bb49787db-zgr87 1/1 Running 0 15s jaeger-operator-7d5c4f57d8-9xphf 1/1 Running 0 2m42s kiali-operator-f9c8d84f4-7xh2v 1/1 Running 0 64s", "oc get pods -n openshift-operators-redhat", "NAME READY STATUS RESTARTS AGE elasticsearch-operator-d4f59b968-796vq 1/1 Running 0 15s", "oc logs -n openshift-operators <podName>", "oc logs -n openshift-operators istio-operator-bb49787db-zgr87", "oc get pods -n istio-system", "NAME READY STATUS RESTARTS AGE grafana-6776785cfc-6fz7t 2/2 Running 0 102s istio-egressgateway-5f49dd99-l9ppq 1/1 Running 0 103s istio-ingressgateway-6dc885c48-jjd8r 1/1 Running 0 103s istiod-basic-6c9cc55998-wg4zq 1/1 Running 0 2m14s jaeger-6865d5d8bf-zrfss 2/2 Running 0 100s kiali-579799fbb7-8mwc8 1/1 Running 0 46s prometheus-5c579dfb-6qhjk 2/2 Running 0 115s", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.1.3 4m2s", "NAME READY STATUS TEMPLATE VERSION AGE basic-install 10/10 UpdateSuccessful default v1.1 3d16h", "oc describe smcp <smcp-name> -n <controlplane-namespace>", "oc describe smcp basic -n istio-system", "oc get jaeger -n istio-system", "NAME STATUS VERSION STRATEGY STORAGE AGE jaeger Running 1.30.0 allinone memory 15m", "oc get kiali -n istio-system", "NAME AGE kiali 15m", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc get route -n istio-system jaeger -o jsonpath='{.spec.host}'", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc project istio-system", "oc edit smcp <smcp_name>", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: proxy: accessLogging: file: name: /dev/stdout #file name", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace>", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 proxy: runtime: container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi tracing: type: Jaeger gateways: ingress: # istio-ingressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 meshExpansionPorts: [] egress: # istio-egressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 
targetPort: 8080 - name: https port: 443 targetPort: 8443 additionalIngress: some-other-ingress-gateway: {} additionalEgress: some-other-egress-gateway: {} policy: type: Mixer mixer: # only applies if policy.type: Mixer enableChecks: true failOpen: false telemetry: type: Istiod # or Mixer mixer: # only applies if telemetry.type: Mixer, for v1 telemetry sessionAffinity: false batching: maxEntries: 100 maxTime: 1s adapters: kubernetesenv: true stdio: enabled: true outputAsJSON: true addons: grafana: enabled: true install: config: env: {} envSecrets: {} persistence: enabled: true storageClassName: \"\" accessMode: ReadWriteOnce capacity: requests: storage: 5Gi service: ingress: contextPath: /grafana tls: termination: reencrypt kiali: name: kiali enabled: true install: # install kiali CR if not present dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali jaeger: name: jaeger install: storage: type: Elasticsearch # or Memory memory: maxTraces: 100000 elasticsearch: nodeCount: 3 storage: {} redundancyPolicy: SingleRedundancy indexCleaner: {} ingress: {} # jaeger ingress configuration runtime: components: pilot: deployment: replicas: 2 pod: affinity: {} container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi grafana: deployment: {} pod: {} kiali: deployment: {} pod: {}", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: general: logging: componentLevels: {} # misc: error logAsJSON: false validationMessages: true", "logging:", "logging: componentLevels:", "logging: logAsJSON:", "validationMessages:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - YourProfileName", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger", "tracing: sampling:", "tracing: type:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: kiali: name: kiali enabled: true install: dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali", "spec: addons: kiali: name:", "kiali: enabled:", "kiali: install:", "kiali: install: dashboard:", "kiali: install: dashboard: viewOnly:", "kiali: install: dashboard: enableGrafana:", "kiali: install: dashboard: enablePrometheus:", "kiali: install: dashboard: enableTracing:", "kiali: install: service:", "kiali: install: service: metadata:", "kiali: install: service: metadata: annotations:", "kiali: install: service: metadata: labels:", "kiali: install: service: ingress:", "kiali: install: service: ingress: metadata: annotations:", "kiali: install: service: ingress: metadata: labels:", 
"kiali: install: service: ingress: enabled:", "kiali: install: service: ingress: contextPath:", "install: service: ingress: hosts:", "install: service: ingress: tls:", "kiali: install: service: nodePort:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger install: storage: type: Memory", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true", "oc login https://<HOSTNAME>:6443", "oc project istio-system", "oc edit -n openshift-distributed-tracing -f jaeger.yaml", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true", "oc get pods -n openshift-distributed-tracing", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory", "collector: replicas:", "spec: collector: options: {}", "options: collector: num-workers:", "options: collector: queue-size:", "options: kafka: producer: topic: jaeger-spans", "options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092", "options: log-level:", "options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s 
max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3", "options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3", "spec: sampling: options: {} default_strategy: service_strategy:", "default_strategy: type: service_strategy: type:", "default_strategy: param: service_strategy: param:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5", "spec: sampling: options: default_strategy: type: probabilistic param: 1", "spec: storage: type:", "storage: secretname:", "storage: options: {}", "storage: esIndexCleaner: enabled:", "storage: esIndexCleaner: numberOfDays:", "storage: esIndexCleaner: schedule:", "elasticsearch: properties: doNotProvision:", "elasticsearch: properties: name:", "elasticsearch: nodeCount:", "elasticsearch: resources: requests: cpu:", "elasticsearch: resources: requests: memory:", "elasticsearch: resources: limits: cpu:", "elasticsearch: resources: limits: memory:", "elasticsearch: redundancyPolicy:", "elasticsearch: useCertManagement:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy", "es: server-urls:", "es: max-doc-count:", "es: max-num-spans:", "es: max-span-age:", "es: sniffer:", "es: sniffer-tls-enabled:", "es: timeout:", "es: username:", "es: password:", "es: version:", "es: num-replicas:", "es: num-shards:", "es: create-index-templates:", "es: index-prefix:", "es: bulk: actions:", "es: bulk: flush-interval:", "es: bulk: size:", "es: bulk: workers:", "es: tls: ca:", "es: tls: cert:", "es: tls: enabled:", "es: tls: key:", "es: tls: server-name:", "es: token-file:", "es-archive: bulk: actions:", "es-archive: bulk: flush-interval:", "es-archive: bulk: size:", "es-archive: bulk: workers:", "es-archive: create-index-templates:", "es-archive: enabled:", "es-archive: index-prefix:", "es-archive: max-doc-count:", "es-archive: max-num-spans:", "es-archive: max-span-age:", "es-archive: num-replicas:", "es-archive: num-shards:", "es-archive: password:", "es-archive: server-urls:", "es-archive: sniffer:", "es-archive: sniffer-tls-enabled:", "es-archive: timeout:", "es-archive: tls: ca:", "es-archive: tls: cert:", "es-archive: tls: enabled:", "es-archive: tls: key:", "es-archive: tls: server-name:", 
"es-archive: token-file:", "es-archive: username:", "es-archive: version:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true", "spec: query: replicas:", "spec: query: options: {}", "options: log-level:", "options: query: base-path:", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger", "spec: ingester: options: {}", "options: deadlockInterval:", "options: kafka: consumer: topic:", "options: kafka: consumer: brokers:", "options: log-level:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200", "oc delete smmr -n istio-system default", "oc get smcp -n istio-system", "oc delete smcp -n istio-system <name_of_custom_resource>", "oc -n openshift-operators delete ds -lmaistra-version", "oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni clusterrole/ossm-cni clusterrolebinding/ossm-cni", "oc delete clusterrole istio-view istio-edit", "oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view", "oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete", "oc delete crds jaegers.jaegertracing.io", "oc delete cm -n openshift-operators -lmaistra-version", "oc delete sa -n openshift-operators -lmaistra-version", "oc adm must-gather 
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]", "spec: global: pathNormalization: <option>", "{ \"runtime\": { \"symlink_root\": \"/var/lib/istio/envoy/runtime\" } }", "oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap", "oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to \"v1.0\" if you are on the 1.0 stream. 
version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings", "oc get jaeger -n istio-system", "NAME AGE jaeger 3d21h", "oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml", "oc delete jaeger jaeger -n istio-system", "oc create -f /tmp/jaeger-cr.yaml -n istio-system", "rm /tmp/jaeger-cr.yaml", "oc delete -f <jaeger-cr-file>", "oc delete -f jaeger-prod-elasticsearch.yaml", "oc create -f <jaeger-cr-file>", "oc get pods -n jaeger-system -w", "spec: version: v1.1", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.headers[<header>]: \"value\"", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.regex.headers[<header>]: \"<regular expression>\"", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project istio-system", "oc create -n istio-system -f istio-installation.yaml", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic-install 11/11 ComponentsReady [\"default\"] v1.1.18 4m25s", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project <your-project>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system default", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true", "apiVersion: \"authentication.istio.io/v1alpha1\" kind: \"Policy\" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {}", "apiVersion: \"networking.istio.io/v1alpha3\" kind: \"DestinationRule\" metadata: name: \"default\" namespace: <CONTROL_PLANE_NAMESPACE>> spec: 
host: \"*.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3", "oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false", "oc delete secret istio.default", "RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem", "openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt", "openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt", "diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt", "sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem", "openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt", "openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt", "diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt", "head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem", "openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem", "/tmp/pod-cert-chain-workload.pem: OK", "oc delete secret cacerts -n istio-system", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"", "oc apply -f gateway.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080", "oc apply -f vs.yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')", "curl -s -I \"USDGATEWAY_URL/productpage\"", "oc get svc istio-ingressgateway -n istio-system", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o 
jsonpath='{.spec.ports[?(@.name==\"https\")].port}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')", "spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com", "oc -n <control_plane_namespace> get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None", "apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3", "oc apply -f <VirtualService.yaml>", "spec: hosts:", "spec: http: - match:", "spec: http: - match: - destination:", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml", "oc get virtualservices -o yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "echo \"http://USDGATEWAY_URL/productpage\"", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml", "oc get virtualservice reviews -o yaml", "oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators", "oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'", "maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded", "oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0", "deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: 
/home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default", "oc get deployment -n <namespace>", "get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'", "oc apply -n <namespace> -f deployment.yaml", "oc apply -n bookinfo -f deployment-ratings-v1.yaml", "oc get deployment -n <namespace> <deploymentName> -o yaml", "oc get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"", "oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks", "oc edit cm -n istio-system istio", "oc new-project bookinfo", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system -o wide", "NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml", "service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml", "gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml", "destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m", "echo \"http://USDGATEWAY_URL/productpage\"", "oc delete project bookinfo", "oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", 
\"value\":[\"'\"bookinfo\"'\"]}]'", "curl \"http://USDGATEWAY_URL/productpage\"", "export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')", "echo USDJAEGER_URL", "curl \"http://USDGATEWAY_URL/productpage\"", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one", "istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret", "gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1", "mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits:", "spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true", "enabled", "dashboard viewOnlyMode", "ingress enabled", "spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: \"https://grafana-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: \"http://jaeger-query-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one", "tracing: enabled:", "jaeger: template:", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "oc get route -n istio-system external-jaeger", "NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...]", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"external-jaeger\" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. 
Make sure to set the correct namespace here sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: \"55 23 * * *\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true", "apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance", "3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"", "3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"", "export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}", "export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" 
--template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "oc get pods -n istio-system", "oc logs istio-system", "oc delete smmr -n istio-system default", "oc get smcp -n istio-system", "oc delete smcp -n istio-system <name_of_custom_resource>", "oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete -n openshift-operators daemonset/istio-node", "oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni", "oc delete clusterrole istio-view istio-edit", "oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view", "oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete", "oc delete crds jaegers.jaegertracing.io", "oc delete svc admission-controller -n <operator-project>", "oc delete project 
<istio-system-project>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/service_mesh/index
Specialized hardware and driver enablement
Specialized hardware and driver enablement OpenShift Container Platform 4.10 Learn about hardware enablement on OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/specialized_hardware_and_driver_enablement/index
29.2. NVMe over fabrics using FC
29.2. NVMe over fabrics using FC NVMe over Fibre Channel (FC-NVMe) is fully supported in initiator mode when used with certain Broadcom Emulex and Marvell QLogic Fibre Channel adapters. As a system administrator, complete the tasks in the following sections to deploy FC-NVMe: Section 29.2.1, "Configuring the NVMe initiator for Broadcom adapters" Section 29.2.2, "Configuring the NVMe initiator for QLogic adapters" 29.2.1. Configuring the NVMe initiator for Broadcom adapters Use this procedure to configure the NVMe initiator for Broadcom adapters by using the NVMe management command-line interface (nvme-cli) tool. Install the nvme-cli tool: This creates the hostnqn file in the /etc/nvme/ directory. The hostnqn file identifies the NVMe host. To generate a new hostnqn: Create a /etc/modprobe.d/lpfc.conf file with the following content: Rebuild the initramfs image: Reboot the host system to reconfigure the lpfc driver: Find the WWNN and WWPN of the local and remote ports and use the output to find the subsystem NQN: Replace nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 with the traddr. Replace nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 with the host_traddr. Connect to the NVMe target using the nvme-cli tool: Replace nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 with the traddr. Replace nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 with the host_traddr. Replace nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1 with the subnqn. Verify that the NVMe devices are currently connected: Additional resources For more information, see the nvme man page and the nvme-cli GitHub repository. 29.2.2. Configuring the NVMe initiator for QLogic adapters Use this procedure to configure the NVMe initiator for QLogic adapters by using the NVMe management command-line interface (nvme-cli) tool. Install the nvme-cli tool: This creates the hostnqn file in the /etc/nvme/ directory. The hostnqn file identifies the NVMe host. To generate a new hostnqn: Remove and reload the qla2xxx module: Find the WWNN and WWPN of the local and remote ports: Using this host_traddr and traddr, find the subsystem NQN: Replace nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 with the traddr. Replace nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 with the host_traddr. Connect to the NVMe target using the nvme-cli tool: Replace nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 with the traddr. Replace nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 with the host_traddr. Replace nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468 with the subnqn. Verify that the NVMe devices are currently connected: Additional resources For more information, see the nvme man page and the nvme-cli GitHub repository.
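The full shell snippets for these steps appear in the command list that follows; as a quick orientation, a condensed sketch of the Broadcom flow is shown here. The WWNN/WWPN pairs and the subsystem NQN are the example values used in this procedure and must be replaced with the values reported by your own fabric:
# Install the tooling and generate a host NQN if one does not exist yet
yum install nvme-cli
nvme gen-hostnqn
# Enable NVMe in the lpfc driver (FC4 type 3 enables both FCP and NVMe), rebuild the initramfs, and reboot
echo "options lpfc lpfc_enable_fc4_type=3" > /etc/modprobe.d/lpfc.conf
dracut --force
systemctl reboot
# Read the local and remote port addresses, then discover and connect to the subsystem
cat /sys/class/scsi_host/host*/nvme_info
nvme discover --transport fc --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5
nvme connect --transport fc --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 -n nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1
# Verify that the namespace is visible
nvme list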
[ "yum install nvme-cli", "nvme gen-hostnqn", "options lpfc lpfc_enable_fc4_type=3", "dracut --force", "systemctl reboot", "cat /sys/class/scsi_host/host*/nvme_info NVME Initiator Enabled XRI Dist lpfc0 Total 6144 IO 5894 ELS 250 NVME LPORT lpfc0 WWPN x10000090fae0b5f5 WWNN x20000090fae0b5f5 DID x010f00 ONLINE NVME RPORT WWPN x204700a098cbcac6 WWNN x204600a098cbcac6 DID x01050e TARGET DISCSRVC ONLINE NVME Statistics LS: Xmt 000000000e Cmpl 000000000e Abort 00000000 LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000 Total FCP Cmpl 00000000000008ea Issue 00000000000008ec OutIO 0000000000000002 abort 00000000 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000 FCP CMPL: xb 00000000 Err 00000000", "nvme discover --transport fc \\ --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 \\ --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 Discovery Log Number of Records 2, Generation counter 49530 =====Discovery Log Entry 0====== trtype: fc adrfam: fibre-channel subtype: nvme subsystem treq: not specified portid: 0 trsvcid: none subnqn: nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1 traddr: nn-0x204600a098cbcac6:pn-0x204700a098cbcac6", "nvme connect --transport fc --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 -n nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1", "nvme list Node SN Model Namespace Usage Format FW Rev ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- -------- /dev/nvme0n1 80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller 1 107.37 GB / 107.37 GB 4 KiB + 0 B FFFFFFFF # lsblk |grep nvme nvme0n1 259:0 0 100G 0 disk", "yum install nvme-cli", "nvme gen-hostnqn", "rmmod qla2xxx # modprobe qla2xxx", "dmesg |grep traddr [ 6.139862] qla2xxx [0000:04:00.0]-ffff:0: register_localport: host-traddr=nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 on portID:10700 [ 6.241762] qla2xxx [0000:04:00.0]-2102:0: qla_nvme_register_remote: traddr=nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 PortID:01050d", "nvme discover --transport fc --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 --host-traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 Discovery Log Number of Records 2, Generation counter 49530 =====Discovery Log Entry 0====== trtype: fc adrfam: fibre-channel subtype: nvme subsystem treq: not specified portid: 0 trsvcid: none subnqn: nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468 traddr: nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6", "nvme connect --transport fc --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 --host_traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 -n nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468", "nvme list Node SN Model Namespace Usage Format FW Rev ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- -------- /dev/nvme0n1 80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller 1 107.37 GB / 107.37 GB 4 KiB + 0 B FFFFFFFF # lsblk |grep nvme nvme0n1 259:0 0 100G 0 disk" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/nvme-over-fabrics-using-fc
Chapter 3. SCAP content in Satellite
Chapter 3. SCAP content in Satellite SCAP content is a SCAP data-stream file that contains implementation of compliance, configuration, or security baselines. A single data stream usually includes multiple XCCDF profiles. An XCCDF profile defines an industry standard or custom security standard against which you can evaluate compliance of host configuration in Satellite, such as Protection Profile for General Purpose Operating Systems (OSPP), Health Insurance Portability and Accountability Act (HIPAA), and PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 9. You can adapt existing XCCDF profiles according to your requirements using tailoring files . In Satellite, you use an XCCDF profile from SCAP content and, eventually, a tailoring file, to define a compliance policy . Satellite includes default SCAP contents from SCAP Security Guide provided by the OpenSCAP project . For more information on how to download, deploy, modify, and create your own content, see: Red Hat Enterprise Linux 9 Security hardening Red Hat Enterprise Linux 8 Security hardening Red Hat Enterprise Linux 7 Security Guide Red Hat Enterprise Linux 6 Security Guide 3.1. Supported SCAP versions Satellite supports content of SCAP versions 1.2 and 1.3.
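Before building a compliance policy around a particular XCCDF profile, it can help to see which profiles a given data stream actually ships. The following is a hedged example using the oscap utility; the file path assumes the layout installed by the scap-security-guide package on RHEL 9 and may differ for custom or downloaded content.
# List the XCCDF profiles contained in a SCAP data stream. Adjust the path
# to the data stream you intend to upload or assign in Satellite.
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
# The output includes profile IDs such as
# xccdf_org.ssgproject.content_profile_ospp or xccdf_org.ssgproject.content_profile_hipaa,
# which correspond to baselines like OSPP and HIPAA mentioned above.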
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_security_compliance/scap_content_in_satellite_security-compliance
Chapter 4. Alerts
Chapter 4. Alerts 4.1. Setting up alerts For internal mode clusters, various alerts related to the storage metrics services, storage cluster, disk devices, cluster health, cluster capacity, and so on are displayed in the Block and File dashboard and the Object dashboard. These alerts are not available for external mode. Note It might take a few minutes for alerts to be shown in the alert panel, because only firing alerts are visible in this panel. You can also view alerts with additional details and customize the display of alerts in OpenShift Container Platform. For more information, see Managing alerts .
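For reference, the alerts shown in the dashboards are backed by Prometheus alerting rules. The following is a hedged example of inspecting them from the command line; it assumes an internal mode deployment in the default openshift-storage namespace, which may differ in your environment.
# List the PrometheusRule resources that define the storage alerts.
oc get prometheusrules -n openshift-storage
# Print only the alert names defined in those rules.
oc get prometheusrules -n openshift-storage -o yaml | grep "alert:"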
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/monitoring_openshift_data_foundation/alerts
Chapter 28. Defining Password Policies
Chapter 28. Defining Password Policies This chapter describes what password policies in Identity Management (IdM) are and how to manage them. 28.1. What Are Password Policies and Why Are They Useful A password policy is a set of rules that passwords must meet. For example, a password policy can define minimum password length and maximum password lifetime. All users affected by such a policy are required to set a sufficiently long password and change it frequently enough. Password policies help reduce the risk of someone discovering and misusing a user's password.
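As an illustration of the rules such a policy can enforce, the global policy can be adjusted from the command line. The following is a hedged sketch: the option names follow the standard ipa pwpolicy-mod interface, but verify them with ipa pwpolicy-mod --help on your IdM version before relying on them.
# Require passwords of at least 12 characters and force a change every 90 days.
ipa pwpolicy-mod --minlength=12 --maxlife=90
# Review the resulting global password policy.
ipa pwpolicy-show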
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/pwd-policies
Chapter 11. Preparing a RHEL installation on 64-bit IBM Z
Chapter 11. Preparing a RHEL installation on 64-bit IBM Z This section describes how to install Red Hat Enterprise Linux on the 64-bit IBM Z architecture. 11.1. Planning for installation on 64-bit IBM Z Red Hat Enterprise Linux 9 runs on IBM z14 or IBM LinuxONE II systems, or later. The installation process assumes that you are familiar with the 64-bit IBM Z and can set up logical partitions (LPARs) and z/VM guest virtual machines. For installation of Red Hat Enterprise Linux on 64-bit IBM Z, Red Hat supports Direct Access Storage Device (DASD), SCSI disk devices attached over Fiber Channel Protocol (FCP), and virtio-blk and virtio-scsi devices. When using FCP devices, Red Hat recommends using them in multipath configuration for better reliability. Important DASDs are disks that allow a maximum of three partitions per device. For example, dasda can have partitions dasda1 , dasda2 , and dasda3 . Pre-installation decisions Whether the operating system is to be run on an LPAR, KVM, or as a z/VM guest operating system. Network configuration. Red Hat Enterprise Linux 9 for 64-bit IBM Z supports the following network devices: Real and virtual Open Systems Adapter (OSA) Real and virtual HiperSockets LAN channel station (LCS) for real OSA virtio-net devices RDMA over Converged Ethernet (RoCE) Ensure you select machine type as ESA for your z/VM VMs, because selecting any other machine types might prevent RHEL from installing. See the IBM documentation . Note When initializing swap space on a Fixed Block Architecture (FBA) DASD using the SWAPGEN utility, the FBAPART option must be used. Additional resources For additional information about system requirements, see RHEL Technology Capabilities and Limits For additional information about 64-bit IBM Z, see IBM documentation . For additional information about using secure boot with Linux on IBM Z, see Secure boot for Linux on IBM Z . For installation instructions on IBM Power Servers, refer to IBM installation documentation . To see if your system is supported for installing RHEL, refer to https://catalog.redhat.com . 11.2. Boot media compatibility for IBM Z servers The following table provides detailed information about the supported boot media options for installing Red Hat Enterprise Linux (RHEL) on 64-bit IBM Z servers. It outlines the compatibility of each boot medium with different system types and indicates whether the zipl boot loader is used. This information helps you determine the most suitable boot medium for your specific environment. System type / Boot media Uses zipl boot loader z/VM KVM LPAR z/VM Reader No Yes N/A N/A SE or HMC (remote SFTP, FTPS, FTP server, DVD) No N/A N/A Yes DASD Yes Yes Yes Yes FCP SCSI LUNs Yes Yes Yes Yes FCP SCSI DVD Yes Yes Yes Yes N/A indicates that the boot medium is not applicable for the specified system type. 11.3. Supported environments and components for IBM Z servers The following tables provide information about the supported environments, network devices, machine types, and storage types for different system types when installing Red Hat Enterprise Linux (RHEL) on 64-bit IBM Z servers. Use these tables to identify the compatibility of various components with your specific system configuration. Table 11.1. 
Network device compatibility for system types Network device z/VM KVM LPAR Open Systems Adapter (OSA) Yes N/A Yes HiperSockets Yes N/A Yes LAN channel station (LCS) Yes N/A Yes virtio-net N/A Yes N/A RDMA over Converged Ethernet (RoCE) Yes Yes Yes N/A indicates that the component is not applicable for the specified system type. Table 11.2. Machine type compatibility for system types Machine type z/VM KVM LPAR ESA Yes N/A N/A s390-virtio-ccw N/A Yes N/A N/A indicates that the component is not applicable for the specified system type. Table 11.3. Storage type compatibility for system types Storage type z/VM KVM LPAR DASD Yes Yes Yes FCP SCSI Yes Yes [a] Yes virtio-blk N/A Yes N/A [a] Conditional support based on configuration N/A indicates that the component is not applicable for the specified system type. 11.4. Overview of installation process on 64-bit IBM Z servers You can install Red Hat Enterprise Linux on 64-bit IBM Z interactively or in unattended mode. Installation on 64-bit IBM Z differs from other architectures as it is typically performed over a network, and not from local media. The installation consists of three phases: Booting the installation Connect to the mainframe Customize the boot parameters Perform an initial program load (IPL), or boot from the media containing the installation program Connecting to the installation system From a local machine, connect to the remote 64-bit IBM Z system using SSH, and start the installation program using Virtual Network Computing (VNC) Completing the installation using the RHEL installation program 11.5. Boot media for installing RHEL on 64-bit IBM Z servers After establishing a connection with the mainframe, you need to perform an initial program load (IPL), or boot, from the medium containing the installation program. This document describes the most common methods of installing Red Hat Enterprise Linux on 64-bit IBM Z. In general, any method may be used to boot the Linux installation system, which consists of a kernel ( kernel.img ) and initial RAM disk ( initrd.img ) with parameters in the generic.prm file supplemented by user defined parameters. Additionally, a generic.ins file is loaded which determines file names and memory addresses for the initrd, kernel and generic.prm . The Linux installation system is also called the installation program in this book. You can use the following boot media only if Linux is to run as a guest operating system under z/VM: z/VM reader You can use the following boot media only if Linux is to run in LPAR mode: SE or HMC through a remote SFTP, FTPS or FTP server SE or HMC DVD You can use the following boot media for both z/VM and LPAR: DASD SCSI disk device that is attached through an FCP channel If you use DASD or an FCP-attached SCSI disk device as boot media, you must have a configured zipl boot loader. 11.6. Customizing boot parameters Before the installation can begin, you must configure some mandatory boot parameters. When installing through z/VM, these parameters must be configured before you boot into the generic.prm file. When installing on an LPAR, the rd.cmdline parameter is set to ask by default, meaning that you will be given a prompt on which you can enter these boot parameters. In both cases, the required parameters are the same. All network configuration can either be specified by using a parameter file, or at the prompt. Installation source An installation source must always be configured. Use the inst.repo option to specify the package source for the installation. 
Network devices Network configuration must be provided if network access will be required during the installation. If you plan to perform an unattended (Kickstart-based) installation by using only local media such as a disk, network configuration can be omitted. ip= Use the ip= option for basic network configuration, and other options as required. rd.znet= Also use the rd.znet= kernel option, which takes a network protocol type, a comma delimited list of sub-channels, and, optionally, comma delimited sysfs parameter and value pairs for qeth devices. This parameter can be specified multiple times to activate multiple network devices. For example: When specifying multiple rd.znet boot options, only the last one is passed on to the kernel command line of the installed system. This does not affect the networking of the system since all network devices configured during installation are properly activated and configured at boot. The qeth device driver assigns the same interface name for Ethernet and Hipersockets devices: enc <device number> . The bus ID is composed of the channel subsystem ID, subchannel set ID, and device number, separated by dots; the device number is the last part of the bus ID, without leading zeroes and dots. For example, the interface name will be enca00 for a device with the bus ID 0.0.0a00 . Storage devices At least one storage device must always be configured for text mode installations. The rd.dasd= option takes a Direct Access Storage Device (DASD) adapter device bus identifier. For multiple DASDs, specify the parameter multiple times, or use a comma separated list of bus IDs. To specify a range of DASDs, specify the first and the last bus ID. For example: The rd.zfcp= option takes a SCSI over FCP (zFCP) adapter device bus identifier, a target world wide port name (WWPN), and an FCP LUN, then activates one path to a SCSI disk. This parameter needs to be specified at least twice to activate multiple paths to the same disk. This parameter can be specified multiple times to activate multiple disks, each with multiple paths. Since 9, a target world wide port name (WWPN) and an FCP LUN have to be provided only if the zFCP device is not configured in NPIV mode or when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter. It provides access to all SCSI devices found in the storage area network attached to the FCP device with the specified bus ID. This parameter needs to be specified at least twice to activate multiple paths to the same disks. Kickstart options If you are using a Kickstart file to perform an automatic installation, you must always specify the location of the Kickstart file using the inst.ks= option. For an unattended, fully automatic Kickstart installation, the inst.cmdline option is also useful. An example customized generic.prm file containing all mandatory parameters look similar to the following example: Example 11.1. Customized generic.prm file Some installation methods also require a file with a mapping of the location of installation data in the file system of the HMC DVD or FTP server and the memory locations where the data is to be copied. The file is typically named generic.ins , and contains file names for the initial RAM disk, kernel image, and parameter file ( generic.prm ) and a memory location for each file. An example generic.ins will look similar to the following example: Example 11.2. 
Sample generic.ins file A valid generic.ins file is provided by Red Hat along with all other files required to boot the installer. Modify this file only if you want to, for example, load a different kernel version than default. Additional resources For a list of all boot options to customize the installation program's behavior, see Boot options reference . 11.7. Parameters and configuration files on 64-bit IBM Z This section contains information about the parameters and configuration files on 64-bit IBM Z. 11.7.1. Required configuration file parameters on 64-bit IBM Z Several parameters are required and must be included in the parameter file. These parameters are also provided in the file generic.prm in directory images/ of the installation DVD. ro Mounts the root file system, which is a RAM disk, read-only. ramdisk_size= size Modifies the memory size reserved for the RAM disk to ensure that the Red Hat Enterprise Linux installation program fits within it. For example: ramdisk_size=40000 . The generic.prm file also contains the additional parameter cio_ignore=all,!condev . This setting speeds up boot and device detection on systems with many devices. The installation program transparently handles the activation of ignored devices. 11.7.2. 64-bit IBM z/VM configuration file Under z/VM, you can use a configuration file on a CMS-formatted disk. The purpose of the CMS configuration file is to save space in the parameter file by moving the parameters that configure the initial network setup, the DASD, and the FCP specification out of the parameter file. Each line of the CMS configuration file contains a single variable and its associated value, in the following shell-style syntax: variable = value . You must also add the CMSDASD and CMSCONFFILE parameters to the parameter file. These parameters point the installation program to the configuration file: CMSDASD= cmsdasd_address Where cmsdasd_address is the device number of a CMS-formatted disk that contains the configuration file. This is usually the CMS user's A disk. For example: CMSDASD=191 CMSCONFFILE= configuration_file Where configuration_file is the name of the configuration file. This value must be specified in lower case. It is specified in a Linux file name format: CMS_file_name . CMS_file_type . The CMS file REDHAT CONF is specified as redhat.conf . The CMS file name and the file type can each be from one to eight characters that follow the CMS conventions. For example: CMSCONFFILE=redhat.conf 11.7.3. Installation network, DASD and FCP parameters on 64-bit IBM Z These parameters can be used to automatically set up the preliminary network, and can be defined in the CMS configuration file. These parameters are the only parameters that can also be used in a CMS configuration file. All other parameters in other sections must be specified in the parameter file. NETTYPE=" type " Where type must be one of the following: qeth , lcs , or ctc . The default is qeth . Choose qeth for: OSA-Express features HiperSockets Virtual connections on z/VM, including VSWITCH and Guest LAN Select ctc for: Channel-to-channel network connections SUBCHANNELS=" device_bus_IDs " Where device_bus_IDs is a comma-separated list of two or three device bus IDs. The IDs must be specified in lowercase. Provides required device bus IDs for the various network interfaces: For example (a sample qeth SUBCHANNEL statement): PORTNO=" portnumber " You can add either PORTNO="0" (to use port 0) or PORTNO="1" (to use port 1 of OSA features with two ports per CHPID). 
LAYER2=" value " Where value can be 0 or 1 . Use LAYER2="0" to operate an OSA or HiperSockets device in layer 3 mode ( NETTYPE="qeth" ). Use LAYER2="1" for layer 2 mode. For virtual network devices under z/VM this setting must match the definition of the GuestLAN or VSWITCH to which the device is coupled. To use network services that operate on layer 2 (the Data Link Layer or its MAC sublayer) such as DHCP, layer 2 mode is a good choice. The qeth device driver default for OSA devices is now layer 2 mode. To continue using the default of layer 3 mode, set LAYER2="0" explicitly. VSWITCH=" value " Where value can be 0 or 1 . Specify VSWITCH="1" when connecting to a z/VM VSWITCH or GuestLAN, or VSWITCH="0" (or nothing at all) when using directly attached real OSA or directly attached real HiperSockets. MACADDR=" MAC_address " If you specify LAYER2="1" and VSWITCH="0" , you can optionally use this parameter to specify a MAC address. Linux requires six colon-separated octets as pairs lower case hex digits - for example, MACADDR=62:a3:18:e7:bc:5f . This is different from the notation used by z/VM. If you specify LAYER2="1" and VSWITCH="1" , you must not specify the MACADDR , because z/VM assigns a unique MAC address to virtual network devices in layer 2 mode. CTCPROT=" value " Where value can be 0 , 1 , or 3 . Specifies the CTC protocol for NETTYPE="ctc" . The default is 0 . HOSTNAME=" string " Where string is the host name of the newly-installed Linux instance. IPADDR=" IP " Where IP is the IP address of the new Linux instance. NETMASK=" netmask " Where netmask is the netmask. The netmask supports the syntax of a prefix integer (from 1 to 32) as specified in IPv4 classless interdomain routing (CIDR). For example, you can specify 24 instead of 255.255.255.0 , or 20 instead of 255.255.240.0 . GATEWAY=" gw " Where gw is the gateway IP address for this network device. MTU=" mtu " Where mtu is the Maximum Transmission Unit (MTU) for this network device. DNS=" server1:server2:additional_server_terms:serverN " Where " server1:server2:additional_server_terms:serverN " is a list of DNS servers, separated by colons. For example: SEARCHDNS=" domain1:domain2:additional_dns_terms:domainN " Where " domain1:domain2:additional_dns_terms:domainN " is a list of the search domains, separated by colons. For example: You only need to specify SEARCHDNS= if you specify the DNS= parameter. DASD= Defines the DASD or range of DASDs to configure for the installation. The installation program supports a comma-separated list of device bus IDs, or ranges of device bus IDs with the optional attributes ro , diag , erplog , and failfast . Optionally, you can abbreviate device bus IDs to device numbers with leading zeros stripped. Any optional attributes should be separated by colons and enclosed in parentheses. Optional attributes follow a device bus ID or a range of device bus IDs. The only supported global option is autodetect . This does not support the specification of non-existent DASDs to reserve kernel device names for later addition of DASDs. Use persistent DASD device names such as /dev/disk/by-path/name to enable transparent addition of disks later. Other global options such as probeonly , nopav , or nofcx are not supported by the installation program. Only specify those DASDs that need to be installed on your system. All unformatted DASDs specified here must be formatted after a confirmation later on in the installation program. 
Add any data DASDs that are not needed for the root file system or the /boot partition after installation. For example: FCP_ n =" device_bus_ID [ WWPN FCP_LUN ]" For FCP-only environments, remove the DASD= option from the CMS configuration file to indicate no DASD is present. Where: n is typically an integer value (for example FCP_1 or FCP_2 ) but could be any string with alphabetic or numeric characters or underscores. device_bus_ID specifies the device bus ID of the FCP device representing the host bus adapter (HBA) (for example 0.0.fc00 for device fc00). WWPN is the world wide port name used for routing (often in conjunction with multipathing) and is as a 16-digit hex value (for example 0x50050763050b073d ). FCP_LUN refers to the storage logical unit identifier and is specified as a 16-digit hexadecimal value padded with zeroes to the right (for example 0x4020400100000000 ). Note A target world wide port name (WWPN) and an FCP_LUN have to be provided if the zFCP device is not configured in NPIV mode, when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter or when installing RHEL-9.0 or older releases. Otherwise only the device_bus_ID value is mandatory. These variables can be used on systems with FCP devices to activate FCP LUNs such as SCSI disks. Additional FCP LUNs can be activated during the installation interactively or by means of a Kickstart file. An example value looks similar to the following: Each of the values used in the FCP parameters (for example FCP_1 or FCP_2 ) are site-specific and are normally supplied by the FCP storage administrator. 11.7.4. Miscellaneous parameters on 64-bit IBM Z The following parameters can be defined in a parameter file but do not work in a CMS configuration file. rd.live.check Turns on testing of an ISO-based installation source; for example, when using inst.repo= with an ISO on local disk or mounted with NFS. inst.nompath Disables support for multipath devices. inst.proxy=[ protocol ://][ username [: password ]@] host [: port ] Specify a proxy to use with installation over HTTP, HTTPS or FTP. inst.rescue Boot into a rescue system running from a RAM disk that can be used to fix and restore an installed system. inst.stage2= URL Specifies a path to a tree containing install.img , not to the install.img directly. Otherwise, follows the same syntax as inst.repo= . If inst.stage2 is specified, it typically takes precedence over other methods of finding install.img . However, if Anaconda finds install.img on local media, the inst.stage2 URL will be ignored. If inst.stage2 is not specified and install.img cannot be found locally, Anaconda looks to the location given by inst.repo= or method= . If only inst.stage2= is given without inst.repo= or method= , Anaconda uses whatever repos the installed system would have enabled by default for installation. Use the option multiple times to specify multiple HTTP, HTTPS or FTP sources. The HTTP, HTTPS or FTP paths are then tried sequentially until one succeeds: inst.syslog= IP/hostname [: port ] Sends log messages to a remote syslog server. The boot parameters described here are the most useful for installations and trouble shooting on 64-bit IBM Z, but only a subset of those that influence the installation program. 11.7.5. Sample parameter file and CMS configuration file on 64-bit IBM Z To change the parameter file, begin by extending the shipped generic.prm file. 
Example of generic.prm file: Example of redhat.conf file configuring a QETH network device (pointed to by CMSCONFFILE in generic.prm ): 11.7.6. Using parameter and configuration files on 64-bit IBM Z The 64-bit IBM Z architecture can use a customized parameter file to pass boot parameters to the kernel and the installation program. You need to change the parameter file if you want to: Install unattended with Kickstart. Choose non-default installation settings that are not accessible through the installation program's interactive user interface, such as rescue mode. The parameter file can be used to set up networking non-interactively before the installation program ( Anaconda ) starts. The kernel parameter file is limited to 3754 bytes plus an end-of-line character. The parameter file can be variable or fixed record format. Fixed record format increases the file size by padding each line up to the record length. Should you encounter problems with the installation program not recognizing all specified parameters in LPAR environments, you can try to put all parameters in one single line or start and end each line with a space character. The parameter file contains kernel parameters, such as ro , and parameters for the installation process, such as vncpassword=test or vnc . 11.8. Preparing an installation in a z/VM guest virtual machine Use the x3270 or c3270 terminal emulator to log in to z/VM from other Linux systems, or use the IBM 3270 terminal emulator on the 64-bit IBM Z Hardware Management Console (HMC). If you are running the Microsoft Windows operating system, there are several options available, which can be found through an internet search. A free native Windows port of c3270 called wc3270 also exists. Ensure you select machine type as ESA for your z/VM VMs, because selecting any other machine type might prevent RHEL from installing. See the IBM documentation . Procedure Log on to the z/VM guest virtual machine chosen for the Linux installation. Optional: If your 3270 connection is interrupted and you cannot log in again because the session is still active, you can replace the old session with a new one by entering the following command on the z/VM logon screen: Replace user with the name of the z/VM guest virtual machine. Depending on whether an external security manager, for example RACF, is used, the logon command might vary. If you are not already running CMS (single-user operating system shipped with z/VM) in your guest, boot it now by entering the command: Be sure not to use CMS disks such as your A disk (often device number 0191) as installation targets. To find out which disks are in use by CMS, use the following query: You can use the following CP (z/VM Control Program, which is the z/VM hypervisor) query commands to find out about the device configuration of your z/VM guest virtual machine: Query the available main memory, which is called storage in 64-bit IBM Z terminology. Your guest should have at least 1 GiB of main memory. Query available network devices by type: osa OSA - CHPID type OSD, real or virtual (VSWITCH or GuestLAN), both in QDIO mode hsi HiperSockets - CHPID type IQD, real or virtual (GuestLAN type Hipers) lcs LCS - CHPID type OSE For example, to query all of the network device types mentioned above, run: Query available DASDs. Only those that are flagged RW for read-write mode can be used as installation targets: Query available FCP devices (vHBAs):
[ "rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno= <number>", "rd.dasd=0.0.0200 rd.dasd=0.0.0202(ro),0.0.0203(ro:failfast),0.0.0205-0.0.0207", "rd.zfcp=0.0.4000,0x5005076300C213e9,0x5022000000000000 rd.zfcp=0.0.4000", "ro ramdisk_size=40000 cio_ignore=all,!condev inst.repo=http://example.com/path/to/repository rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno=0,portname=foo ip=192.168.17.115::192.168.17.254:24:foobar.systemz.example.com:enc600:none nameserver=192.168.17.1 rd.dasd=0.0.0200 rd.dasd=0.0.0202 rd.zfcp=0.0.4000,0x5005076300c213e9,0x5022000000000000 rd.zfcp=0.0.5000,0x5005076300dab3e9,0x5022000000000000 inst.ks=http://example.com/path/to/kickstart", "images/kernel.img 0x00000000 images/initrd.img 0x02000000 images/genericdvd.prm 0x00010480 images/initrd.addrsize 0x00010408", "qeth: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id , data_device_bus_id \" lcs or ctc: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id \"", "SUBCHANNELS=\"0.0.f5f0,0.0.f5f1,0.0.f5f2\"", "DNS=\"10.1.2.3:10.3.2.1\"", "SEARCHDNS=\"subdomain.domain:domain\"", "DASD=\"eb1c,0.0.a000-0.0.a003,eb10-eb14(diag),0.0.ab1c(ro:diag)\"", "FCP_ n =\" device_bus_ID [ WWPN FCP_LUN ]\"", "FCP_1=\"0.0.fc00 0x50050763050b073d 0x4020400100000000\" FCP_2=\"0.0.4000\"", "inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/", "ro ramdisk_size=40000 cio_ignore=all,!condev CMSDASD=\"191\" CMSCONFFILE=\"redhat.conf\" inst.vnc inst.repo=http://example.com/path/to/dvd-contents", "NETTYPE=\"qeth\" SUBCHANNELS=\"0.0.0600,0.0.0601,0.0.0602\" PORTNAME=\"FOOBAR\" PORTNO=\"0\" LAYER2=\"1\" MACADDR=\"02:00:be:3a:01:f3\" HOSTNAME=\"foobar.systemz.example.com\" IPADDR=\"192.168.17.115\" NETMASK=\"255.255.255.0\" GATEWAY=\"192.168.17.254\" DNS=\"192.168.17.1\" SEARCHDNS=\"systemz.example.com:example.com\" DASD=\"200-203\"", "logon user here", "cp ipl cms", "query disk", "cp query virtual storage", "cp query virtual osa", "cp query virtual dasd", "cp query virtual fcp" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/preparing-a-rhel-installation-on-64-bit-ibm-z_rhel-installer
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/making-open-source-more-inclusive
Chapter 8. Removing RHEL 9 content
Chapter 8. Removing RHEL 9 content In the following sections, learn how to remove content in Red Hat Enterprise Linux 9 by using DNF . 8.1. Removing installed packages You can use DNF to remove a single package or multiple packages installed on your system. If any of the packages you choose to remove have unused dependencies, DNF uninstalls these dependencies as well. Procedure Remove particular packages: 8.2. Removing package groups Package groups bundle multiple packages. You can use package groups to remove all packages assigned to a group in a single step. Procedure Remove package groups by the group name or group ID: 8.3. Removing installed modular content When removing installed modular content, you can remove packages from either a selected profile or the whole stream . Important DNF tries to remove all packages with a name corresponding to the packages installed with a profile or a stream, including their dependent packages. Always check the list of packages to be removed before you proceed, especially if you have enabled custom repositories on your system. 8.3.1. Removing packages from an installed profile When you remove packages installed with a profile, all packages with a name corresponding to the packages installed by the profile are removed. This includes their dependencies, with the exception of packages required by a different profile. To remove all packages from a selected stream, complete the steps in Removing all packages from a module stream . Prerequisites The selected profile is installed by using the dnf module install <module-name:stream/profile> command or as a default profile by using the dnf install <module-name:stream> command. Procedure Uninstall packages that belong to the selected profile: For example, to remove packages and their dependencies from the development profile of the nodejs:18 module stream, enter: Warning Check the list of packages under Removing: and Removing unused dependencies: before you proceed with the removal transaction. This transaction removes requested packages, unused dependencies, and dependent packages, which might result in system failure. Alternatively, uninstall packages from all installed profiles within a stream: Note These operations will not remove packages from the stream that do not belong to any of the profiles. Verification Verify that the correct profile was removed: All profiles except development are currently installed ( [i] ). Additional resources Modular dependencies and stream changes 8.3.2. Removing all packages from a module stream When you remove packages installed with a module stream, all packages with a name corresponding to the packages installed by the stream are removed. This includes their dependencies, with the exception of packages required by other modules. To remove only packages from a selected profile, complete the steps in Removing packages from an installed profile . Prerequisites The module stream is enabled and at least some packages from the stream have been installed. Procedure Remove all packages from a selected stream: For example, to remove all packages from the nodejs:18 module stream, enter: Warning Check the list of packages under Removing: and Removing unused dependencies: before you proceed with the removal transaction. This transaction removes requested packages, unused dependencies, and dependent packages, which might result in system failure.
Optional: Reset or disable the stream by entering one of the following commands: Verification Verify that all packages from the selected module stream were removed: Additional resources Modular dependencies and stream changes Resetting module streams Disabling all streams of a module 8.4. Additional resources Commands for removing content in RHEL 9
[ "dnf remove <package_name_1> <package_name_2>", "dnf group remove <group_name> <group_id>", "dnf module remove <module-name:stream/profile>", "dnf module remove nodejs:18/development (...) Dependencies resolved. ======================================================================== Package Architecture Version Repository Size ======================================================================== Removing: nodejs-devel x86_64 1:18.7.0-1.module+el9.1.0+16284+4fdefb2f @rhel-AppStream 950 k Removing unused dependencies: brotli x86_64 1.0.9-6.el9 @rhel-AppStream 754 k brotli-devel x86_64 1.0.9-6.el9 @rhel-AppStream 55 k Disabling module profiles: nodejs/development Transaction Summary ======================================================================== Remove 26 Packages Freed space: 8.3 M Is this ok [y/N]: y", "dnf module remove module-name:stream", "dnf module info nodejs Name : nodejs Stream : 18 [e] [a] Version : 9010020221009220316 Context : rhel9 Architecture : x86_64 Profiles : common [d] [i], development, minimal [i], s2i [i] Default profiles : common Repo : rhel-AppStream Summary : Javascript runtime Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled, [a]ctive", "dnf module remove --all <module_name:stream>", "dnf module remove --all nodejs:18 (...) Dependencies resolved. =================================================================================== Package Architecture Version Repository Size =================================================================================== Removing: nodejs x86_64 1:18.10.0-3.module+el9.1.0+16866+0fab0697 @rhel-AppStream 43 M nodejs-devel x86_64 1:18.10.0-3.module+el9.1.0+16866+0fab0697 @rhel-AppStream 953 k nodejs-docs noarch 1:18.10.0-3.module+el9.1.0+16866+0fab0697 @rhel-AppStream 78 M nodejs-full-i18n x86_64 1:18.10.0-3.module+el9.1.0+16866+0fab0697 @rhel-AppStream 29 M nodejs-nodemon noarch 2.0.15-1.module+el9.1.0+15718+e52ec601 @rhel-AppStream 2.0 M nodejs-packaging noarch 2021.06-4.module+el9.1.0+15718+e52ec601 @rhel-AppStream 41 k npm x86_64 1:8.19.2-1.18.10.0.3.module+el9.1.0+16866+0fab0697 @rhel-AppStream 6.9 M Removing unused dependencies: brotli x86_64 1.0.9-6.el9 @rhel-AppStream 754 k brotli-devel x86_64 1.0.9-6.el9 @rhel-AppStream 55 k Disabling module profiles: nodejs/common nodejs/development nodejs/minimal nodejs/s2i Transaction Summary =================================================================================== Remove 31 Packages Freed space: 167 M Is this ok [y/N]: y", "dnf module reset <module_name> dnf module disable <module_name>", "dnf module info nodejs Name : nodejs Stream : 18 [e] [a] Version : 9010020221009220316 Context : rhel9 Architecture : x86_64 Profiles : common [d], development, minimal, s2i Default profiles : common Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled, [a]ctive" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_software_with_the_dnf_tool/assembly_removing-rhel-9-content_managing-software-with-the-dnf-tool
Chapter 2. Running Red Hat Quay in debug mode
Chapter 2. Running Red Hat Quay in debug mode Red Hat recommends gathering your debugging information when opening a support case. Running Red Hat Quay in debug mode provides verbose logging to help administrators find more information about various issues. Enabling debug mode can speed up the process to reproduce errors and validate a solution for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Additionally, it helps Red Hat Support perform a root cause analysis. 2.1. Red Hat Quay debug variables Red Hat Quay offers two configuration fields that can be added to your config.yaml file to help diagnose issues or help obtain log information. Table 2.1. Debug configuration variables Variable Type Description DEBUGLOG Boolean Whether to enable or disable debug logs. Must be true or false . USERS_DEBUG Integer. Either 0 or 1 . Used to debug LDAP operations in clear text, including passwords. Must be used with DEBUGLOG=TRUE . Important Setting USERS_DEBUG=1 exposes credentials in clear text. This variable should be removed from the Red Hat Quay deployment after debugging. The log file that is generated with this environment variable should be scrutinized, and passwords should be removed before sending to other users. Use with caution. 2.2. Running a standalone Red Hat Quay deployment in debug mode Running Red Hat Quay in debug mode provides verbose logging to help administrators find more information about various issues. Enabling debug mode can speed up the process to reproduce errors and validate a solution. Use the following procedure to run a standalone deployment of Red Hat Quay in debug mode. Procedure Enter the following command to run your standalone Red Hat Quay deployment in debug mode: $ podman run -p 443:8443 -p 80:8080 -e DEBUGLOG=true -v /config:/conf/stack -v /storage:/datastorage -d {productrepo}/{quayimage}:{productminv} To view the debug logs, enter the following command: $ podman logs <quay_container_name> 2.3. Running an LDAP Red Hat Quay deployment in debug mode Use the following procedure to run an LDAP deployment of Red Hat Quay in debug mode. Procedure Enter the following command to run your LDAP Red Hat Quay deployment in debug mode: $ podman run -p 443:8443 -p 80:8080 -e DEBUGLOG=true -e USERS_DEBUG=1 -v /config:/conf/stack -v /storage:/datastorage -d {productrepo}/{quayimage}:{productminv} To view the debug logs, enter the following command: $ podman logs <quay_container_name> Important Setting USERS_DEBUG=1 exposes credentials in clear text. This variable should be removed from the Red Hat Quay deployment after debugging. The log file that is generated with this environment variable should be scrutinized, and passwords should be removed before sending to other users. Use with caution. 2.4. Running the Red Hat Quay Operator in debug mode Use the following procedure to run the Red Hat Quay Operator in debug mode. Procedure Enter the following command to edit the QuayRegistry custom resource definition: $ oc edit quayregistry <quay_registry_name> -n <quay_namespace> Update the QuayRegistry to add the following parameters: spec: - kind: quay managed: true overrides: env: - name: DEBUGLOG value: "true" After the Red Hat Quay Operator has restarted with debugging enabled, try pulling an image from the registry. If it is still slow, dump all logs from all Quay pods to a file, and check the files for more information.
[ "podman run -p 443:8443 -p 80:8080 -e DEBUGLOG=true -v /config:/conf/stack -v /storage:/datastorage -d {productrepo}/{quayimage}:{productminv}", "podman logs <quay_container_name>", "podman run -p 443:8443 -p 80:8080 -e DEBUGLOG=true -e USERS_DEBUG=1 -v /config:/conf/stack -v /storage:/datastorage -d {productrepo}/{quayimage}:{productminv}", "podman logs <quay_container_name>", "oc edit quayregistry <quay_registry_name> -n <quay_namespace>", "spec: - kind: quay managed: true overrides: env: - name: DEBUGLOG value: \"true\"" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/troubleshooting_red_hat_quay/running-quay-debug-mode-intro
Chapter 4. Network considerations
Chapter 4. Network considerations Review the strategies for redirecting your application network traffic after migration. 4.1. DNS considerations The DNS domain of the target cluster is different from the domain of the source cluster. By default, applications get FQDNs of the target cluster after migration. To preserve the source DNS domain of migrated applications, select one of the two options described below. 4.1.1. Isolating the DNS domain of the target cluster from the clients You can allow the clients' requests sent to the DNS domain of the source cluster to reach the DNS domain of the target cluster without exposing the target cluster to the clients. Procedure Place an exterior network component, such as an application load balancer or a reverse proxy, between the clients and the target cluster. Update the application FQDN on the source cluster in the DNS server to return the IP address of the exterior network component. Configure the network component to send requests received for the application in the source domain to the load balancer in the target cluster domain. Create a wildcard DNS record for the *.apps.source.example.com domain that points to the IP address of the load balancer of the source cluster. Create a DNS record for each application that points to the IP address of the exterior network component in front of the target cluster. A specific DNS record has higher priority than a wildcard record, so no conflict arises when the application FQDN is resolved. Note The exterior network component must terminate all secure TLS connections. If the connections pass through to the target cluster load balancer, the FQDN of the target application is exposed to the client and certificate errors occur. The applications must not return links referencing the target cluster domain to the clients. Otherwise, parts of the application might not load or work properly. 4.1.2. Setting up the target cluster to accept the source DNS domain You can set up the target cluster to accept requests for a migrated application in the DNS domain of the source cluster. Procedure For both non-secure HTTP access and secure HTTPS access, perform the following steps: Create a route in the target cluster's project that is configured to accept requests addressed to the application's FQDN in the source cluster: $ oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> \ -n <app1-namespace> With this new route in place, the server accepts any request for that FQDN and sends it to the corresponding application pods. In addition, when you migrate the application, another route is created in the target cluster domain. Requests reach the migrated application using either of these hostnames. Create a DNS record with your DNS provider that points the application's FQDN in the source cluster to the IP address of the default load balancer of the target cluster. This will redirect traffic away from your source cluster to your target cluster. The FQDN of the application resolves to the load balancer of the target cluster. The default Ingress Controller router accepts requests for that FQDN because a route for that hostname is exposed. For secure HTTPS access, perform the following additional step: Replace the x509 certificate of the default Ingress Controller created during the installation process with a custom certificate. Configure this certificate to include the wildcard DNS domains for both the source and target clusters in the subjectAltName field.
The new certificate is valid for securing connections made using either DNS domain. Additional resources See Replacing the default ingress certificate for more information. 4.2. Network traffic redirection strategies After a successful migration, you must redirect network traffic of your stateless applications from the source cluster to the target cluster. The strategies for redirecting network traffic are based on the following assumptions: The application pods are running on both the source and target clusters. Each application has a route that contains the source cluster hostname. The route with the source cluster hostname contains a CA certificate. For HTTPS, the target router CA certificate contains a Subject Alternative Name for the wildcard DNS record of the source cluster. Consider the following strategies and select the one that meets your objectives. Redirecting all network traffic for all applications at the same time Change the wildcard DNS record of the source cluster to point to the target cluster router's virtual IP address (VIP). This strategy is suitable for simple applications or small migrations. Redirecting network traffic for individual applications Create a DNS record for each application with the source cluster hostname pointing to the target cluster router's VIP. This DNS record takes precedence over the source cluster wildcard DNS record. Redirecting network traffic gradually for individual applications Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route a percentage of the traffic to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Gradually increase the percentage of traffic that you route to the target cluster router's VIP until all the network traffic is redirected. User-based redirection of traffic for individual applications Using this strategy, you can filter TCP/IP headers of user requests to redirect network traffic for predefined groups of users. This allows you to test the redirection process on specific populations of users before redirecting the entire network traffic. Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route traffic matching a given header pattern, such as test customers , to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Redirect traffic to the target cluster router's VIP in stages until all the traffic is on the target cluster router's VIP.
[ "oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/migrating_from_version_3_to_4/planning-considerations-3-4
Chapter 4. Management of monitors using the Ceph Orchestrator
Chapter 4. Management of monitors using the Ceph Orchestrator As a storage administrator, you can deploy additional monitors using placement specification, add monitors using service specification, add monitors to a subnet configuration, and add monitors to specific hosts. Apart from this, you can remove the monitors using the Ceph Orchestrator. By default, a typical Red Hat Ceph Storage cluster has three or five monitor daemons deployed on different hosts. Red Hat recommends deploying five monitors if there are five or more nodes in a cluster. Note Red Hat recommends deploying three monitors when Ceph is deployed with the OSP director. Ceph deploys monitor daemons automatically as the cluster grows, and scales back monitor daemons automatically as the cluster shrinks. The smooth execution of this automatic growing and shrinking depends upon proper subnet configuration. If your monitor nodes or your entire cluster are located on a single subnet, then Cephadm automatically adds up to five monitor daemons as you add new hosts to the cluster. Cephadm automatically configures the monitor daemons on the new hosts. The new hosts reside on the same subnet as the bootstrapped host in the storage cluster. Cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster. 4.1. Ceph Monitors Ceph Monitors are lightweight processes that maintain a master copy of the storage cluster map. All Ceph clients contact a Ceph monitor and retrieve the current copy of the storage cluster map, enabling clients to bind to a pool and read and write data. Ceph Monitors use a variation of the Paxos protocol to establish consensus about maps and other critical information across the storage cluster. Due to the nature of Paxos, Ceph requires a majority of monitors running to establish a quorum, thus establishing consensus. Important Red Hat requires at least three monitors on separate hosts to receive support for a production cluster. Red Hat recommends deploying an odd number of monitors. An odd number of Ceph Monitors has a higher resilience to failures than an even number of monitors. For example, to maintain a quorum on a two-monitor deployment, Ceph cannot tolerate any failures; with three monitors, one failure; with four monitors, one failure; with five monitors, two failures. This is why an odd number is advisable. Summarizing, Ceph needs a majority of monitors to be running and to be able to communicate with each other, two out of three, three out of four, and so on. For an initial deployment of a multi-node Ceph storage cluster, Red Hat requires three monitors, increasing the number two at a time if a valid need for more than three monitors exists. Since Ceph Monitors are lightweight, it is possible to run them on the same host as OpenStack nodes. However, Red Hat recommends running monitors on separate hosts. Important Red Hat ONLY supports collocating Ceph services in containerized environments. When you remove monitors from a storage cluster, consider that Ceph Monitors use the Paxos protocol to establish a consensus about the master storage cluster map. You must have a sufficient number of Ceph Monitors to establish a quorum. Additional Resources See the Red Hat Ceph Storage Supported configurations Knowledgebase article for all the supported Ceph configurations. 4.2. Configuring monitor election strategy The monitor election strategy identifies the net splits and handles failures. 
You can configure the election monitor strategy in three different modes: classic - This is the default mode in which the lowest ranked monitor is voted based on the elector module between the two sites. disallow - This mode lets you mark monitors as disallowed, in which case they will participate in the quorum and serve clients, but cannot be an elected leader. This lets you add monitors to a list of disallowed leaders. If a monitor is in the disallowed list, it will always defer to another monitor. connectivity - This mode is mainly used to resolve network discrepancies. It evaluates connection scores, based on pings that check liveness, provided by each monitor for its peers and elects the most connected and reliable monitor to be the leader. This mode is designed to handle net splits, which may happen if your cluster is stretched across multiple data centers or otherwise susceptible. This mode incorporates connection score ratings and elects the monitor with the best score. If a specific monitor is desired to be the leader, configure the election strategy so that the specific monitor is the first monitor in the list with a rank of 0 . Red Hat recommends you to stay in the classic mode unless you require features in the other modes. Before constructing the cluster, change the election_strategy to classic , disallow , or connectivity in the following command: Syntax 4.3. Deploying the Ceph monitor daemons using the command line interface The Ceph Orchestrator deploys one monitor daemon by default. You can deploy additional monitor daemons by using the placement specification in the command line interface. To deploy a different number of monitor daemons, specify a different number. If you do not specify the hosts where the monitor daemons should be deployed, the Ceph Orchestrator randomly selects the hosts and deploys the monitor daemons to them. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example There are four different ways of deploying Ceph monitor daemons: Method 1 Use placement specification to deploy monitors on hosts: Note Red Hat recommends that you use the --placement option to deploy on specific hosts. Syntax Example Note Be sure to include the bootstrap node as the first node in the command. Important Do not add the monitors individually as ceph orch apply mon supersedes and will not add the monitors to all the hosts. For example, if you run the following commands, then the first command creates a monitor on host01 . Then the second command supersedes the monitor on host1 and creates a monitor on host02 . Then the third command supersedes the monitor on host02 and creates a monitor on host03 . Eventually, there is a monitor only on the third host. Method 2 Use placement specification to deploy specific number of monitors on specific hosts with labels: Add the labels to the hosts: Syntax Example Deploy the daemons: Syntax Example Method 3 Use placement specification to deploy specific number of monitors on specific hosts: Syntax Example Method 4 Deploy monitor daemons randomly on the hosts in the storage cluster: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 4.4. Deploying the Ceph monitor daemons using the service specification The Ceph Orchestrator deploys one monitor daemon by default. You can deploy additional monitor daemons by using the service specification, like a YAML format file. 
Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Create the mon.yaml file: Example Edit the mon.yaml file to include the following details: Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Deploy the monitor daemons: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 4.5. Deploying the monitor daemons on specific network using the Ceph Orchestrator The Ceph Orchestrator deploys one monitor daemon by default. You can explicitly specify the IP address or CIDR network for each monitor and control where each monitor is placed. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example Disable automated monitor deployment: Example Deploy monitors on hosts on specific network: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 4.6. Removing the monitor daemons using the Ceph Orchestrator To remove the monitor daemons from the host, you can just redeploy the monitor daemons on other hosts. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. At least one monitor daemon deployed on the hosts. Procedure Log into the Cephadm shell: Example Run the ceph orch apply command to deploy the required monitor daemons: Syntax If you want to remove monitor daemons from host02 , then you can redeploy the monitors on other hosts. Example Verification List the hosts,daemons, and processes: Syntax Example Additional Resources See Deploying the Ceph monitor daemons using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the Ceph monitor daemons using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. 4.7. Removing a Ceph Monitor from an unhealthy storage cluster You can remove a ceph-mon daemon from an unhealthy storage cluster. An unhealthy storage cluster is one that has placement groups persistently in not active + clean state. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. At least one running Ceph Monitor node. Procedure Identify a surviving monitor and log into the host: Syntax Example Log in to each Ceph Monitor host and stop all the Ceph Monitors: Syntax Example Set up the environment suitable for extended daemon maintenance and to run the daemon interactively: Syntax Example Extract a copy of the monmap file: Syntax Example Remove the non-surviving Ceph Monitor(s): Syntax Example Inject the surviving monitor map with the removed monitor(s) into the surviving Ceph Monitor: Syntax Example Start only the surviving monitors: Syntax Example Verify the monitors form a quorum: Example Optional: Archive the removed Ceph Monitor's data directory in /var/lib/ceph/ CLUSTER_FSID /mon. HOSTNAME directory.
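A condensed sketch of this recovery flow, assuming host00 is the surviving monitor and host01 is the monitor being removed (both placeholder names), and with each command run step by step rather than as a single script, might look like the following:

    # Stop the Ceph Monitor on every monitor host
    cephadm unit --name mon.host00 stop

    # Enter a maintenance shell for the surviving monitor
    cephadm shell --name mon.host00

    # Extract the monitor map from the surviving monitor's store
    ceph-mon -i host00 --extract-monmap /tmp/monmap

    # Remove the failed monitor from the map
    monmaptool /tmp/monmap --rm host01

    # Inject the edited map back into the surviving monitor
    ceph-mon -i host00 --inject-monmap /tmp/monmap

    # Start the surviving monitor and confirm quorum
    cephadm unit --name mon.host00 start
    ceph -s

In this sketch, the extract and inject steps operate on the surviving monitor's own store, so both use the surviving monitor's ID.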
[ "ceph mon set election_strategy {classic|disallow|connectivity}", "cephadm shell", "ceph orch apply mon --placement=\" HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mon --placement=\"host01 host02 host03\"", "ceph orch apply mon host01 ceph orch apply mon host02 ceph orch apply mon host03", "ceph orch host label add HOSTNAME_1 LABEL", "ceph orch host label add host01 mon", "ceph orch apply mon --placement=\" HOST_NAME_1 :mon HOST_NAME_2 :mon HOST_NAME_3 :mon\"", "ceph orch apply mon --placement=\"host01:mon host02:mon host03:mon\"", "ceph orch apply mon --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mon --placement=\"3 host01 host02 host03\"", "ceph orch apply mon NUMBER_OF_DAEMONS", "ceph orch apply mon 3", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "touch mon.yaml", "service_type: mon placement: hosts: - HOST_NAME_1 - HOST_NAME_2", "service_type: mon placement: hosts: - host01 - host02", "cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml", "cd /var/lib/ceph/mon/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mon.yaml", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "cephadm shell", "ceph orch apply mon --unmanaged", "ceph orch daemon add mon HOST_NAME_1 : IP_OR_NETWORK", "ceph orch daemon add mon host03:10.1.2.123", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "cephadm shell", "ceph orch apply mon \" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_3 \"", "ceph orch apply mon \"2 host01 host03\"", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mon", "ssh root@ MONITOR_ID", "ssh root@host00", "cephadm unit --name DAEMON_NAME . HOSTNAME stop", "cephadm unit --name mon.host00 stop", "cephadm shell --name DAEMON_NAME . HOSTNAME", "cephadm shell --name mon.host00", "ceph-mon -i HOSTNAME --extract-monmap TEMP_PATH", "ceph-mon -i host01 --extract-monmap /tmp/monmap 2022-01-05T11:13:24.440+0000 7f7603bd1700 -1 wrote monmap to /tmp/monmap", "monmaptool TEMPORARY_PATH --rm HOSTNAME", "monmaptool /tmp/monmap --rm host01", "ceph-mon -i HOSTNAME --inject-monmap TEMP_PATH", "ceph-mon -i host00 --inject-monmap /tmp/monmap", "cephadm unit --name DAEMON_NAME . HOSTNAME start", "cephadm unit --name mon.host00 start", "ceph -s" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/operations_guide/management-of-monitors-using-the-ceph-orchestrator
Chapter 1. Overview of Red Hat Ansible Automation Platform
Chapter 1. Overview of Red Hat Ansible Automation Platform Red Hat Ansible Automation Platform simplifies the development and operation of automation workloads for managing enterprise application infrastructure lifecycles. Ansible Automation Platform works across multiple IT domains, including operations, networking, security, and development, as well as across diverse hybrid environments. Simple to adopt, use, and understand, Ansible Automation Platform provides the tools needed to rapidly implement enterprise-wide automation, no matter where you are in your automation journey. 1.1. What is included in the Ansible Automation Platform Ansible Automation Platform Automation controller Automation hub Event-Driven Ansible controller Insights for Ansible Automation Platform Platform gateway (Unified UI) 2.5 4.6.0 4.10.0 hosted service 1.1.0 hosted service 1.1 1.2. Red Hat Ansible Automation Platform life cycle Red Hat provides different levels of maintenance for each Ansible Automation Platform release. For more information, see Red Hat Ansible Automation Platform Life Cycle .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/release_notes/platform-introduction
Chapter 12. Connecting cloud integrations to the subscriptions service
Chapter 12. Connecting cloud integrations to the subscriptions service Data collection for certain pay-as-you-go On-Demand subscriptions requires a connection known as a cloud integration, configured with the integrations service of the Hybrid Cloud Console. A cloud integration on the Red Hat Hybrid Cloud Console is a connection to a service, application, or provider that supplies data to another Hybrid Cloud Console service. Through a cloud integration, the connected service can use data from public cloud providers and other services or tools to collect data for that service. Note When a cloud integration is required to track the usage of a subscription, the post-purchase enablement steps generally include information about this requirement. These post-purchase enablement instructions might contain more current information about setting up the cloud integration. The following products require the configuration of a cloud integration to enable data collection for the subscriptions service: Red Hat Enterprise Linux for Third Party Linux Migration with Extended Life Cycle Support Add-on, metered with the cost management service method If you want to use the cost management service to meter the usage of RHEL for Third Party Linux Migration with ELS in the subscriptions service, you must create a cloud integration. The configuration of this cloud integration includes creating a connection between a cloud provider and the cost management service in the Hybrid Cloud Console. This cloud integration ensures that usage data from the cloud provider and from the cost management service is used to calculate metered usage in the subscriptions service and that usage data is returned to the cloud provider for billing purposes. Procedure For Red Hat Enterprise Linux for Third Party Linux Migration with Extended Life Cycle Support Add-on The post-purchase enablement steps for RHEL for Third Party Linux Migration with ELS include information about setting up the required cloud integration, in addition to other setup information required for the subscription. To ensure that your cloud integration is configured correctly for use by the subscriptions service, review the following information and confirm that the cloud integration configuration steps were completed: For more information about the RHEL for Third Party Linux Migration with ELS post-purchase enablement steps, including the step to set up a cloud integration, see the Getting Started with Red Hat Enterprise Linux for Third Party Linux Migration Customer Portal support article. For more information about cloud integrations, see Configuring cloud integrations for Red Hat services . For more information about setting up a cloud integration for the cost management service and a specific cloud platform, see Adding integrations to cost management in the cost management documentation.
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/proc-connecting-cloud-integrations-to-subscriptionwatch_assembly-setting-up-subscriptionwatch-ctxt
OpenShift sandboxed containers
OpenShift sandboxed containers OpenShift Container Platform 4.18 OpenShift sandboxed containers guide Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/openshift_sandboxed_containers/index
probe::ioscheduler.elv_next_request.return
probe::ioscheduler.elv_next_request.return Name probe::ioscheduler.elv_next_request.return - Fires when a request retrieval issues a return signal Synopsis ioscheduler.elv_next_request.return Values disk_major Disk major number of the request disk_minor Disk minor number of the request rq_flags Request flags rq Address of the request name Name of the probe point
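As an illustrative sketch (not taken from the tapset documentation itself), a short SystemTap script can print these values each time the probe fires; the output format here is an arbitrary choice:

    # elv_next_request_return.stp - trace elevator request retrieval returns
    probe ioscheduler.elv_next_request.return {
        printf("%s: disk %d:%d flags=0x%x\n",
               name, disk_major, disk_minor, rq_flags)
    }

Run it with stap elv_next_request_return.stp and stop it with Ctrl+C.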
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ioscheduler-elv-next-request-return
Chapter 26. Has Header Filter Action
Chapter 26. Has Header Filter Action Filter based on the presence of one header 26.1. Configuration Options The following table summarizes the configuration options available for the has-header-filter-action Kamelet: Property Name Description Type Default Example name * Header Name The header name to evaluate. The header name must be passed by the source Kamelet. For Knative only, if you are using Cloud Events, you must include the CloudEvent (ce-) prefix in the header name. string "headerName" Note Fields marked with an asterisk (*) are mandatory. 26.2. Dependencies At runtime, the has-header-filter-action Kamelet relies upon the presence of the following dependencies: camel:core camel:kamelet 26.3. Usage This section describes how you can use the has-header-filter-action . 26.3.1. Knative Action You can use the has-header-filter-action Kamelet as an intermediate step in a Knative binding. has-header-filter-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: has-header-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "my-header" value: "my-value" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: has-header-filter-action properties: name: "my-header" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 26.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 26.3.1.2. Procedure for using the cluster CLI Save the has-header-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f has-header-filter-action-binding.yaml 26.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name has-header-filter-action-binding timer-source?message="Hello" --step insert-header-action -p "step-0.name=my-header" -p "step-0.value=my-value" --step has-header-filter-action -p "step-1.name=my-header" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 26.3.2. Kafka Action You can use the has-header-filter-action Kamelet as an intermediate step in a Kafka binding. has-header-filter-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: has-header-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "my-header" value: "my-value" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: has-header-filter-action properties: name: "my-header" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 26.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 26.3.2.2. Procedure for using the cluster CLI Save the has-header-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. 
Run the action by using the following command: oc apply -f has-header-filter-action-binding.yaml 26.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name has-header-filter-action-binding timer-source?message="Hello" --step insert-header-action -p "step-0.name=my-header" -p "step-0.value=my-value" --step has-header-filter-action -p "step-1.name=my-header" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 26.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/has-header-filter-action.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: has-header-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"my-header\" value: \"my-value\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: has-header-filter-action properties: name: \"my-header\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f has-header-filter-action-binding.yaml", "kamel bind --name has-header-filter-action-binding timer-source?message=\"Hello\" --step insert-header-action -p \"step-0.name=my-header\" -p \"step-0.value=my-value\" --step has-header-filter-action -p \"step-1.name=my-header\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: has-header-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"my-header\" value: \"my-value\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: has-header-filter-action properties: name: \"my-header\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f has-header-filter-action-binding.yaml", "kamel bind --name has-header-filter-action-binding timer-source?message=\"Hello\" --step insert-header-action -p \"step-0.name=my-header\" -p \"step-0.value=my-value\" --step has-header-filter-action -p \"step-1.name=my-header\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/has-header-filter-action
E.2.5. /proc/devices
E.2.5. /proc/devices This file displays the various character and block devices currently configured (not including devices whose modules are not loaded). Below is a sample output from this file: The output from /proc/devices includes the major number and name of the device, and is broken into two major sections: Character devices and Block devices . Character devices are similar to block devices , except for two basic differences: Character devices do not require buffering. Block devices have a buffer available, allowing them to order requests before addressing them. This is important for devices designed to store information - such as hard drives - because the ability to order the information before writing it to the device allows it to be placed in a more efficient order. Character devices send data with no preconfigured size. Block devices can send and receive information in blocks of a size configured per device. For more information about devices, see the devices.txt file in the kernel-doc package (see Section E.5, "Additional Resources" ).
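For example, assuming a standard shell, the file can be inspected directly; the awk filter below is just one way to show only the block device section:

    # Show the full list of configured character and block devices
    cat /proc/devices

    # Show only the block device section
    awk '/^Block devices:/ {show=1} show' /proc/devices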
[ "Character devices: 1 mem 4 /dev/vc/0 4 tty 4 ttyS 5 /dev/tty 5 /dev/console 5 /dev/ptmx 7 vcs 10 misc 13 input 29 fb 36 netlink 128 ptm 136 pts 180 usb Block devices: 1 ramdisk 3 ide0 9 md 22 ide1 253 device-mapper 254 mdp" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-devices
probe::ioblock.request
probe::ioblock.request Name probe::ioblock.request - Fires whenever making a generic block I/O request. Synopsis ioblock.request Values sector beginning sector for the entire bio name name of the probe point devname block device name phys_segments number of segments in this bio after physical address coalescing is performed flags see below BIO_UPTODATE 0 ok after I/O completion BIO_RW_BLOCK 1 RW_AHEAD set, and read/write would block BIO_EOF 2 out-out-bounds error BIO_SEG_VALID 3 nr_hw_seg valid BIO_CLONED 4 doesn't own data BIO_BOUNCED 5 bio is a bounce bio BIO_USER_MAPPED 6 contains user pages BIO_EOPNOTSUPP 7 not supported hw_segments number of segments after physical and DMA remapping hardware coalescing is performed bdev_contains points to the device object which contains the partition (when bio structure represents a partition) vcnt bio vector count which represents number of array element (page, offset, length) which make up this I/O request idx offset into the bio vector array bdev target block device p_start_sect points to the start sector of the partition structure of the device size total size in bytes ino i-node number of the mapped file rw binary trace for read/write request Context The process makes block I/O request
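A minimal SystemTap sketch, with an arbitrarily chosen output format, that prints a few of these values for every block I/O request might look like this:

    # ioblock_request.stp - trace generic block I/O requests
    probe ioblock.request {
        printf("%s: %s rw=%d size=%d sector=%d\n",
               name, devname, rw, size, sector)
    }

Run it with stap ioblock_request.stp while generating some disk activity.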
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ioblock-request
Chapter 3. ConsoleExternalLogLink [console.openshift.io/v1]
Chapter 3. ConsoleExternalLogLink [console.openshift.io/v1] Description ConsoleExternalLogLink is an extension for customizing OpenShift web console log links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleExternalLogLinkSpec is the desired log link configuration. The log link will appear on the logs tab of the pod details page. 3.1.1. .spec Description ConsoleExternalLogLinkSpec is the desired log link configuration. The log link will appear on the logs tab of the pod details page. Type object Required hrefTemplate text Property Type Description hrefTemplate string hrefTemplate is an absolute secure URL (must use https) for the log link including variables to be replaced. Variables are specified in the URL with the format ${variableName}, for instance, ${containerName} and will be replaced with the corresponding values from the resource. Resource is a pod. Supported variables are: - ${resourceName} - name of the resource which contains the logs - ${resourceUID} - UID of the resource which contains the logs - e.g. 11111111-2222-3333-4444-555555555555 - ${containerName} - name of the resource's container that contains the logs - ${resourceNamespace} - namespace of the resource that contains the logs - ${resourceNamespaceUID} - namespace UID of the resource that contains the logs - ${podLabels} - JSON representation of labels matching the pod with the logs - e.g. {"key1":"value1","key2":"value2"} e.g., https://example.com/logs?resourceName=${resourceName}&containerName=${containerName}&resourceNamespace=${resourceNamespace}&podLabels=${podLabels} namespaceFilter string namespaceFilter is a regular expression used to restrict a log link to a matching set of namespaces (e.g., ^openshift- ). The string is converted into a regular expression using the JavaScript RegExp constructor. If not specified, links will be displayed for all the namespaces. text string text is the display text for the link 3.2.
API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleexternalloglinks DELETE : delete collection of ConsoleExternalLogLink GET : list objects of kind ConsoleExternalLogLink POST : create a ConsoleExternalLogLink /apis/console.openshift.io/v1/consoleexternalloglinks/{name} DELETE : delete a ConsoleExternalLogLink GET : read the specified ConsoleExternalLogLink PATCH : partially update the specified ConsoleExternalLogLink PUT : replace the specified ConsoleExternalLogLink /apis/console.openshift.io/v1/consoleexternalloglinks/{name}/status GET : read status of the specified ConsoleExternalLogLink PATCH : partially update status of the specified ConsoleExternalLogLink PUT : replace status of the specified ConsoleExternalLogLink 3.2.1. /apis/console.openshift.io/v1/consoleexternalloglinks HTTP method DELETE Description delete collection of ConsoleExternalLogLink Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleExternalLogLink Table 3.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLinkList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleExternalLogLink Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 202 - Accepted ConsoleExternalLogLink schema 401 - Unauthorized Empty 3.2.2. /apis/console.openshift.io/v1/consoleexternalloglinks/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the ConsoleExternalLogLink HTTP method DELETE Description delete a ConsoleExternalLogLink Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleExternalLogLink Table 3.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleExternalLogLink Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleExternalLogLink Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 401 - Unauthorized Empty 3.2.3. /apis/console.openshift.io/v1/consoleexternalloglinks/{name}/status Table 3.15. 
Global path parameters Parameter Type Description name string name of the ConsoleExternalLogLink HTTP method GET Description read status of the specified ConsoleExternalLogLink Table 3.16. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleExternalLogLink Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleExternalLogLink Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 401 - Unauthorized Empty
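As an illustration of the spec fields described in section 3.1, a hypothetical ConsoleExternalLogLink resource (the name, URL, and namespace filter below are placeholders) might look like this:

    apiVersion: console.openshift.io/v1
    kind: ConsoleExternalLogLink
    metadata:
      name: example-log-link
    spec:
      text: Example Logs
      hrefTemplate: 'https://example.com/logs?resourceName=${resourceName}&containerName=${containerName}&resourceNamespace=${resourceNamespace}&podLabels=${podLabels}'
      namespaceFilter: ^openshift-

Saving this as a file and running oc create -f against it exercises the POST endpoint listed above.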
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/console_apis/consoleexternalloglink-console-openshift-io-v1
Chapter 1. Introduction to Directory Server Performance Tuning
Chapter 1. Introduction to Directory Server Performance Tuning This article provides some procedures and options that administrators can use to optimize the performance of their Red Hat Directory Server deployments. Performance tuning a Directory Server instance is unique to each server because every server differs in its machine environment, directory size and data type, load and network use, and even in the types of operations that users and clients perform. The purpose of this guide is to highlight the features that Red Hat Directory Server provides for tracking and assessing server and database performance. It also gives some procedures to help tune server performance. For more in-depth planning information, however, check out the Red Hat Directory Server Deployment Guide , and for command-line and UI-based administrative instructions, see the Red Hat Directory Server Administration Guide . 1.1. Setting Goals for Directory Server Performance Performance tuning is simply a way to identify potential (or real) bottlenecks in the normal operating environment of the server and then to take steps to mitigate those bottlenecks. The general plan for performance tuning is: Assess the environment. Look at everything around the Directory Server: its usage, the load, the network connection and reliability, most common operations, the physical machine it is on, along with any services competing for its resources. Measure the current Directory Server performance and establish baselines. Identify the server areas which can be improved. Make any changes to the Directory Server settings and, potentially, to the host machine. Measure the Directory Server performance again to see how the changes affected the performance. Directory Server provides monitoring in three areas: The server process (counters and logs) The databases (counters) Any database links (counters) In the Directory Server, most performance measurements come down to how well the Directory Server retrieves and delivers information to clients. With that in mind, these are the server areas that can be tuned for the best Directory Server performance (and these are the areas covered in this article): Search operations Indexing performance (which affects both search and write operations) Database transactions Database and entry cache settings Database links Other changes can be made to the host machine's settings or hardware which can also affect Directory Server performance: Available memory (based on directory size) Other servers running on the same machine (which could compete for resources) Distributing user databases across other Directory Server instances on other machines Balancing server loads due to network performance These changes relate much more to planning an effective Directory Server deployment than changes that can be made to an instance. Reviewing the Deployment Guide can provide more detail about how to plan an optimal enterprise deployment.
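For example, one common way to take a baseline of the server process counters mentioned above is to read the cn=monitor entry over LDAP; the host name and bind DN below are placeholders for your own values:

    # Read the server's monitoring counters as a baseline snapshot
    ldapsearch -x -H ldap://server.example.com:389 \
        -D "cn=Directory Manager" -W \
        -b "cn=monitor" -s base

Repeating the same search after a configuration change, and comparing the counters, gives a simple before-and-after measurement.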
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/tuning-preface
Chapter 3. Ruby Examples
Chapter 3. Ruby Examples 3.1. Connecting to the Red Hat Virtualization Manager The Connection class is the entry point of the software development kit. It provides access to the services of the Red Hat Virtualization Manager's REST API. The parameters of the Connection class are: url - Base URL of the Red Hat Virtualization Manager API username password ca_file - PEM file containing the trusted CA certificates. The ca.pem file is required when connecting to a server protected by TLS. If you do not specify the ca_file , the system-wide CA certificate store is used. Connecting to the Red Hat Virtualization Manager connection = OvirtSDK4::Connection.new( url: 'https://engine.example.com/ovirt-engine/api', username: 'admin@internal', password: '...', ca_file: 'ca.pem', ) Important The connection holds critical resources, including a pool of HTTP connections to the server and an authentication token. You must free these resources when they are no longer in use: The connection, and all the services obtained from it, cannot be used after the connection has been closed. If the connection fails, the software development kit will raise an Error exception, containing details of the failure. For more information, see Connection:initialize .
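Because the connection must be freed when it is no longer needed, a common pattern, shown here as a sketch rather than as part of the SDK documentation, is to wrap its use in a begin/ensure block:

    require 'ovirtsdk4'

    # Open the connection as described above
    connection = OvirtSDK4::Connection.new(
      url: 'https://engine.example.com/ovirt-engine/api',
      username: 'admin@internal',
      password: '...',
      ca_file: 'ca.pem',
    )

    begin
      # Use the connection, for example to obtain the entry point service
      system_service = connection.system_service
    ensure
      # Always release the HTTP connections and the authentication token
      connection.close
    end

This way the connection is closed even if an Error exception is raised while it is being used.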
[ "connection = OvirtSDK4::Connection.new( url: 'https://engine.example.com/ovirt-engine/api', username: 'admin@internal', password: '...', ca_file: 'ca.pem', )", "connection.close" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/chap-Ruby_Examples
Machine management
Machine management OpenShift Container Platform 4.18 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "apiVersion: 
machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 placementGroupPartition: <placement_group_partition_number> 6", "providerSpec: value: metadataServiceOptions: authentication: Required 1", "providerSpec: placement: tenancy: dedicated", "providerSpec: value: spotMarketOptions: {}", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.31.3 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.31.3 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.31.3 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.31.3 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.31.3 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.31.3", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h", "oc get machines -n openshift-machine-api | grep worker", "preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h", "oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>", "jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json \"g4dn.xlarge\"", "oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -", "10c10 < \"name\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\", --- > \"name\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\", 21c21 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 31c31 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 60c60 < \"instanceType\": \"g4dn.xlarge\", --- > \"instanceType\": \"m5.xlarge\",", "oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json", "machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created", "oc -n openshift-machine-api get machinesets | grep gpu", "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s", "oc -n openshift-machine-api get machines | grep gpu", "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true 
feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "providerSpec: value: spotVMOptions: {}", "oc edit machineset <machine-set-name>", "providerSpec: value: osDisk: diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4", "oc create -f <machine-set-config>.yaml", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2", "\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }", "oc -n openshift-machine-api get 
secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4", "oc create -f <machine-set-name>.yaml", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m", "oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml", "cat machineset-azure.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"0\" machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T14:08:19Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker 
machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"23601\" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1", "cp machineset-azure.yaml machineset-azure-gpu.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"1\" machine.openshift.io/memoryMb: \"28672\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T20:27:12Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"166285\" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: 
myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1", "diff machineset-azure.yaml machineset-azure-gpu.yaml", "14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3", "oc create -f machineset-azure-gpu.yaml", "machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h", "oc get machines -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m", "oc get nodes", "NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.31.3 myclustername-master-1 Ready control-plane,master 6h41m v1.31.3 myclustername-master-2 Ready control-plane,master 6h39m v1.31.3 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.31.3 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.31.3 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.31.3 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.31.3", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h", "oc create -f machineset-azure-gpu.yaml", "get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h", "oc get machineset -n openshift-machine-api | grep gpu", "myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m", "oc -n openshift-machine-api get machines | grep gpu", "myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d 
nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: 
machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m 
agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3", "providerSpec: value: preemptible: true", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3", "providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5", "machineType: a2-highgpu-1g onHostMaintenance: Terminate", "{ \"apiVersion\": \"machine.openshift.io/v1beta1\", \"kind\": \"MachineSet\", \"metadata\": { \"annotations\": { \"machine.openshift.io/GPU\": \"0\", \"machine.openshift.io/memoryMb\": \"16384\", \"machine.openshift.io/vCPU\": \"4\" }, \"creationTimestamp\": \"2023-01-13T17:11:02Z\", \"generation\": 1, \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\" }, \"name\": \"myclustername-2pt9p-worker-gpu-a\", \"namespace\": \"openshift-machine-api\", \"resourceVersion\": \"20185\", \"uid\": \"2daf4712-733e-4399-b4b4-d43cb1ed32bd\" }, \"spec\": { \"replicas\": 1, \"selector\": { 
\"matchLabels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"template\": { \"metadata\": { \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machine-role\": \"worker\", \"machine.openshift.io/cluster-api-machine-type\": \"worker\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"spec\": { \"lifecycleHooks\": {}, \"metadata\": {}, \"providerSpec\": { \"value\": { \"apiVersion\": \"machine.openshift.io/v1beta1\", \"canIPForward\": false, \"credentialsSecret\": { \"name\": \"gcp-cloud-credentials\" }, \"deletionProtection\": false, \"disks\": [ { \"autoDelete\": true, \"boot\": true, \"image\": \"projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64\", \"labels\": null, \"sizeGb\": 128, \"type\": \"pd-ssd\" } ], \"kind\": \"GCPMachineProviderSpec\", \"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\", \"metadata\": { \"creationTimestamp\": null }, \"networkInterfaces\": [ { \"network\": \"myclustername-2pt9p-network\", \"subnetwork\": \"myclustername-2pt9p-worker-subnet\" } ], \"preemptible\": true, \"projectID\": \"myteam\", \"region\": \"us-central1\", \"serviceAccounts\": [ { \"email\": \"[email protected]\", \"scopes\": [ \"https://www.googleapis.com/auth/cloud-platform\" ] } ], \"tags\": [ \"myclustername-2pt9p-worker\" ], \"userDataSecret\": { \"name\": \"worker-user-data\" }, \"zone\": \"us-central1-a\" } } } } }, \"status\": { \"availableReplicas\": 1, \"fullyLabeledReplicas\": 1, \"observedGeneration\": 1, \"readyReplicas\": 1, \"replicas\": 1 } }", "oc get nodes", "NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.31.3 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.31.3 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.31.3 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.31.3 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.31.3 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.31.3 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.31.3", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h", "oc get machines -n openshift-machine-api | grep worker", "myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h", "oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>", "jq .spec.template.spec.providerSpec.value.machineType ocp_4.18_machineset-a2-highgpu-1g.json \"a2-highgpu-1g\"", "\"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\",", "oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.18_machineset-a2-highgpu-1g.json -", "15c15 < \"name\": \"myclustername-2pt9p-worker-gpu-a\", --- > \"name\": \"myclustername-2pt9p-worker-a\", 25c25 < 
\"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 34c34 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 59,60c59 < \"machineType\": \"a2-highgpu-1g\", < \"onHostMaintenance\": \"Terminate\", --- > \"machineType\": \"n2-standard-4\",", "oc create -f ocp_4.18_machineset-a2-highgpu-1g.json", "machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created", "oc -n openshift-machine-api get machinesets | grep gpu", "myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m", "oc -n openshift-machine-api get machines | grep gpu", "myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: 
MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_PrivateUSD type: RegEx processorType: Shared processors: \"0.5\" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: 
labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: 11 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 12 userDataSecret: name: <user_data_secret> 13 vcpuSockets: 4 14 vcpusPerSocket: 1 15", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: 
<infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: 
<machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> configDrive: true 9", "oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable=\"true\"", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data configDrive: true 5", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 
machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'", "oc get secret -n openshift-machine-api vsphere-cloud-credentials -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>", "oc create secret generic vsphere-cloud-credentials -n openshift-machine-api --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>", "oc get secret -n openshift-machine-api worker-user-data -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "disableTemplating: false userData: 1 { \"ignition\": { }, }", "oc create secret generic worker-user-data -n openshift-machine-api --from-file=<installation_directory>/worker.ign", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m 
agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" numCPUs: 4 numCoresPerSocket: 4 snapshot: \"\" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions", "urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2", "oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains}", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: network: devices: 1 - networkName: \"<vm_network_name_1>\" - networkName: \"<vm_network_name_2>\" template: <vm_template_name> 2 workspace: datacenter: <vcenter_data_center_name> 3 datastore: <vcenter_datastore_name> 4 folder: <vcenter_vm_folder_path> 5 resourcepool: <vsphere_resource_pool> 6 server: <vcenter_server_ip> 7", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 
machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data-managed", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api", "oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines.machine.openshift.io", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m", "oc edit machinesets.machine.openshift.io <machine_set_name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: 
<machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h", "oc annotate machine.machine.openshift.io/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=4 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s", "oc scale --replicas=2 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api", "oc describe machine.machine.openshift.io <machine_name_updated_1> -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s", "NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s", "oc get machine -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t status: phase: Running 1", "oc get machine -n openshift-machine-api", "oc delete machine <machine> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: <hook_name> 1 owner: <hook_owner> 2", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preTerminate: - name: <hook_name> 1 owner: <hook_owner> 2", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: 1 - name: MigrateImportantApp owner: my-app-migration-controller preTerminate: 2 - name: BackupFileSystem owner: my-backup-controller - name: CloudProviderSpecialCase owner: my-custom-storage-detach-controller 3 - name: WaitForStorageDetach owner: my-custom-storage-detach-controller", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2", "apiVersion: 
\"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: <gpu_type> 7 min: 0 8 max: 16 9 logVerbosity: 4 10 scaleDown: 11 enabled: true 12 delayAfterAdd: 10m 13 delayAfterDelete: 5m 14 delayAfterFailure: 30s 15 unneededTime: 5m 16 utilizationThreshold: \"0.4\" 17 expanders: [\"Random\"] 18", "oc get machinesets.machine.openshift.io", "NAME DESIRED CURRENT READY AVAILABLE AGE archive-agl030519-vplxk-worker-us-east-1c 1 1 1 1 25m fast-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-02-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-03-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m fast-04-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m prod-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 33m prod-02-agl030519-vplxk-worker-us-east-1c 1 1 1 1 33m", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-autoscaler-priority-expander 1 namespace: openshift-machine-api 2 data: priorities: |- 3 10: - .*fast.* - .*archive.* 40: - .*prod.*", "oc create configmap cluster-autoscaler-priority-expander --from-file=<location_of_config_map_file>/cluster-autoscaler-priority-expander.yml", "oc get configmaps cluster-autoscaler-priority-expander -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc create -f <filename>.yaml 1", "apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6", "oc create -f <filename>.yaml 1", "oc get MachineAutoscaler -n openshift-machine-api", "NAME REF KIND REF NAME MIN MAX AGE compute-us-east-1a MachineSet compute-us-east-1a 1 12 39m compute-us-west-1a MachineSet compute-us-west-1a 2 4 37m", "oc get MachineAutoscaler/<machine_autoscaler_name> \\ 1 -n openshift-machine-api -o yaml> <machine_autoscaler_name_backup>.yaml 2", "oc delete MachineAutoscaler/<machine_autoscaler_name> -n openshift-machine-api", "machineautoscaler.autoscaling.openshift.io \"compute-us-east-1a\" deleted", "oc get MachineAutoscaler -n openshift-machine-api", "oc get ClusterAutoscaler", "NAME AGE default 42m", "oc get ClusterAutoscaler/default \\ 1 -o yaml> <cluster_autoscaler_backup_name>.yaml 2", "oc delete ClusterAutoscaler/default", "clusterautoscaler.autoscaling.openshift.io \"default\" deleted", "oc get ClusterAutoscaler", "No resources found", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra 3 machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: ami: id: 
ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: 
machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: \"1\" 22", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 
userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> name: <infrastructure_id>-<infra>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: 11 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 12 
userDataSecret: name: <user_data_secret> 13 vcpuSockets: 4 14 vcpusPerSocket: 1 15 taints: 16 - key: node-role.kubernetes.io/infra effect: NoSchedule", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n 
openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 
3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Exists 6 value: reserved 7", "spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit 
ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.31.3", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute metricsServer: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: 
NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute monitoringPlugin: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none>", "oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"", "oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 3", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 2 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 3 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\"", "oc get pods -n openshift-vertical-pod-autoscaler -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> 
vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none>", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.128.2.32 ip-10-0-14-183.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.130.2.10 ip-10-0-20-140.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.131.0.33 ip-10-0-2-39.us-west-2.compute.internal <none> <none>", "NAME STATUS ROLES AGE VERSION ip-10-0-14-183.us-west-2.compute.internal Ready control-plane,master 65m v1.31.3 ip-10-0-2-39.us-west-2.compute.internal Ready worker 58m v1.31.3 ip-10-0-20-140.us-west-2.compute.internal Ready control-plane,master 65m v1.31.3 ip-10-0-23-244.us-west-2.compute.internal Ready infra 55m v1.31.3 ip-10-0-77-153.us-west-2.compute.internal Ready control-plane,master 65m v1.31.3 ip-10-0-99-108.us-west-2.compute.internal Ready worker 24m v1.31.3 ip-10-0-24-233.us-west-2.compute.internal Ready infra 55m v1.31.3 ip-10-0-88-109.us-west-2.compute.internal Ready worker 24m v1.31.3 ip-10-0-67-453.us-west-2.compute.internal Ready infra 55m v1.31.3", "oc edit -n clusterresourceoverride-operator subscriptions.operators.coreos.com clusterresourceoverride", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"", "oc edit ClusterResourceOverride cluster -n clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster resourceVersion: \"37952\" spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 deploymentOverrides: replicas: 1 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 deploymentOverrides: replicas: 3 nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"value\" effect: \"NoSchedule\"", "oc get pods -n clusterresourceoverride-operator -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.127.2.25 ip-10-0-23-244.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.128.0.80 ip-10-0-24-233.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.129.0.71 ip-10-0-67-453.us-west-2.compute.internal <none> <none>", "aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, 
&CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.8*\" \\ 3 --region us-east-1 \\ 4 --output table 5", "------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.18-for-rhel-8-x86_64-rpms\"", "yum install openshift-ansible openshift-clients jq", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.18-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "oc get nodes -o wide", "oc adm cordon <node_name> 1", "oc adm drain <node_name> --force 
--delete-emptydir-data --ignore-daemonsets 1", "oc delete nodes <node_name> 1", "oc get nodes -o wide", "aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.8*\" \\ 3 --region us-east-1 \\ 4 --output table 5", "------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.18-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com [new_workers] mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "aws cloudformation describe-stacks --stack-name <name>", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 
Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main 
coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master", "NAME PHASE TYPE REGION ZONE AGE <infrastructure_id>-master-0 Running m6i.xlarge us-west-1 us-west-1a 5h19m <infrastructure_id>-master-1 Running m6i.xlarge us-west-1 us-west-1b 5h19m <infrastructure_id>-master-2 Running m6i.xlarge us-west-1 us-west-1a 5h19m", "No resources found in openshift-machine-api namespace.", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api", "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc create -f <control_plane_machine_set>.yaml", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "openstack compute service set <target_node_host_name> nova-compute --disable", "oc get machines -l 
machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api", "oc delete machine -n openshift-machine-api <control_plane_machine_name> 1", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: \"\" volumeSize: 120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 availabilityZone: \"\" 10 tenancy: 11 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 12 subnet: {} 13 userDataSecret: name: master-user-data 14", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "providerSpec: value: instanceType: <compatible_aws_instance_type> 1", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 placementGroupPartition: <placement_group_partition_number> 6", "providerSpec: value: metadataServiceOptions: authentication: Required 1", "providerSpec: placement: tenancy: dedicated", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} 
image: 2 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: \"\" version: \"\" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: \"1\" 11", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: azure: - zone: \"1\" 1 - zone: \"2\" - zone: \"3\" platform: Azure 2", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2", "\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": 
\"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1", "oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master", "providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get ControlPlaneMachineSet/cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: 
GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: \"\" 8", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: type: pd-ssd 1", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 1 categories: 2 - key: <category_name> value: <category_value> cluster: 3 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 4 image: 5 name: <cluster_id>-rhcos type: name kind: NutanixMachineProviderConfig 6 memorySize: 16Gi 7 metadata: creationTimestamp: null project: 8 type: name name: <project_name> subnets: 9 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 10 userDataSecret: name: master-user-data 11 vcpuSockets: 8 12 vcpusPerSocket: 1 13", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials 1 namespace: openshift-machine-api flavor: m1.xlarge 2 image: ocp1-2g2xs-rhcos kind: OpenstackProviderSpec 3 metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: ocp1-2g2xs-nodes tags: openshiftClusterID=ocp1-2g2xs securityGroups: - filter: {} name: ocp1-2g2xs-master 4 serverGroupName: ocp1-2g2xs-master serverMetadata: Name: ocp1-2g2xs-master openshiftClusterID: ocp1-2g2xs tags: - openshiftClusterID=ocp1-2g2xs trunk: true 
userDataSecret: name: master-user-data", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: platform: OpenStack openstack: - availabilityZone: nova-az0 rootVolume: availabilityZone: cinder-az0 - availabilityZone: nova-az1 rootVolume: availabilityZone: cinder-az1 - availabilityZone: nova-az2 rootVolume: availabilityZone: cinder-az2", "providerSpec: value: flavor: m1.xlarge 1", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: \"\" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: 10 datacenter: <vcenter_data_center_name> 11 datastore: <vcenter_datastore_name> 12 folder: <path_to_vcenter_vm_folder> 13 resourcePool: <vsphere_resource_pool> 14 server: <vcenter_server_ip> 15", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: 1 platform: VSphere vsphere: 2 - name: <failure_domain_name_1> - name: <failure_domain_name_2>", "oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains[0].name}", "https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions", "urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api", "oc edit machine <control_plane_machine_name>", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api -o wide", "oc edit machine <control_plane_machine_name>", "oc edit machine/<cluster_id>-master-0 -n openshift-machine-api", "providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.14 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 rootVolume: availabilityZone: nova 1 diskSize: 30 sourceUUID: rhcos-4.12 volumeType: fast-0 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data", "oc describe controlplanemachineset.machine.openshift.io/cluster --namespace 
openshift-machine-api", "oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api", "oc edit machine/<cluster_id>-master-1 -n openshift-machine-api", "providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.18 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master-az0 1 serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data", "oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api", "oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api", "oc delete controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api", "oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'", "apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api", "oc create -f <cluster_resource_file>.yaml", "oc get cluster", "NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <machine_template_kind> 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3", "oc create -f <machine_template_resource_file>.yaml", "oc get <machine_template_kind>", "NAME AGE <template_name> 77m", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: 3", "oc create -f <machine_set_resource_file>.yaml", "oc get machineset -n openshift-cluster-api 1", "NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m", "oc get machine -n openshift-cluster-api 1", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s", "oc get node", "NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.28.5 <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.28.5 <ip_address_3>.<region>.compute.internal Ready worker 7m v1.28.5", "oc get <machine_template_kind> 1", "NAME AGE <template_name> 77m", "oc get <machine_template_kind> <template_name> -o yaml > <template_name>.yaml", "oc apply -f <modified_template_name>.yaml 1", "oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api", "NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <compute_machine_set_name_1> <cluster_name> 1 1 1 26m <compute_machine_set_name_2> <cluster_name> 1 1 1 26m", "oc edit machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: 
openshift-cluster-api spec: replicas: 2 1", "oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h", "oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> -n openshift-cluster-api cluster.x-k8s.io/delete-machine=\"true\"", "oc scale --replicas=4 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api", "oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioned 55s <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioning 55s", "oc scale --replicas=2 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api", "oc describe machines.cluster.x-k8s.io <machine_name_updated_1> -n openshift-cluster-api", "oc get machines.cluster.x-k8s.io -n openshift-cluster-api cluster.x-k8s.io/set-name=<machine_set_name>", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m", "apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: controlPlaneEndpoint: 2 host: <control_plane_endpoint_address> port: 6443 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 3 name: <cluster_name> namespace: openshift-cluster-api", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: # instanceType: m5.large ignition: storageType: UnencryptedUserData version: \"3.2\" ami: id: # subnet: filters: - name: tag:Name values: - # additionalSecurityGroups: - filters: - name: tag:Name values: - #", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: 
<cluster_name> 2 spec: clusterName: <cluster_name> 3 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 4 name: <template_name> 5", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> 3 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 4 name: <template_name> 5 failureDomain: <failure_domain> 6", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AzureMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 disableExtensionOperations: true identity: UserAssigned image: id: /subscriptions/<subscription_id>/resourceGroups/<cluster_name>-rg/providers/Microsoft.Compute/galleries/gallery_<compliant_cluster_name>/images/<cluster_name>-gen2/versions/latest 4 networkInterfaces: - acceleratedNetworking: true privateIPConfigs: 1 subnetName: <cluster_name>-worker-subnet osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux sshPublicKey: <ssh_key_value> userAssignedIdentities: - providerID: 'azure:///subscriptions/<subscription_id>/resourcegroups/<cluster_name>-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<cluster_name>-identity' vmSize: Standard_D4s_v3", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AzureMachineTemplate 3 name: <template_name> 4", "apiVersion: 
infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 flavor: <openstack_node_machine_flavor> 4 image: filter: name: <openstack_image> 5", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 4 name: <template_name> 5 failureDomain: <nova_availability_zone> 6", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: VSphereMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 template: <vm_template_name> 4 server: <vcenter_server_ip> 5 diskGiB: 128 cloneMode: linkedClone 6 datacenter: <vcenter_data_center_name> 7 datastore: <vcenter_datastore_name> 8 folder: <vcenter_vm_folder_path> 9 resourcePool: <vsphere_resource_pool> 10 numCPUs: 4 memoryMiB: 16384 network: devices: - dhcp4: true networkName: \"<vm_network_name>\" 11", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> 2 spec: clusterName: <cluster_name> 3 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: VSphereMachineTemplate 4 name: <template_name> 5 failureDomain: 6 - name: <failure_domain_name> region: <region_a> zone: <zone_a> server: <vcenter_server_name> topology: datacenter: <region_a_data_center> computeCluster: \"</region_a_data_center/host/zone_a_cluster>\" resourcePool: \"</region_a_data_center/host/zone_a_cluster/Resources/resource_pool>\" datastore: \"</region_a_data_center/datastore/datastore_a>\" networks: - port-group", "oc delete machine.machine.openshift.io <machine_name>", "oc delete machine.cluster.x-k8s.io <machine_name>", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc apply -f healthcheck.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: 
machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 6 status: \"False\" - type: \"Ready\" timeout: \"300s\" 7 status: \"Unknown\" maxUnhealthy: \"40%\" 8 nodeStartupTimeout: \"10m\" 9", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> remediationTemplate: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate name: metal3-remediation-template namespace: openshift-machine-api unhealthyConditions: - type: \"Ready\" timeout: \"300s\"", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate metadata: name: metal3-remediation-template namespace: openshift-machine-api spec: template: spec: strategy: type: Reboot retryLimit: 1 timeout: 5m0s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/machine_management/index
High Availability Deployment and Usage
High Availability Deployment and Usage Red Hat OpenStack Platform 16.2 Planning, deploying, and managing high availability in Red Hat OpenStack Platform OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/high_availability_deployment_and_usage/index
probe::kprocess.create
probe::kprocess.create Name probe::kprocess.create - Fires whenever a new process is successfully created Synopsis Values new_pid The PID of the newly created process Context The parent of the created process. Description Fires whenever a new process is successfully created, either as a result of fork (or one of its syscall variants) or the creation of a new kernel thread.
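As an illustration only (not part of the original tapset reference), a minimal SystemTap script that uses this probe could look like the following; the new_pid value comes from the probe as described above, while the script name and output format are assumptions:

probe kprocess.create {
  # execname() and pid() describe the parent; new_pid is supplied by the probe
  printf("%s (pid %d) created process with PID %d\n", execname(), pid(), new_pid)
}

Saved as, for example, kprocess_create.stp, the script can be run with stap kprocess_create.stp and prints one line each time a process or kernel thread is created.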
[ "kprocess.create" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-kprocess-create
A.11. VDSM Hook Examples
A.11. VDSM Hook Examples The example hook scripts provided in this section are not supported by Red Hat. You must ensure that any and all hook scripts that you install on your system, regardless of source, are thoroughly tested for your environment. Example A.5. NUMA Node Tuning Purpose: This hook script allows you to tune the allocation of memory on a NUMA host based on the numaset custom property. Where the custom property is not set, no action is taken. Configuration String: The regular expression used allows the numaset custom property for a given virtual machine to specify both the allocation mode ( interleave , strict , preferred ) and the nodeset to use. The two values are separated by a colon ( : ). The regular expression allows the nodeset to be specified as: a specific node ( numaset=strict:1 specifies that only node 1 be used), a range of nodes ( numaset=strict:1-4 specifies that nodes 1 through 4 be used), a specific node to be excluded ( numaset=strict:^3 specifies that node 3 not be used), or any comma-separated combination of the above ( numaset=strict:1-4,6 specifies that nodes 1 to 4, and 6 be used). Script: /usr/libexec/vdsm/hooks/before_vm_start/50_numa
[ "numaset=^(interleave|strict|preferred):[\\^]?\\d+(-\\d+)?(,[\\^]?\\d+(-\\d+)?)*USD", "#!/usr/bin/python import os import sys import hooking import traceback ''' numa hook ========= add numa support for domain xml: <numatune> <memory mode=\"strict\" nodeset=\"1-4,^3\" /> </numatune> memory=interleave|strict|preferred numaset=\"1\" (use one NUMA node) numaset=\"1-4\" (use 1-4 NUMA nodes) numaset=\"^3\" (don't use NUMA node 3) numaset=\"1-4,^3,6\" (or combinations) syntax: numa=strict:1-4 ''' if os.environ.has_key('numa'): try: mode, nodeset = os.environ['numa'].split(':') domxml = hooking.read_domxml() domain = domxml.getElementsByTagName('domain')[0] numas = domxml.getElementsByTagName('numatune') if not len(numas) > 0: numatune = domxml.createElement('numatune') domain.appendChild(numatune) memory = domxml.createElement('memory') memory.setAttribute('mode', mode) memory.setAttribute('nodeset', nodeset) numatune.appendChild(memory) hooking.write_domxml(domxml) else: sys.stderr.write('numa: numa already exists in domain xml') sys.exit(2) except: sys.stderr.write('numa: [unexpected error]: %s\\n' % traceback.format_exc()) sys.exit(2)" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/vdsm_hooks_examples
Chapter 4. Examples
Chapter 4. Examples This chapter demonstrates the use of AMQ .NET through example programs. For more examples, see the AMQ .NET example suite and the AMQP.Net Lite examples . 4.1. Sending messages This client program connects to a server using <connection-url> , creates a sender for target <address> , sends a message containing <message-body> , closes the connection, and exits. Example: Sending messages namespace SimpleSend { using System; using Amqp; 1 class SimpleSend { static void Main(string[] args) { string url = (args.Length > 0) ? args[0] : 2 "amqp://guest:[email protected]:5672"; string target = (args.Length > 1) ? args[1] : "examples"; 3 int count = (args.Length > 2) ? Convert.ToInt32(args[2]) : 10; 4 Address peerAddr = new Address(url); 5 Connection connection = new Connection(peerAddr); 6 Session session = new Session(connection); SenderLink sender = new SenderLink(session, "send-1", target); 7 for (int i = 0; i < count; i++) { Message msg = new Message("simple " + i); 8 sender.Send(msg); 9 Console.WriteLine("Sent: " + msg.Body.ToString()); } sender.Close(); 10 session.Close(); connection.Close(); } } } 1 using Amqp; Imports types defined in the Amqp namespace. Amqp is defined by a project reference to library file Amqp.Net.dll and provides all the classes, interfaces, and value types associated with AMQ .NET. 2 Command line arg[0] url is the network address of the host or virtual host for the AMQP connection. This string describes the connection transport, the user and password credentials, and the port number for the connection on the remote host. url may address a broker, a standalone peer, or an ingress point for a router network. 3 Command line arg[1] target is the name of the message destination endpoint or resource in the remote host. 4 Command line arg[2] count is the number of messages to send. 5 peerAddr is a structure required for creating an AMQP connection. 6 Create the AMQP connection. 7 sender is a client SenderLink over which messages may be sent. The link is arbitrarily named send-1 . Use link names that make sense in your environment and will help to identify traffic in a busy system. Link names are not restricted but must be unique within the same session. 8 In the message send loop a new message is created. 9 The message is sent to the AMQP peer. 10 After all messages are sent then the protocol objects are shut down in an orderly fashion. Running the example To run the example program, compile it and execute it from the command line. For more information, see Chapter 3, Getting started . <install-dir> \bin\Debug>simple_send "amqp://guest:guest@localhost" service_queue 4.2. Receiving messages This client program connects to a server using <connection-url> , creates a receiver for source <address> , and receives messages until it is terminated or it reaches <count> messages. Example: Receiving messages namespace SimpleRecv { using System; using Amqp; 1 class SimpleRecv { static void Main(string[] args) { string url = (args.Length > 0) ? args[0] : 2 "amqp://guest:[email protected]:5672"; string source = (args.Length > 1) ? args[1] : "examples"; 3 int count = (args.Length > 2) ? 
Convert.ToInt32(args[2]) : 10; 4 Address peerAddr = new Address(url); 5 Connection connection = new Connection(peerAddr); 6 Session session = new Session(connection); ReceiverLink receiver = new ReceiverLink(session, "recv-1", source); 7 for (int i = 0; i < count; i++) { Message msg = receiver.Receive(); 8 receiver.Accept(msg); 9 Console.WriteLine("Received: " + msg.Body.ToString()); } receiver.Close(); 10 session.Close(); connection.Close(); } } } 1 using Amqp; Imports types defined in the Amqp namespace. Amqp is defined by a project reference to library file Amqp.Net.dll and provides all the classes, interfaces, and value types associated with AMQ .NET. 2 Command line arg[0] url is the network address of the host or virtual host for the AMQP connection. This string describes the connection transport, the user and password credentials, and the port number for the connection on the remote host. url may address a broker, a standalone peer, or an ingress point for a router network. 3 Command line arg[1] source is the name of the message source endpoint or resource in the remote host. 4 Command line arg[2] count is the number of messages to receive. 5 peerAddr is a structure required for creating an AMQP connection. 6 Create the AMQP connection. 7 receiver is a client ReceiverLink over which messages may be received. The link is arbitrarily named recv-1 . Use link names that make sense in your environment and will help to identify traffic in a busy system. Link names are not restricted but must be unique within the same session. 8 A message is received. 9 The message is accepted. This transfers ownership of the message from the peer to the receiver. 10 After all messages are received, the protocol objects are shut down in an orderly fashion. Running the example To run the example program, compile it and execute it from the command line. For more information, see Chapter 3, Getting started . <install-dir> \bin\Debug>simple_recv "amqp://guest:guest@localhost" service_queue
[ "namespace SimpleSend { using System; using Amqp; 1 class SimpleSend { static void Main(string[] args) { string url = (args.Length > 0) ? args[0] : 2 \"amqp://guest:[email protected]:5672\"; string target = (args.Length > 1) ? args[1] : \"examples\"; 3 int count = (args.Length > 2) ? Convert.ToInt32(args[2]) : 10; 4 Address peerAddr = new Address(url); 5 Connection connection = new Connection(peerAddr); 6 Session session = new Session(connection); SenderLink sender = new SenderLink(session, \"send-1\", target); 7 for (int i = 0; i < count; i++) { Message msg = new Message(\"simple \" + i); 8 sender.Send(msg); 9 Console.WriteLine(\"Sent: \" + msg.Body.ToString()); } sender.Close(); 10 session.Close(); connection.Close(); } } }", "<install-dir> \\bin\\Debug>simple_send \"amqp://guest:guest@localhost\" service_queue", "namespace SimpleRecv { using System; using Amqp; 1 class SimpleRecv { static void Main(string[] args) { string url = (args.Length > 0) ? args[0] : 2 \"amqp://guest:[email protected]:5672\"; string source = (args.Length > 1) ? args[1] : \"examples\"; 3 int count = (args.Length > 2) ? Convert.ToInt32(args[2]) : 10; 4 Address peerAddr = new Address(url); 5 Connection connection = new Connection(peerAddr); 6 Session session = new Session(connection); ReceiverLink receiver = new ReceiverLink(session, \"recv-1\", source); 7 for (int i = 0; i < count; i++) { Message msg = receiver.Receive(); 8 receiver.Accept(msg); 9 Console.WriteLine(\"Received: \" + msg.Body.ToString()); } receiver.Close(); 10 session.Close(); connection.Close(); } } }", "<install-dir> \\bin\\Debug>simple_recv \"amqp://guest:guest@localhost\" service_queue" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_.net_client/examples
Chapter 16. Bean Validator
Chapter 16. Bean Validator Only producer is supported The Validator component performs bean validation of the message body using the Java Bean Validation API (). Camel uses the reference implementation, which is Hibernate Validator . 16.1. Dependencies When using bean-validator with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-validator-starter</artifactId> </dependency> 16.2. URI format Where label is an arbitrary text value describing the endpoint. You can append query options to the URI in the following format, ?option=value&option=value&... 16.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 16.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 16.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 16.4. Component Options The Bean Validator component supports 8 options, which are listed below. Name Description Default Type ignoreXmlConfiguration (producer) Whether to ignore data from the META-INF/validation.xml file. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean constraintValidatorFactory (advanced) To use a custom ConstraintValidatorFactory. ConstraintValidatorFactory messageInterpolator (advanced) To use a custom MessageInterpolator. 
MessageInterpolator traversableResolver (advanced) To use a custom TraversableResolver. TraversableResolver validationProviderResolver (advanced) To use a custom ValidationProviderResolver. ValidationProviderResolver validatorFactory (advanced) Autowired To use a custom ValidatorFactory. ValidatorFactory 16.5. Endpoint Options The Bean Validator endpoint is configured using URI syntax: with the following path and query parameters: 16.5.1. Path Parameters (1 parameter) Name Description Default Type label (producer) Required Where label is an arbitrary text value describing the endpoint. String 16.5.2. Query Parameters (8 parameters) Name Description Default Type group (producer) To use a custom validation group. javax.validation.groups.Default String ignoreXmlConfiguration (producer) Whether to ignore data from the META-INF/validation.xml file. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean constraintValidatorFactory (advanced) To use a custom ConstraintValidatorFactory. ConstraintValidatorFactory messageInterpolator (advanced) To use a custom MessageInterpolator. MessageInterpolator traversableResolver (advanced) To use a custom TraversableResolver. TraversableResolver validationProviderResolver (advanced) To use a custom ValidationProviderResolver. ValidationProviderResolver validatorFactory (advanced) To use a custom ValidatorFactory. ValidatorFactory 16.6. OSGi deployment To use Hibernate Validator in the OSGi environment, use a dedicated ValidationProviderResolver implementation, such as org.apache.camel.component.bean.validator.HibernateValidationProviderResolver . The snippet below demonstrates this approach. Using HibernateValidationProviderResolver from("direct:test"). to("bean-validator://ValidationProviderResolverTest?validationProviderResolver=#myValidationProviderResolver"); <bean id="myValidationProviderResolver" class="org.apache.camel.component.bean.validator.HibernateValidationProviderResolver"/> If no custom ValidationProviderResolver is defined and the validator component has been deployed into the OSGi environment, the HibernateValidationProviderResolver will be automatically used. 16.7. Example Assume we have a Java bean with the following annotations Car.java public class Car { @NotNull private String manufacturer; @NotNull @Size(min = 5, max = 14, groups = OptionalChecks.class) private String licensePlate; // getter and setter } and an interface definition for our custom validation group OptionalChecks.java public interface OptionalChecks { } With the following Camel route, only the @NotNull constraints on the attributes manufacturer and licensePlate will be validated (Camel uses the default group javax.validation.groups.Default ).
from("direct:start") .to("bean-validator://x") .to("mock:end") If you want to check the constraints from the group OptionalChecks , you have to define the route like this from("direct:start") .to("bean-validator://x?group=OptionalChecks") .to("mock:end") If you want to check the constraints from both groups, you have to define a new interface first AllChecks.java @GroupSequence({Default.class, OptionalChecks.class}) public interface AllChecks { } and then your route definition should look like this from("direct:start") .to("bean-validator://x?group=AllChecks") .to("mock:end") And if you have to provide your own message interpolator, traversable resolver, and constraint validator factory, you have to write a route like this <bean id="myMessageInterpolator" class="my.ConstraintValidatorFactory" /> <bean id="myTraversableResolver" class="my.TraversableResolver" /> <bean id="myConstraintValidatorFactory" class="my.ConstraintValidatorFactory" /> from("direct:start") .to("bean-validator://x?group=AllChecks&messageInterpolator=#myMessageInterpolator &traversableResolver=#myTraversableResolver&constraintValidatorFactory=#myConstraintValidatorFactory") .to("mock:end") It's also possible to describe your constraints as XML and not as Java annotations. In this case, you have to provide the file META-INF/validation.xml which could look like this validation.xml <validation-config xmlns="http://jboss.org/xml/ns/javax/validation/configuration" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/configuration"> <default-provider>org.hibernate.validator.HibernateValidator</default-provider> <message-interpolator>org.hibernate.validator.engine.ResourceBundleMessageInterpolator</message-interpolator> <traversable-resolver>org.hibernate.validator.engine.resolver.DefaultTraversableResolver</traversable-resolver> <constraint-validator-factory>org.hibernate.validator.engine.ConstraintValidatorFactoryImpl</constraint-validator-factory> <constraint-mapping>/constraints-car.xml</constraint-mapping> </validation-config> and the constraints-car.xml file constraints-car.xml <constraint-mappings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/mapping validation-mapping-1.0.xsd" xmlns="http://jboss.org/xml/ns/javax/validation/mapping"> <default-package>org.apache.camel.component.bean.validator</default-package> <bean class="CarWithoutAnnotations" ignore-annotations="true"> <field name="manufacturer"> <constraint annotation="javax.validation.constraints.NotNull" /> </field> <field name="licensePlate"> <constraint annotation="javax.validation.constraints.NotNull" /> <constraint annotation="javax.validation.constraints.Size"> <groups> <value>org.apache.camel.component.bean.validator.OptionalChecks</value> </groups> <element name="min">5</element> <element name="max">14</element> </constraint> </field> </bean> </constraint-mappings> Here is the XML syntax for the example route definition for OrderedChecks . Note that the body should include an instance of a class to validate.
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <to uri="bean-validator://x?group=org.apache.camel.component.bean.validator.OrderedChecks"/> </route> </camelContext> </beans> 16.8. Spring Boot Auto-Configuration The component supports 9 options, which are listed below. Name Description Default Type camel.component.bean-validator.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.bean-validator.constraint-validator-factory To use a custom ConstraintValidatorFactory. The option is a javax.validation.ConstraintValidatorFactory type. ConstraintValidatorFactory camel.component.bean-validator.enabled Whether to enable auto configuration of the bean-validator component. This is enabled by default. Boolean camel.component.bean-validator.ignore-xml-configuration Whether to ignore data from the META-INF/validation.xml file. false Boolean camel.component.bean-validator.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.bean-validator.message-interpolator To use a custom MessageInterpolator. The option is a javax.validation.MessageInterpolator type. MessageInterpolator camel.component.bean-validator.traversable-resolver To use a custom TraversableResolver. The option is a javax.validation.TraversableResolver type. TraversableResolver camel.component.bean-validator.validation-provider-resolver To use a a custom ValidationProviderResolver. The option is a javax.validation.ValidationProviderResolver type. ValidationProviderResolver camel.component.bean-validator.validator-factory To use a custom ValidatorFactory. The option is a javax.validation.ValidatorFactory type. ValidatorFactory
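The examples above do not show how a route can react when validation fails. As a rough, illustrative sketch that is not part of the original chapter: when a constraint is violated, the component throws an org.apache.camel.component.bean.validator.BeanValidationException, which a route can handle with onException. The endpoint URIs, header name, and class name below are assumptions made for illustration.

import java.util.stream.Collectors;
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.bean.validator.BeanValidationException;

public class CarValidationRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Handle validation failures instead of letting the exchange fail
        onException(BeanValidationException.class)
            .handled(true)
            .process(exchange -> {
                BeanValidationException e = exchange.getProperty(
                        Exchange.EXCEPTION_CAUGHT, BeanValidationException.class);
                // Summarize the violated constraints in a header (header name is an example)
                String violations = e.getConstraintViolations().stream()
                        .map(v -> v.getPropertyPath() + ": " + v.getMessage())
                        .collect(Collectors.joining("; "));
                exchange.getMessage().setHeader("validationErrors", violations);
            })
            .to("mock:invalid");

        from("direct:start")
            .to("bean-validator://x?group=AllChecks")
            .to("mock:end");
    }
}

With this in place, sending an invalid Car to direct:start routes the exchange to mock:invalid together with a description of the failed constraints, while valid messages continue to mock:end.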
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-validator-starter</artifactId> </dependency>", "bean-validator:label[?options]", "bean-validator:label", "from(\"direct:test\"). to(\"bean-validator://ValidationProviderResolverTest?validationProviderResolver=#myValidationProviderResolver\");", "<bean id=\"myValidationProviderResolver\" class=\"org.apache.camel.component.bean.validator.HibernateValidationProviderResolver\"/>", "public class Car { @NotNull private String manufacturer; @NotNull @Size(min = 5, max = 14, groups = OptionalChecks.class) private String licensePlate; // getter and setter }", "public interface OptionalChecks { }", "from(\"direct:start\") .to(\"bean-validator://x\") .to(\"mock:end\")", "from(\"direct:start\") .to(\"bean-validator://x?group=OptionalChecks\") .to(\"mock:end\")", "@GroupSequence({Default.class, OptionalChecks.class}) public interface AllChecks { }", "from(\"direct:start\") .to(\"bean-validator://x?group=AllChecks\") .to(\"mock:end\")", "<bean id=\"myMessageInterpolator\" class=\"my.ConstraintValidatorFactory\" /> <bean id=\"myTraversableResolver\" class=\"my.TraversableResolver\" /> <bean id=\"myConstraintValidatorFactory\" class=\"my.ConstraintValidatorFactory\" />", "from(\"direct:start\") .to(\"bean-validator://x?group=AllChecks&messageInterpolator=#myMessageInterpolator &traversableResolver=#myTraversableResolver&constraintValidatorFactory=#myConstraintValidatorFactory\") .to(\"mock:end\")", "<validation-config xmlns=\"http://jboss.org/xml/ns/javax/validation/configuration\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://jboss.org/xml/ns/javax/validation/configuration\"> <default-provider>org.hibernate.validator.HibernateValidator</default-provider> <message-interpolator>org.hibernate.validator.engine.ResourceBundleMessageInterpolator</message-interpolator> <traversable-resolver>org.hibernate.validator.engine.resolver.DefaultTraversableResolver</traversable-resolver> <constraint-validator-factory>org.hibernate.validator.engine.ConstraintValidatorFactoryImpl</constraint-validator-factory> <constraint-mapping>/constraints-car.xml</constraint-mapping> </validation-config>", "<constraint-mappings xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://jboss.org/xml/ns/javax/validation/mapping validation-mapping-1.0.xsd\" xmlns=\"http://jboss.org/xml/ns/javax/validation/mapping\"> <default-package>org.apache.camel.component.bean.validator</default-package> <bean class=\"CarWithoutAnnotations\" ignore-annotations=\"true\"> <field name=\"manufacturer\"> <constraint annotation=\"javax.validation.constraints.NotNull\" /> </field> <field name=\"licensePlate\"> <constraint annotation=\"javax.validation.constraints.NotNull\" /> <constraint annotation=\"javax.validation.constraints.Size\"> <groups> <value>org.apache.camel.component.bean.validator.OptionalChecks</value> </groups> <element name=\"min\">5</element> <element name=\"max\">14</element> </constraint> </field> </bean> </constraint-mappings>", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <to 
uri=\"bean-validator://x?group=org.apache.camel.component.bean.validator.OrderedChecks\"/> </route> </camelContext> </beans>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-bean-validator-component-starter
19.7. Directory Operations
19.7. Directory Operations To improve the performance of directory operations of Red Hat Gluster Storage volumes, the maximum metadata (stat, xattr) caching time on the client side is increased to 10 minutes, without compromising the consistency of the cache. Significant performance improvements can be achieved in the following workloads by enabling metadata caching: Listing of directories (recursive) Creating files Deleting files Renaming files 19.7.1. Enabling Metadata Caching Enable metadata caching to improve the performance of directory operations. Execute the following commands from any one of the nodes on the trusted storage pool in the order mentioned below. Note If the majority of the workload is modifying the same set of files and directories simultaneously from multiple clients, then enabling metadata caching might not provide the desired performance improvement. Execute the following command to enable metadata caching and cache invalidation: This is a group set option which sets multiple volume options in a single command. To increase the number of files that can be cached, execute the following command: The value n is set to 50000. It can be increased if the number of active files in the volume is very high. Increasing this number increases the memory footprint of the brick processes.
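As a concrete illustration of the two commands above (the volume name myvol and the limit 200000 are examples only, not values taken from this guide):

# Enable the metadata-cache group option, which sets several volume options at once
gluster volume set myvol group metadata-cache
# Raise the number of inodes (files) that can be cached per brick
gluster volume set myvol network.inode-lru-limit 200000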
[ "gluster volume set < volname > group metadata-cache", "gluster volume set < VOLNAME > network.inode-lru-limit < n >" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-Directory_Operations
Chapter 6. Using PXE to Provision Hosts
Chapter 6. Using PXE to Provision Hosts You can provision bare metal instances with Satellite using one of the following methods: Unattended Provisioning New hosts are identified by a MAC address and Satellite Server provisions the host using a PXE boot process. Unattended Provisioning with Discovery New hosts use PXE boot to load the Satellite Discovery service. This service identifies hardware information about the host and lists it as an available host to provision. For more information, see Chapter 7, Configuring the Discovery Service . PXE-less Provisioning New hosts are provisioned with a boot disk image that Satellite Server generates. BIOS and UEFI Support With Red Hat Satellite, you can perform both BIOS and UEFI based PXE provisioning. Both BIOS and UEFI interfaces work as interpreters between the computer's operating system and firmware, initializing the hardware components and starting the operating system at boot time. For information about supported workflows, see Supported architectures and provisioning scenarios . In Satellite provisioning, the PXE loader option defines the DHCP filename option to use during provisioning. For BIOS systems, use the PXELinux BIOS option to enable a provisioned node to download the pxelinux.0 file over TFTP. For UEFI systems, use the PXEGrub2 UEFI option to enable a TFTP client to download grub2/grubx64.efi file, or use the PXEGrub2 UEFI HTTP option to enable an UEFI HTTP client to download grubx64.efi from Capsule with the HTTP Boot feature. For BIOS provisioning, you must associate a PXELinux template with the operating system. For UEFI provisioning, you must associate a PXEGrub2 template with the operating system. If you associate both PXELinux and PXEGrub2 templates, Satellite can deploy configuration files for both on a TFTP server, so that you can switch between PXE loaders easily. 6.1. Prerequisites for Bare Metal Provisioning The requirements for bare metal provisioning include: A Capsule Server managing the network for bare metal hosts. For unattended provisioning and discovery-based provisioning, Satellite Server requires PXE server settings. For more information about networking requirements, see Chapter 3, Configuring Networking . For more information about the Discovery service, Chapter 7, Configuring the Discovery Service . A bare metal host or a blank VM. You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in the Content Management Guide . Provide an activation key for host registration. For more information, see Creating An Activation Key in the Content Management guide. For information about the security token for unattended and PXE-less provisioning, see Section 6.2, "Configuring the Security Token Validity Duration" . 6.2. Configuring the Security Token Validity Duration When performing any kind of provisioning, as a security measure, Satellite automatically generates a unique token and adds this token to the kickstart URL in the PXE configuration file (PXELinux, Grub2). By default, the token is valid for 360 minutes. When you provision a host, ensure that you reboot the host within this time frame. If the token expires, it is no longer valid and you receive a 404 error and the operating system installer download fails. Procedure In the Satellite web UI, navigate to Administer > Settings , and click the Provisioning tab. Find the Token duration option and click the edit icon and edit the duration, or enter 0 to disable token generation. 
If token generation is disabled, an attacker can spoof client IP address and download kickstart from Satellite Server, including the encrypted root password. 6.3. Creating Hosts with Unattended Provisioning Unattended provisioning is the simplest form of host provisioning. You enter the host details on Satellite Server and boot your host. Satellite Server automatically manages the PXE configuration, organizes networking services, and provides the operating system and configuration for the host. This method of provisioning hosts uses minimal interaction during the process. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Click the Organization and Location tabs and change the context to match your requirements. From the Host Group list, select a host group that you want to use to populate the form. Click the Interface tab, and on the host's interface, click Edit . Verify that the fields are populated with values. Note in particular: The Name from the Host tab becomes the DNS name . Satellite Server automatically assigns an IP address for the new host. In the MAC address field, enter a MAC address for the host. This ensures the identification of the host during the PXE boot process. Ensure that Satellite Server automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system. Optional: Click Resolve in Provisioning template to check the new host can identify the right provisioning templates to use. For more information about associating provisioning templates, see Section 2.11, "Provisioning Templates" . Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host details. For more information about network interfaces, see Adding network interfaces . This creates the host entry and the relevant provisioning settings. This also includes creating the necessary directories and files for PXE booting the bare metal host. If you start the physical host and set its boot mode to PXE, the host detects the DHCP service of Satellite Server's integrated Capsule, receives HTTP endpoint of the Kickstart tree and installs the operating system. When the installation completes, the host also registers to Satellite Server using the activation key and installs the necessary configuration and management tools from the Satellite Client 6 repository. CLI procedure Create the host with the hammer host create command: Ensure the network interface options are set using the hammer host interface update command: 6.4. Creating Hosts with PXE-less Provisioning Some hardware does not provide a PXE boot interface. In Satellite, you can provision a host without PXE boot. This is also known as PXE-less provisioning and involves generating a boot ISO that hosts can use. Using this ISO, the host can connect to Satellite Server, boot the installation media, and install the operating system. Satellite also provides a PXE-less discovery service that operates without PXE-based services, such as DHCP and TFTP. For more information, see Section 7.7, "Implementing PXE-less Discovery" . 
Boot ISO Types There are the following types of boot ISOs: Full host image A boot ISO that contains the kernel and initial RAM disk image for the specific host. This image is useful if the host fails to chainload correctly. The provisioning template still downloads from Satellite Server. Subnet image A boot ISO that is not associated with a specific host. The ISO sends the host's MAC address to Capsule Server, which matches it against the host entry. The image does not store IP address details and requires access to a DHCP server on the network to bootstrap. This image is generic to all hosts with a provisioning NIC on the same subnet. The image is based on iPXE boot firmware; only a limited number of network cards are supported. Note The Full host image is based on SYSLINUX and Grub and works with most network cards. When using a Subnet image , see supported hardware on ipxe.org for a list of network card drivers expected to work with an iPXE-based boot disk. The Full host image contains a provisioning token; therefore, the generated image has a limited lifespan. For more information about configuring security tokens, read Section 6.2, "Configuring the Security Token Validity Duration" . To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name that you want to become the provisioned system's host name. Click the Organization and Location tabs and change the context to match your requirements. From the Host Group list, select a host group that you want to use to populate the form. Click the Interface tab, and on the host's interface, click Edit . Verify that the fields are populated with values. Note in particular: The Name from the Host tab becomes the DNS name . Satellite Server automatically assigns an IP address for the new host. In the MAC address field, enter a MAC address for the host. Ensure that Satellite Server automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system. Click Resolve in Provisioning Templates to check that the new host can identify the right provisioning templates to use. For more information about associating provisioning templates, see Section 2.11, "Provisioning Templates" . Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host details. This creates a host entry and the host details page appears. Download the boot disk from Satellite Server. For Full host image , on the host details page, click the vertical ellipsis and select Full host ' My_Host_Name ' image . For Subnet image , navigate to Infrastructure > Subnets , click the dropdown menu in the Actions column of the required subnet and select Subnet generic image . Write the ISO to a USB storage device using the dd utility or livecd-tools if required. When you start the host and boot from the ISO or the USB storage device, the host connects to Satellite Server and starts installing the operating system from its kickstart tree. When the installation completes, the host also registers to Satellite Server using the activation key and installs the necessary configuration and management tools from the Satellite Client 6 repository. CLI procedure Create the host using the hammer host create command.
Ensure that your network interface options are set using the hammer host interface update command. Download the boot disk from Satellite Server using the hammer bootdisk command: For Full host image : For Subnet image : This creates a boot ISO for your host to use. Write the ISO to a USB storage device using the dd utility or livecd-tools if required. When you start the physical host and boot from the ISO or the USB storage device, the host connects to Satellite Server and starts installing operating system from its kickstart tree. When the installation completes, the host also registers to Satellite Server using the activation key and installs the necessary configuration and management tools from the Satellite Client 6 repository. 6.5. Creating Hosts with UEFI HTTP Boot Provisioning You can provision hosts from Satellite using the UEFI HTTP Boot. This is the only method with which you can provision hosts in IPv6 network. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that you meet the requirements for HTTP booting. For more information, see HTTP Booting Requirements in Planning for Satellite . Procedure On Capsule that you use for provisioning, update the grub2-efi package to the latest version: Enable foreman-proxy-http , foreman-proxy-httpboot , and foreman-proxy-tftp features. In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Click the Organization and Location tabs and change the context to match your requirements. From the Host Group list, select a host group that you want to use to populate the form. Click the Interface tab, and on the host's interface, click Edit . Verify that the fields are populated with values. Note in particular: The Name from the Host tab becomes the DNS name . Satellite Server automatically assigns an IP address for the new host. In the MAC address field, enter a MAC address of the host's provisioning interface. This ensures the identification of the host during the PXE boot process. Ensure that Satellite Server automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system. From the PXE Loader list, select Grub2 UEFI HTTP . Optional: Click Resolve in Provisioning template to check the new host can identify the right provisioning templates to use. For more information about associating provisioning templates, see Section 2.13, "Creating Provisioning Templates" . Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host details. For more information about network interfaces, see Adding network interfaces . Set the host to boot in UEFI mode from network. Start the host. From the boot menu, select Kickstart default PXEGrub2 . This creates the host entry and the relevant provisioning settings. This also includes creating the necessary directories and files for UEFI booting the bare metal host. When you start the physical host and set its boot mode to UEFI HTTP, the host detects the defined DHCP service, receives HTTP endpoint of Capsule with the Kickstart tree and installs the operating system. 
When the installation completes, the host also registers to Satellite Server using the activation key and installs the necessary configuration and management tools from the Satellite Client 6 repository. CLI procedure On Capsule that you use for provisioning, update the grub2-efi package to the latest version: Enable foreman-proxy-http , foreman-proxy-httpboot , and foreman-proxy-tftp true features: Create the host with the hammer host create command. Ensure the network interface options are set using the hammer host interface update command: Set the host to boot in UEFI mode from network. Start the host. From the boot menu, select Kickstart default PXEGrub2 . This creates the host entry and the relevant provisioning settings. This also includes creating the necessary directories and files for UEFI booting the bare metal host. When you start the physical host and set its boot mode to UEFI HTTP, the host detects the defined DHCP service, receives HTTP endpoint of Capsule with the Kickstart tree and installs the operating system. When the installation completes, the host also registers to Satellite Server using the activation key and installs the necessary configuration and management tools from the Satellite Client 6 repository. 6.6. Deploying SSH Keys During Provisioning Use this procedure to deploy SSH keys added to a user during provisioning. For information on adding SSH keys to a user, see Managing SSH Keys for a User in the Administering Red Hat Satellite guide. Procedure In the Satellite web UI, navigate to Hosts > Provisioning Templates . Create a provisioning template, or clone and edit an existing template. For more information, see Section 2.13, "Creating Provisioning Templates" . In the template, click the Template tab. In the Template editor field, add the create_users snippet to the %post section: Select the Default checkbox. Click the Association tab. From the Application Operating Systems list, select an operating system. Click Submit to save the provisioning template. Create a host that is associated with the provisioning template or rebuild a host using the OS associated with the modified template. For more information, see Creating a Host in the Managing Hosts guide. The SSH keys of the Owned by user are added automatically when the create_users snippet is executed during the provisioning process. You can set Owned by to an individual user or a user group. If you set Owned by to a user group, the SSH keys of all users in the user group are added automatically.
[ "hammer host create --name \" My_Unattended_Host \" --organization \" My_Organization \" --location \" My_Location \" --hostgroup \" My_Host_Group \" --mac \" aa:aa:aa:aa:aa:aa \" --build true --enabled true --managed true", "hammer host interface update --host \"test1\" --managed true --primary true --provision true", "hammer host create --name \" My_Host_Name \" --organization \" My_Organization \" --location \" My_Location \" --hostgroup \" My_Host_Group \" --mac \" aa:aa:aa:aa:aa:aa \" --build true --enabled true --managed true", "hammer host interface update --host \" My_Host_Name \" --managed true --primary true --provision true", "hammer bootdisk host --host My_Host_Name.example.com --full true", "hammer bootdisk subnet --subnet My_Subnet_Name", "satellite-maintain packages update grub2-efi", "satellite-installer --scenario satellite --foreman-proxy-http true --foreman-proxy-httpboot true --foreman-proxy-tftp true", "satellite-maintain packages update grub2-efi", "satellite-installer --scenario satellite --foreman-proxy-http true --foreman-proxy-httpboot true --foreman-proxy-tftp true", "hammer host create --name \" My_Host \" --build true --enabled true --hostgroup \" My_Host_Group \" --location \" My_Location \" --mac \" aa:aa:aa:aa:aa:aa \" --managed true --organization \" My_Organization \" --pxe-loader \"Grub2 UEFI HTTP\"", "hammer host interface update --host \" My_Host \" --managed true --primary true --provision true", "<%= snippet('create_users') %>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/using_pxe_to_provision_hosts_provisioning
Chapter 6. Network Policy
Chapter 6. Network Policy As a user with the admin role, you can create a network policy for the netobserv namespace to secure inbound access to the Network Observability Operator. 6.1. Configuring an ingress network policy by using the FlowCollector custom resource You can configure the FlowCollector custom resource (CR) to deploy an ingress network policy for Network Observability by setting the spec.networkPolicy.enable specification to true . By default, the specification is false . If you have installed Loki, Kafka or any exporter in a different namespace that also has a network policy, you must ensure that the Network Observability components can communicate with them. Consider the following about your setup: Connection to Loki (as defined in the FlowCollector CR spec.loki parameter) Connection to Kafka (as defined in the FlowCollector CR spec.kafka parameter) Connection to any exporter (as defined in the FlowCollector CR spec.exporters parameter) If you are using Loki and including it in the policy target, connection to an external object storage (as defined in your LokiStack related secret) Procedure In the web console, go to the Operators → Installed Operators page. Under the Provided APIs heading for Network Observability , select Flow Collector . Select cluster , and then select the YAML tab. Configure the FlowCollector CR. A sample configuration is as follows: Example FlowCollector CR for network policy apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv networkPolicy: enable: true 1 additionalNamespaces: ["openshift-console", "openshift-monitoring"] 2 # ... 1 By default, the enable value is false . 2 Default values are ["openshift-console", "openshift-monitoring"] . 6.2. Creating a network policy for Network Observability If you want to further customize the network policies for the netobserv and netobserv-privileged namespaces, you must disable the managed installation of the policy from the FlowCollector CR, and create your own. You can use the network policy resources that are enabled from the FlowCollector CR as a starting point for the procedure that follows: Example netobserv network policy apiVersion: networking.k8s.io/v1 kind: NetworkPolicy spec: ingress: - from: - podSelector: {} - namespaceSelector: matchLabels: kubernetes.io/metadata.name: netobserv-privileged - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console ports: - port: 9001 protocol: TCP - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress Example netobserv-privileged network policy apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: netobserv namespace: netobserv-privileged spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress Procedure Navigate to Networking → NetworkPolicies . Select the netobserv project from the Project dropdown menu. Name the policy. For this example, the policy name is allow-ingress . Click Add ingress rule three times to create three ingress rules. Specify the following in the form: Make the following specifications for the first Ingress rule : From the Add allowed source dropdown menu, select Allow pods from the same namespace . Make the following specifications for the second Ingress rule : From the Add allowed source dropdown menu, select Allow pods from inside the cluster . Click + Add namespace selector .
Add the label, kubernetes.io/metadata.name , and the selector, openshift-console . Make the following specifications for the third Ingress rule : From the Add allowed source dropdown menu, select Allow pods from inside the cluster . Click + Add namespace selector . Add the label, kubernetes.io/metadata.name , and the selector, openshift-monitoring . Verification Navigate to Observe → Network Traffic . View the Traffic Flows tab, or any tab, to verify that the data is displayed. Navigate to Observe → Dashboards . In the NetObserv/Health selection, verify that the flows are being ingested and sent to Loki, which is represented in the first graph. Additional resources Creating a network policy using the CLI
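If you prefer to manage these policies from the CLI instead of the web console form, the following is a minimal sketch of the workflow: disable the policy that the FlowCollector CR manages, then apply and verify your own. The file name my-netobserv-policy.yaml is a placeholder for a policy adapted from the examples above, and the FlowCollector CR is assumed to be named cluster, as in the earlier sample.

```
# Stop the FlowCollector CR from managing the network policy so it does not conflict with yours
oc patch flowcollector cluster --type=merge -p '{"spec":{"networkPolicy":{"enable":false}}}'

# Apply your customized policy in the netobserv namespace (placeholder file name)
oc apply -f my-netobserv-policy.yaml -n netobserv

# Confirm the policies that are now in effect in both Network Observability namespaces
oc get networkpolicy -n netobserv
oc get networkpolicy -n netobserv-privileged
```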
[ "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv networkPolicy: enable: true 1 additionalNamespaces: [\"openshift-console\", \"openshift-monitoring\"] 2", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy spec: ingress: - from: - podSelector: {} - namespaceSelector: matchLabels: kubernetes.io/metadata.name: netobserv-privileged - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console ports: - port: 9001 protocol: TCP - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: netobserv namespace: netobserv-privileged spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_observability/network-observability-network-policy
Operators
Operators OpenShift Container Platform 4.13 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml", "annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6", "dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml", "catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json", "_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }", "#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }", "#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. 
skipRange?: string & !=\"\" }", "#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }", "#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }", "#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }", "#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }", "#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }", "name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317", "name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm alpha generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . 
docker push \"USDindexImage\"", "apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain", "oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF", "bundle.core.rukpak.io/combo-tag-ref created", "oc patch bundle combo-tag-ref --type='merge' -p '{\"spec\":{\"source\":{\"git\":{\"ref\":{\"tag\":\"v0.0.3\"}}}}}'", "Error from server (bundle.spec is immutable): admission webhook \"vbundles.core.rukpak.io\" denied the request: bundle.spec is immutable", "manifests ├── namespace.yaml ├── cluster_role.yaml ├── role.yaml ├── serviceaccount.yaml ├── cluster_role_binding.yaml ├── role_binding.yaml └── deployment.yaml", "apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain", "\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace", "registry.redhat.io/redhat/redhat-operator-index:v4.12", "registry.redhat.io/redhat/redhat-operator-index:v4.13", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.26 priority: -400 publisher: Example Org", "quay.io/example-org/example-catalog:v1.26", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace", "apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] 
conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created", "packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1", "olm.skipRange: <semver_range>", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'", "properties: - type: olm.kubeversion value: version: \"1.16.0\"", "properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource", "dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'", "type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: 
'>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue", "apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100", "dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"", "attenuated service account query failed - more than one operator group(s) are managing this namespace count=2", "apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"", "apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]", "registry.redhat.io/redhat/redhat-operator-index:v4.8", "registry.redhat.io/redhat/redhat-operator-index:v4.9", "apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: name: v1 4 scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9", "oc create -f <file_name>.yaml", "/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/", 
"/apis/stable.example.com/v1/namespaces/*/crontabs/", "kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13", "oc create -f <file_name>.yaml", "apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image", "oc create -f <file_name>.yaml", "oc get <kind>", "oc get crontab", "NAME KIND my-new-cron-object CronTab.v1.stable.example.com", "oc get crontabs", "oc get crontab", "oc get ct", "oc get <kind> -o yaml", "oc get ct -o yaml", "apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2", "apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image", "oc create -f <file_name>.yaml", "oc get <kind>", "oc get crontab", "NAME KIND my-new-cron-object CronTab.v1.stable.example.com", "oc get crontabs", "oc get crontab", "oc get ct", "oc get <kind> -o yaml", "oc get ct -o yaml", "apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2", "oc get csv", "oc policy add-role-to-user edit <user> -n <target_project>", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: 
<subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "oc apply -f sub.yaml", "oc describe packagemanifests <operator_name> -n <catalog_namespace>", "oc describe packagemanifests quay-operator -n openshift-marketplace", "Name: quay-operator Namespace: operator-marketplace Labels: catalog=redhat-operators catalog-namespace=openshift-marketplace hypershift.openshift.io/managed=true operatorframework.io/arch.amd64=supported operatorframework.io/os.linux=supported provider=Red Hat provider-url= Annotations: <none> API Version: packages.operators.coreos.com/v1 Kind: PackageManifest Current CSV: quay-operator.v3.7.11 Entries: Name: quay-operator.v3.7.11 Version: 3.7.11 Name: quay-operator.v3.7.10 Version: 3.7.10 Name: quay-operator.v3.7.9 Version: 3.7.9 Name: quay-operator.v3.7.8 Version: 3.7.8 Name: quay-operator.v3.7.7 Version: 3.7.7 Name: quay-operator.v3.7.6 Version: 3.7.6 Name: quay-operator.v3.7.5 Version: 3.7.5 Name: quay-operator.v3.7.4 Version: 3.7.4 Name: quay-operator.v3.7.3 Version: 3.7.3 Name: quay-operator.v3.7.2 Version: 3.7.2 Name: quay-operator.v3.7.1 Version: 3.7.1 Name: quay-operator.v3.7.0 Version: 3.7.0 Name: stable-3.7 Current CSV: quay-operator.v3.8.5 Entries: Name: quay-operator.v3.8.5 Version: 3.8.5 Name: quay-operator.v3.8.4 Version: 3.8.4 Name: quay-operator.v3.8.3 Version: 3.8.3 Name: quay-operator.v3.8.2 Version: 3.8.2 Name: quay-operator.v3.8.1 Version: 3.8.1 Name: quay-operator.v3.8.0 Version: 3.8.0 Name: stable-3.8 Default Channel: stable-3.8 Package Name: quay-operator", "oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml", "oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: stable-3.7 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.7.10 2", "oc apply -f sub.yaml", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> 
namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "oc apply -f sub.yaml", "oc describe packagemanifests <operator_name> -n <catalog_namespace>", "oc describe packagemanifests quay-operator -n openshift-marketplace", "Name: quay-operator Namespace: operator-marketplace Labels: catalog=redhat-operators catalog-namespace=openshift-marketplace hypershift.openshift.io/managed=true operatorframework.io/arch.amd64=supported operatorframework.io/os.linux=supported provider=Red Hat provider-url= Annotations: <none> API Version: packages.operators.coreos.com/v1 Kind: PackageManifest Current CSV: quay-operator.v3.7.11 Entries: Name: quay-operator.v3.7.11 Version: 3.7.11 Name: quay-operator.v3.7.10 Version: 3.7.10 Name: quay-operator.v3.7.9 Version: 3.7.9 Name: quay-operator.v3.7.8 Version: 3.7.8 Name: quay-operator.v3.7.7 Version: 3.7.7 Name: quay-operator.v3.7.6 Version: 3.7.6 Name: quay-operator.v3.7.5 Version: 3.7.5 Name: quay-operator.v3.7.4 Version: 3.7.4 Name: quay-operator.v3.7.3 Version: 3.7.3 Name: quay-operator.v3.7.2 Version: 3.7.2 Name: quay-operator.v3.7.1 Version: 3.7.1 Name: quay-operator.v3.7.0 Version: 3.7.0 Name: stable-3.7 Current CSV: quay-operator.v3.8.5 Entries: Name: quay-operator.v3.8.5 Version: 3.8.5 Name: quay-operator.v3.8.4 Version: 3.8.4 Name: quay-operator.v3.8.3 Version: 3.8.3 Name: quay-operator.v3.8.2 Version: 3.8.2 Name: quay-operator.v3.8.1 Version: 3.8.1 Name: quay-operator.v3.8.0 Version: 3.8.0 Name: stable-3.8 Default Channel: stable-3.8 Package Name: quay-operator", "oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml", "oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: stable-3.7 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.7.10 2", "oc apply -f sub.yaml", "apiVersion: v1 kind: Namespace metadata: name: team1-operator", "oc create -f team1-operator.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: team1-operatorgroup namespace: team1-operator spec: targetNamespaces: - team1 1", "oc create -f team1-operatorgroup.yaml", "apiVersion: v1 kind: Namespace metadata: name: global-operators", "oc create -f global-operators.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: global-operatorgroup namespace: global-operators", "oc create -f global-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 
nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>", "oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV", "currentCSV: serverless-operator.v1.28.0", "oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless", "subscription.operators.coreos.com \"serverless-operator\" deleted", "oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless", "clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted", "ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"", "rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host", "oc get sub,csv -n <namespace>", "NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded", "oc delete subscription <subscription_name> -n <namespace>", "oc delete csv <csv_name> -n <namespace>", "oc get job,configmap -n openshift-marketplace", "NAME COMPLETIONS DURATION AGE 
job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s", "oc delete job <job_name> -n openshift-marketplace", "oc delete configmap <configmap_name> -n openshift-marketplace", "oc get sub,csv,installplan -n <namespace>", "oc get csvs -n openshift", "oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true 1 EOF", "oc get events", "LAST SEEN TYPE REASON OBJECT MESSAGE 85s Warning DisabledCopiedCSVs clusterserviceversion/my-csv.v1.0.0 CSV copying disabled for operators/my-csv.v1.0.0", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide", "oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2", "- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c", "apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 
1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc edit operatorcondition <name>", "apiVersion: operators.coreos.com/v2 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"", "cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF", "cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF", "cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF", "cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF", "cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF", "kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: 
[\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]", "kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]", "apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23", "apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: Failed", "mkdir <catalog_dir>", "opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry:v4.13 1", ". 1 ├── <catalog_dir> 2 └── <catalog_dir>.Dockerfile 3", "opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <catalog_dir>/index.yaml 6", "opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <catalog_dir>/index.yaml 2", "--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1", "opm validate <catalog_dir>", "echo USD?", "0", "podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>", "podman login <registry>", "podman push <registry>/<namespace>/<catalog_image_name>:<tag>", "opm render <registry>/<namespace>/<catalog_image_name>:<tag> -o yaml > <catalog_dir>/index.yaml", "--- defaultChannel: release-2.7 icon: base64data: <base64_string> mediatype: image/svg+xml name: example-operator schema: olm.package --- entries: - name: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.0' - name: example-operator.v2.7.1 replaces: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.1' - name: example-operator.v2.7.2 replaces: example-operator.v2.7.1 skipRange: '>=2.6.0 <2.7.2' - name: example-operator.v2.7.3 replaces: example-operator.v2.7.2 skipRange: '>=2.6.0 <2.7.3' - name: example-operator.v2.7.4 replaces: example-operator.v2.7.3 skipRange: '>=2.6.0 <2.7.4' name: release-2.7 package: example-operator schema: olm.channel --- image: example.com/example-inc/example-operator-bundle@sha256:<digest> name: example-operator.v2.7.0 package: example-operator properties: - type: olm.gvk value: group: example-group.example.io kind: MyObject version: v1alpha1 - type: olm.gvk value: group: example-group.example.io kind: MyOtherObject version: v1beta1 - type: olm.package value: packageName: example-operator version: 2.7.0 - type: olm.bundle.object value: data: <base64_string> - type: olm.bundle.object value: data: <base64_string> relatedImages: - image: example.com/example-inc/example-related-image@sha256:<digest> name: example-related-image schema: olm.bundle ---", "opm validate <catalog_dir>", "podman build . 
-f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>", "podman push <registry>/<namespace>/<catalog_image_name>:<tag>", "opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3", "podman login <registry>", "podman push <registry>/<namespace>/<index_image_name>:<tag>", "opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4", "opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.13 --tag mirror.example.com/abc/abc-redhat-operator-index:4.13.1 --pull-tool podman", "podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>", "oc get packagemanifests -n openshift-marketplace", "podman login <target_registry>", "podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.13", "Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.13 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051", "grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out", "{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }", "opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.13 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.13 4", "podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.13", "opm migrate <registry_image> <fbc_directory>", "opm generate dockerfile <fbc_directory> --binary-image registry.redhat.io/openshift4/ose-operator-registry:v4.13", "opm index add --binary-image registry.redhat.io/openshift4/ose-operator-registry:v4.13 --from-index <your_registry_image> --bundles \"\" -t \\<your_registry_image>", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-catsrc namespace: my-ns spec: sourceType: grpc grpcPodConfig: securityContextConfig: legacy image: my-image:latest", "apiVersion: v1 kind: Namespace metadata: labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" 1 openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: baseline 2 name: \"<namespace_name>\"", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 \"<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/<index_image_name>:<tag> 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE 
my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "podman login <registry>:<port>", "{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"fegdsRib21iMQ==\" }, \"https://quay.io/my-namespace/my-user/my-image\": { \"auth\": \"eWfjwsDdfsa221==\" }, \"https://quay.io/my-namespace/my-user\": { \"auth\": \"feFweDdscw34rR==\" }, \"https://quay.io/my-namespace\": { \"auth\": \"frwEews4fescyq==\" } } }", "{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }", "{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }", "oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" grpcPodConfig: securityContextConfig: <security_mode> 2 image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m", "oc extract secret/pull-secret -n openshift-config --confirm", "cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson", "oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson", "oc get sa -n <tenant_namespace> 1", "NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1", "oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.13 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "oc patch operatorhub cluster -p '{\"spec\": {\"disableAllDefaultSources\": true}}' --type=merge", "grpcPodConfig: nodeSelector: custom_label: <label>", "grpcPodConfig: priorityClassName: <priority_class>", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: example-catalog namespace: openshift-marketplace annotations: operatorframework.io/priorityclass: system-cluster-critical", "grpcPodConfig: tolerations: - key: 
\"<key_name>\" operator: \"<operator_type>\" value: \"<value>\" effect: \"<effect>\"", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" name: cluster spec: featureSet: TechPreviewNoUpgrade 1", "apiVersion: platform.openshift.io/v1alpha1 kind: PlatformOperator metadata: name: service-mesh-po spec: package: name: servicemeshoperator", "oc get platformoperator service-mesh-po -o yaml", "status: activeBundleDeployment: name: service-mesh-po conditions: - lastTransitionTime: \"2022-10-24T17:24:40Z\" message: Successfully applied the service-mesh-po BundleDeployment resource reason: InstallSuccessful status: \"True\" 1 type: Installed", "oc get clusteroperator platform-operators-aggregated -o yaml", "status: conditions: - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"False\" type: Progressing - lastTransitionTime: \"2022-10-24T17:43:26Z\" status: \"False\" type: Degraded - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"True\" type: Available", "apiVersion: platform.openshift.io/v1alpha1 kind: PlatformOperator metadata: name: service-mesh-po spec: package: name: servicemeshoperator", "oc apply -f service-mesh-po.yaml", "error: resource mapping not found for name: \"service-mesh-po\" namespace: \"\" from \"service-mesh-po.yaml\": no matches for kind \"PlatformOperator\" in version \"platform.openshift.io/v1alpha1\" ensure CRDs are installed first", "oc get platformoperator service-mesh-po -o yaml", "status: activeBundleDeployment: name: service-mesh-po conditions: - lastTransitionTime: \"2022-10-24T17:24:40Z\" message: Successfully applied the service-mesh-po BundleDeployment resource reason: InstallSuccessful status: \"True\" 1 type: Installed", "oc get clusteroperator platform-operators-aggregated -o yaml", "status: conditions: - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"False\" type: Progressing - lastTransitionTime: \"2022-10-24T17:43:26Z\" status: \"False\" type: Degraded - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"True\" type: Available", "oc get platformoperator", "oc delete platformoperator quay-operator", "platformoperator.platform.openshift.io \"quay-operator\" deleted", "oc get ns quay-operator-system", "Error from server (NotFound): namespaces \"quay-operator-system\" not found", "oc get co platform-operators-aggregated", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE platform-operators-aggregated 4.13.0-0 True False False 70s", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME 
DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc get clusteroperators", "oc get pod -n <operator_namespace>", "oc describe pod <operator_pod_name> -n <operator_namespace>", "oc debug node/my-node", "chroot /host", "crictl ps", "crictl ps --name network-operator", "oc get pods -n <operator_namespace>", "oc logs pod/<pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master", "oc patch --type=merge 
--patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "true", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "false", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"", "rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host", "oc get sub,csv -n <namespace>", "NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded", "oc delete subscription <subscription_name> -n <namespace>", "oc delete csv <csv_name> -n <namespace>", "oc get job,configmap -n openshift-marketplace", "NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s", "oc delete job <job_name> -n openshift-marketplace", "oc delete configmap <configmap_name> -n openshift-marketplace", "oc get sub,csv,installplan -n <namespace>", "message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'", "oc get namespaces", "operator-ns-1 Terminating", "oc get crds", "oc delete crd <crd_name>", "oc get EtcdCluster -n <namespace_name>", "oc get EtcdCluster --all-namespaces", "oc delete <cr_name> <cr_instance_name> -n <namespace_name>", "oc get namespace <namespace_name>", "oc get sub,csv,installplan -n <namespace>", "tar xvf operator-sdk-v1.28.0-ocp-linux-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.28.0-ocp\",", "tar xvf operator-sdk-v1.28.0-ocp-darwin-x86_64.tar.gz", "tar xvf operator-sdk-v1.28.0-ocp-darwin-aarch64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.28.0-ocp\",", "mkdir memcached-operator", "cd memcached-operator", "operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator", "operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "oc logs 
deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system", "oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "make undeploy", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "export GO111MODULE=on", "operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator", "domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}", "mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})", "mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})", "var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })", "operator-sdk edit --multigroup=true", "domain: example.com layout: go.kubebuilder.io/v3 multigroup: true", "operator-sdk create api --group=cache --version=v1 --kind=Memcached", "Create Resource [y/n] y Create Controller [y/n] y", "Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go", "// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }", "make generate", "make manifests", "/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. 
// TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, 
Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }", "import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }", "func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }", "import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }", "// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil", "import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil", "// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }", "import ( \"github.com/operator-framework/operator-lib/proxy\" )", "for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) 
}", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make install run", "2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project memcached-operator-system", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get memcached/memcached-sample -o yaml", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7", "oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m", "oc delete -f config/samples/cache_v1_memcached.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "... containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13 1 ...", "k8s.io/api v0.26.2 k8s.io/apiextensions-apiserver v0.26.2 k8s.io/apimachinery v0.26.2 k8s.io/cli-runtime v0.26.2 k8s.io/client-go v0.26.2 k8s.io/kubectl v0.26.2 sigs.k8s.io/controller-runtime v0.14.5 sigs.k8s.io/controller-tools v0.11.3 sigs.k8s.io/kubebuilder/v3 v3.9.1", "go mod tidy", "- build: generate fmt vet ## Build manager binary. 
+ build: manifests generate fmt vet ## Build manager binary.", "mkdir memcached-operator", "cd memcached-operator", "operator-sdk init --plugins=ansible --domain=example.com", "operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system", "I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}", "oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "make undeploy", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "operator-sdk init --plugins=ansible --domain=example.com", "domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"", "operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1", "--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211", "--- defaults file for Memcached size: 1", "apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3", "env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make install run", "{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} 
{\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project memcached-operator-system", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get memcached/memcached-sample -o yaml", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7", "oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m", "oc delete -f config/samples/cache_v1_memcached.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "FROM registry.redhat.io/openshift4/ose-ansible-operator:v4.13 1", "... 
containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13 1 ...", ".PHONY: run ANSIBLE_ROLES_PATH?=\"USD(shell pwd)/roles\" run: ansible-operator ## Run against the configured Kubernetes cluster in ~/.kube/config USD(ANSIBLE_OPERATOR) run", "- name: kubernetes.core version: \"2.3.1\"", "- name: kubernetes.core version: \"2.4.0\"", "apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"", "- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false", "- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False", "apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"", "{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }", "--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"", "sudo dnf install ansible", "pip3 install openshift", "ansible-galaxy collection install community.kubernetes", "ansible-galaxy collection install -r requirements.yml", "--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2", "--- state: present", "--- - hosts: localhost roles: - <kind>", "ansible-playbook playbook.yml", "[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "oc get configmaps", "NAME DATA AGE example-config 0 2m1s", "ansible-playbook playbook.yml --extra-vars state=absent", "[WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "oc get configmaps", "apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"", "make install", "/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created", "make run", "/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}", "apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"", "oc apply -f config/samples/<gvk>.yaml", "oc get configmaps", "NAME STATUS AGE example-config Active 3s", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent", "oc apply -f config/samples/<gvk>.yaml", "oc get configmap", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "oc logs 
deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2", "{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}", "containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"", "apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4", "status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running", "- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false", "- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data", "collections: - operator_sdk.util", "k8s_status: status: 
key1: value1", "mkdir nginx-operator", "cd nginx-operator", "operator-sdk init --plugins=helm", "operator-sdk create api --group demo --version v1 --kind Nginx", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample", "oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system", "oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system", "oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system", "make undeploy", "mkdir -p USDHOME/projects/nginx-operator", "cd USDHOME/projects/nginx-operator", "operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx", "operator-sdk init --plugins helm --help", "domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"", "Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080", "- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY", "proxy: http: \"\" https: \"\" no_proxy: \"\"", "containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make install run", "{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME 
READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project nginx-operator-system", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3", "oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample", "oc apply -f config/samples/demo_v1_nginx.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get nginx/nginx-sample -o yaml", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7", "oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m", "oc delete -f config/samples/demo_v1_nginx.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "FROM registry.redhat.io/openshift4/ose-helm-operator:v4.13 1", "... containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13 1 ...", "apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2", "{{ .Values.replicaCount }}", "oc get Tomcats --all-namespaces", "mkdir -p USDHOME/github.com/example/memcached-operator", "cd USDHOME/github.com/example/memcached-operator", "operator-sdk init --plugins=hybrid.helm.sdk.operatorframework.io --project-version=\"3\" --domain my.domain --repo=github.com/example/memcached-operator", "operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --group cache --version v1 --kind Memcached", "operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help", "Use the 'create api' subcommand to add watches to this file. - group: cache.my.domain version: v1 kind: Memcached chart: helm-charts/memcached #+kubebuilder:scaffold:watch", "// Operator's main.go // With the help of helpers provided in the library, the reconciler can be // configured here before starting the controller with this reconciler. 
reconciler := reconciler.New( reconciler.WithChart(*chart), reconciler.WithGroupVersionKind(gvk), ) if err := reconciler.SetupWithManager(mgr); err != nil { panic(fmt.Sprintf(\"unable to create reconciler: %s\", err)) }", "operator-sdk create api --group=cache --version v1 --kind MemcachedBackup --resource --controller --plugins=go/v3", "Create Resource [y/n] y Create Controller [y/n] y", "// MemcachedBackupSpec defines the desired state of MemcachedBackup type MemcachedBackupSpec struct { // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster // Important: Run \"make\" to regenerate code after modifying this file //+kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedBackupStatus defines the observed state of MemcachedBackup type MemcachedBackupStatus struct { // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster // Important: Run \"make\" to regenerate code after modifying this file // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }", "make generate", "make manifests", "for _, w := range ws { // Register controller with the factory reconcilePeriod := defaultReconcilePeriod if w.ReconcilePeriod != nil { reconcilePeriod = w.ReconcilePeriod.Duration } maxConcurrentReconciles := defaultMaxConcurrentReconciles if w.MaxConcurrentReconciles != nil { maxConcurrentReconciles = *w.MaxConcurrentReconciles } r, err := reconciler.New( reconciler.WithChart(*w.Chart), reconciler.WithGroupVersionKind(w.GroupVersionKind), reconciler.WithOverrideValues(w.OverrideValues), reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources), reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles), reconciler.WithReconcilePeriod(reconcilePeriod), reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...), reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...), reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...), )", "// Setup manager with Go API if err = (&controllers.MemcachedBackupReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), }).SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"MemcachedBackup\") os.Exit(1) } // Setup manager with Helm API for _, w := range ws { if err := r.SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"Helm\") os.Exit(1) } setupLog.Info(\"configured watch\", \"gvk\", w.GroupVersionKind, \"chartPath\", w.ChartPath, \"maxConcurrentReconciles\", maxConcurrentReconciles, \"reconcilePeriod\", reconcilePeriod) } // Start the manager if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { setupLog.Error(err, \"problem running manager\") os.Exit(1) }", "--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: manager-role rules: - apiGroups: - \"\" resources: - namespaces verbs: - get - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/finalizers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - \"\" resources: - pods - services - services/finalizers - endpoints - persistentvolumeclaims - events - configmaps - secrets - 
serviceaccounts verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/status verbs: - get - patch - update - apiGroups: - policy resources: - events - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcacheds - memcacheds/status - memcacheds/finalizers verbs: - create - delete - get - list - patch - update - watch", "make install run", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "oc project <project_name>-system", "apiVersion: cache.my.domain/v1 kind: Memcached metadata: name: memcached-sample spec: # Default values copied from <project_dir>/helm-charts/memcached/values.yaml affinity: {} autoscaling: enabled: false maxReplicas: 100 minReplicas: 1 targetCPUUtilizationPercentage: 80 fullnameOverride: \"\" image: pullPolicy: IfNotPresent repository: nginx tag: \"\" imagePullSecrets: [] ingress: annotations: {} className: \"\" enabled: false hosts: - host: chart-example.local paths: - path: / pathType: ImplementationSpecific tls: [] nameOverride: \"\" nodeSelector: {} podAnnotations: {} podSecurityContext: {} replicaCount: 3 resources: {} securityContext: {} service: port: 80 type: ClusterIP serviceAccount: annotations: {} create: true name: \"\" tolerations: []", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 18m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 18m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 18m", "apiVersion: cache.my.domain/v1 kind: MemcachedBackup metadata: name: memcachedbackup-sample spec: size: 2", "oc apply -f config/samples/cache_v1_memcachedbackup.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE memcachedbackup-sample-8649699989-4bbzg 1/1 Running 0 22m memcachedbackup-sample-8649699989-mq6mx 1/1 Running 0 22m", "oc delete -f config/samples/cache_v1_memcached.yaml", "oc delete -f config/samples/cache_v1_memcachedbackup.yaml", "make undeploy", "... containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13 1 ...", "mkdir memcached-operator", "cd memcached-operator", "operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator", "operator-sdk create api --plugins quarkus --group cache --version v1 --kind Memcached", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system", "oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system", "make undeploy", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator", "domain: example.com layout: - quarkus.javaoperatorsdk.io/v1-alpha projectName: memcached-operator version: \"3\"", "operator-sdk create api --plugins=quarkus \\ 1 --group=cache \\ 2 --version=v1 \\ 3 --kind=Memcached 4", "tree", ". 
├── Makefile ├── PROJECT ├── pom.xml └── src └── main ├── java │ └── com │ └── example │ ├── Memcached.java │ ├── MemcachedReconciler.java │ ├── MemcachedSpec.java │ └── MemcachedStatus.java └── resources └── application.properties 6 directories, 8 files", "public class MemcachedSpec { private Integer size; public Integer getSize() { return size; } public void setSize(Integer size) { this.size = size; } }", "import java.util.ArrayList; import java.util.List; public class MemcachedStatus { // Add Status information here // Nodes are the names of the memcached pods private List<String> nodes; public List<String> getNodes() { if (nodes == null) { nodes = new ArrayList<>(); } return nodes; } public void setNodes(List<String> nodes) { this.nodes = nodes; } }", "@Version(\"v1\") @Group(\"cache.example.com\") public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}", "mvn clean install", "cat target/kubernetes/memcacheds.cache.example.com-v1.yaml", "Generated by Fabric8 CRDGenerator, manual edits might get overwritten! apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: memcacheds.cache.example.com spec: group: cache.example.com names: kind: Memcached plural: memcacheds singular: memcached scope: Namespaced versions: - name: v1 schema: openAPIV3Schema: properties: spec: properties: size: type: integer type: object status: properties: nodes: items: type: string type: array type: object type: object served: true storage: true subresources: status: {}", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: # Add spec fields here size: 1", "<dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.2.2</version> </dependency>", "package com.example; import io.fabric8.kubernetes.client.KubernetesClient; import io.javaoperatorsdk.operator.api.reconciler.Context; import io.javaoperatorsdk.operator.api.reconciler.Reconciler; import io.javaoperatorsdk.operator.api.reconciler.UpdateControl; import io.fabric8.kubernetes.api.model.ContainerBuilder; import io.fabric8.kubernetes.api.model.ContainerPortBuilder; import io.fabric8.kubernetes.api.model.LabelSelectorBuilder; import io.fabric8.kubernetes.api.model.ObjectMetaBuilder; import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder; import io.fabric8.kubernetes.api.model.Pod; import io.fabric8.kubernetes.api.model.PodSpecBuilder; import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder; import io.fabric8.kubernetes.api.model.apps.Deployment; import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder; import org.apache.commons.collections.CollectionUtils; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.stream.Collectors; public class MemcachedReconciler implements Reconciler<Memcached> { private final KubernetesClient client; public MemcachedReconciler(KubernetesClient client) { this.client = client; } // TODO Fill in the rest of the reconciler @Override public UpdateControl<Memcached> reconcile( Memcached resource, Context context) { // TODO: fill in logic Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get(); if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); } 
int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize(); if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); } List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList()); if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); } return UpdateControl.noUpdate(); } private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; } private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; } }", "Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get();", "if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); }", "int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize();", "if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); }", "List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());", "if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); }", "private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; }", "private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() 
.withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; }", "mvn clean install", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 11.193 s [INFO] Finished at: 2021-05-26T12:16:54-04:00 [INFO] ------------------------------------------------------------------------", "oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml", "customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"", "oc apply -f rbac.yaml", "java -jar target/quarkus-app/quarkus-run.jar", "kubectl apply -f memcached-sample.yaml", "memcached.cache.example.com/memcached-sample created", "oc get all", "NAME READY STATUS RESTARTS AGE pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml", "customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc apply -f rbac.yaml", "oc get all -n default", "NAME READY UP-TO-DATE AVAILABLE AGE pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm 0/1 Running 0 18s", "oc apply -f memcached-sample.yaml", "memcached.cache.example.com/memcached-sample created", "oc get all", "NAME READY STATUS RESTARTS AGE pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt 1/1 Running 1 79s pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "... 
containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13 1 ...", "operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'", "operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'", "operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'", "operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]' operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'", "spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2", "// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{", "spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211", "- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2", "relatedImage: \"\"", "containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3", "BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2", "make bundle USE_IMAGE_DIGESTS=true", "metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'", "labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2", "labels: operatorframework.io/os.linux: supported", "labels: operatorframework.io/arch.amd64: supported", "labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2", "metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1", "metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }", "module github.com/example-inc/memcached-operator go 1.15 require ( k8s.io/apimachinery v0.19.2 k8s.io/client-go v0.19.2 sigs.k8s.io/controller-runtime v0.7.0 operator-framework/operator-lib v0.3.0 )", "import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, 
\"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5", "- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. 
No replication of data.", "required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.", "versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true", "customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster", "versions: - name: v1alpha1 served: false 1 storage: true", "versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2", "versions: - name: v1beta1 served: true storage: true", "metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>", "make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>", "make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>", "IMAGE_TAG_BASE=quay.io/example/my-operator", "make bundle-build bundle-push catalog-build catalog-push", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: 
securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m", "oc get catalogsource", "NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>", "\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1", "oc get og", "NAME AGE my-test 4h40m", "oc get csv", "NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded", "oc get pods", "NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m", "operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1", "INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"", "operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2", "INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"", "operator-sdk cleanup memcached-operator", "apiVersion: 
operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1", "com.redhat.openshift.versions: \"v4.7-v4.9\" 1", "LABEL com.redhat.openshift.versions=\"<versions>\" 1", "spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"", "install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default", "spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.", "operator-sdk scorecard <bundle_dir_or_image> [flags]", "operator-sdk scorecard -h", "./bundle └── tests └── scorecard └── config.yaml", "kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.28.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.28.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test", "make bundle", "operator-sdk scorecard <bundle_dir_or_image>", "{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.28.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }", "-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.28.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" 
name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'", "apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.28.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.28.0 labels: suite: olm test: olm-bundle-validation-test", "// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. 
func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }", "operator-sdk bundle validate <bundle_dir_or_image> <flags>", "./bundle ├── manifests │ ├── cache.my.domain_memcacheds.yaml │ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml", "INFO[0000] All validation tests have completed successfully", "ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV", "WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully", "operator-sdk bundle validate -h", "operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>", "operator-sdk bundle validate ./bundle", "operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>", "operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>", "ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description", "// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)", "operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", 
infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)", "../prometheus", "package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }", "func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: memcached-operator-system rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring", "oc apply -f config/prometheus/role.yaml", "oc apply -f config/prometheus/rolebinding.yaml", "oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"", "operator-sdk init --plugins=ansible --domain=testmetrics.com", "operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role", "--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. 
gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2", "make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make install", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1", "oc create -f config/samples/metrics_v1_testmetrics.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m", "oc get ep", "NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m", "token=`oc create token prometheus-k8s -n openshift-monitoring`", "oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter", "HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2", "oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge", "HELP my_gauge_metric Create my gauge and set it to 2.", "oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe", "HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary", "import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }", "import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }", "cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }", "err := cfg.Execute(ctx)", "packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml", "bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml", "operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3", "operator-sdk run bundle <bundle_image_name>:<tag>", "INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: 
etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"", "operator-sdk <command> [<subcommand>] [<argument>] [<flags>]", "operator-sdk completion bash", "bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh", "oc -n [namespace] edit cm hw-event-proxy-operator-manager-config", "apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org", "oc get clusteroperator authentication -o yaml", "oc -n openshift-monitoring edit cm cluster-monitoring-config", "oc edit etcd cluster", "oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml", "oc get deployment -n openshift-ingress", "oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'", "map[cidr:10.128.0.0/14 hostPrefix:23]", "oc edit kubeapiserver", "oc get clusteroperator openshift-controller-manager -o yaml", "oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/operators/index
Chapter 1. Using and configuring firewalld
Chapter 1. Using and configuring firewalld A firewall is a way to protect machines from any unwanted traffic from outside. It enables users to control incoming network traffic on host machines by defining a set of firewall rules. These rules are used to sort the incoming traffic and either block it or allow it through. firewalld is a firewall service daemon that provides a dynamic, customizable, host-based firewall with a D-Bus interface. Because it is dynamic, it enables creating, changing, and deleting rules without the need to restart the firewall daemon each time the rules are changed. firewalld uses the concepts of zones and services, which simplify traffic management. Zones are predefined sets of rules. Network interfaces and sources can be assigned to a zone. The traffic allowed depends on the network your computer is connected to and the security level assigned to this network. Firewall services are predefined rules that cover all necessary settings to allow incoming traffic for a specific service, and they apply within a zone. Services use one or more ports or addresses for network communication. Firewalls filter communication based on ports. To allow network traffic for a service, its ports must be open. firewalld blocks all traffic on ports that are not explicitly set as open. Some zones, such as trusted, allow all traffic by default. Note that firewalld with the nftables backend does not support passing custom nftables rules to firewalld by using the --direct option. 1.1. When to use firewalld, nftables, or iptables The following is a brief overview of the scenarios in which you should use one of the following utilities: firewalld : Use the firewalld utility for simple firewall use cases. The utility is easy to use and covers the typical use cases for these scenarios. nftables : Use the nftables utility to set up complex and performance-critical firewalls, such as for a whole network. iptables : The iptables utility on Red Hat Enterprise Linux uses the nf_tables kernel API instead of the legacy back end. The nf_tables API provides backward compatibility so that scripts that use iptables commands still work on Red Hat Enterprise Linux. For new firewall scripts, Red Hat recommends using nftables . Important To prevent the different firewall-related services ( firewalld , nftables , or iptables ) from influencing each other, run only one of them on a RHEL host, and disable the other services. 1.2. Firewall zones You can use the firewalld utility to separate networks into different zones according to the level of trust that you have in the interfaces and traffic within that network. A connection can only be part of one zone, but you can use that zone for many network connections. firewalld follows strict principles in regard to zones: Traffic ingresses only one zone. Traffic egresses only one zone. A zone defines a level of trust. Intrazone traffic (within the same zone) is allowed by default. Interzone traffic (from zone to zone) is denied by default. Principles 4 and 5 are a consequence of principle 3. Principle 4 is configurable through the zone option --remove-forward . Principle 5 is configurable by adding new policies. NetworkManager notifies firewalld of the zone of an interface. You can assign zones to interfaces with the following utilities: NetworkManager firewall-config utility firewall-cmd utility The RHEL web console The RHEL web console, firewall-config , and firewall-cmd can only edit the appropriate NetworkManager configuration files.
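For example, assuming the firewalld service is running, you can inspect the zone layout from the command line (a sketch; enp1s0 is a placeholder interface name):
# firewall-cmd --get-zones
# firewall-cmd --get-default-zone
# firewall-cmd --get-zone-of-interface=enp1s0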
If you change the zone of the interface using the web console, firewall-cmd , or firewall-config , the request is forwarded to NetworkManager and is not handled by firewalld . The /usr/lib/firewalld/zones/ directory stores the predefined zones, and you can instantly apply them to any available network interface. These files are copied to the /etc/firewalld/zones/ directory only after they are modified. The default settings of the predefined zones are as follows: block Suitable for: Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6 . Accepts: Only network connections initiated from within the system. dmz Suitable for: Computers in your DMZ that are publicly-accessible with limited access to your internal network. Accepts: Only selected incoming connections. drop Suitable for: Any incoming network packets are dropped without any notification. Accepts: Only outgoing network connections. external Suitable for: External networks with masquerading enabled, especially for routers. Situations when you do not trust the other computers on the network. Accepts: Only selected incoming connections. home Suitable for: Home environment where you mostly trust the other computers on the network. Accepts: Only selected incoming connections. internal Suitable for: Internal networks where you mostly trust the other computers on the network. Accepts: Only selected incoming connections. public Suitable for: Public areas where you do not trust other computers on the network. Accepts: Only selected incoming connections. trusted Accepts: All network connections. work Suitable for: Work environment where you mostly trust the other computers on the network. Accepts: Only selected incoming connections. One of these zones is set as the default zone. When interface connections are added to NetworkManager , they are assigned to the default zone. On installation, the default zone in firewalld is the public zone. You can change the default zone. Note Make network zone names self-explanatory to help users understand them quickly. To avoid any security problems, review the default zone configuration and disable any unnecessary services according to your needs and risk assessments. Additional resources firewalld.zone(5) man page on your system 1.3. Firewall policies The firewall policies specify the desired security state of your network. They outline rules and actions to take for different types of traffic. Typically, the policies contain rules for the following types of traffic: Incoming traffic Outgoing traffic Forward traffic Specific services and applications Network address translations (NAT) Firewall policies use the concept of firewall zones. Each zone is associated with a specific set of firewall rules that determine the traffic allowed. Policies apply firewall rules in a stateful, unidirectional manner. This means you only consider one direction of the traffic. The traffic return path is implicitly allowed due to stateful filtering of firewalld . Policies are associated with an ingress zone and an egress zone. The ingress zone is where the traffic originated (received). The egress zone is where the traffic leaves (sent). The firewall rules defined in a policy can reference the firewall zones to apply consistent configurations across multiple network interfaces. 1.4. Firewall rules You can use the firewall rules to implement specific configurations for allowing or blocking network traffic. 
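For instance, a minimal sketch of allowing one service and closing a previously opened port in the public zone could look like this (the port number is only an illustration):
# firewall-cmd --zone=public --add-service=ssh
# firewall-cmd --zone=public --remove-port=8080/tcp
# firewall-cmd --runtime-to-permanent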
As a result, you can control the flow of network traffic to protect your system from security threats. Firewall rules typically define certain criteria based on various attributes. These attributes include: Source IP addresses Destination IP addresses Transport protocols (TCP, UDP, ... ) Ports Network interfaces The firewalld utility organizes the firewall rules into zones (such as public , internal , and others) and policies. Each zone has its own set of rules that determine the level of traffic freedom for network interfaces associated with a particular zone. 1.5. Zone configuration files A firewalld zone configuration file contains the information for a zone. These are the zone description, services, ports, protocols, icmp-blocks, masquerade, forward-ports, and rich language rules in an XML file format. The file name has to be zone-name.xml, where the length of zone-name is currently limited to 17 characters. The zone configuration files are located in the /usr/lib/firewalld/zones/ and /etc/firewalld/zones/ directories. The following example shows a configuration that allows one service ( SSH ) and one port range, for both the TCP and UDP protocols: <?xml version="1.0" encoding="utf-8"?> <zone> <short>My Zone</short> <description>Here you can describe the characteristic features of the zone.</description> <service name="ssh"/> <port protocol="udp" port="1025-65535"/> <port protocol="tcp" port="1025-65535"/> </zone> Additional resources firewalld.zone manual page 1.6. Predefined firewalld services A firewalld service is a predefined set of firewall rules that define access to a specific application or network service. Each service represents a combination of the following elements: Local port Network protocol Associated firewall rules Source ports and destinations Firewall helper modules that load automatically if a service is enabled A service simplifies packet filtering and saves you time because it achieves several tasks at once. For example, firewalld can perform the following tasks at once: Open a port Define network protocol Enable packet forwarding Service configuration options and generic file information are described in the firewalld.service(5) man page on your system. The services are specified by means of individual XML configuration files, which are named in the following format: service-name.xml . Protocol names are preferred over service or application names in firewalld . You can configure firewalld in the following ways: Use utilities: firewall-config - graphical utility firewall-cmd - command-line utility firewall-offline-cmd - command-line utility Edit the XML files in the /etc/firewalld/services/ directory. If you do not add or change the service, no corresponding XML file exists in /etc/firewalld/services/ . You can use the files in /usr/lib/firewalld/services/ as templates. Additional resources firewalld.service(5) man page on your system 1.7. Working with firewalld zones Zones represent a concept to manage incoming traffic more transparently. The zones are connected to networking interfaces or assigned a range of source addresses. You manage firewall rules for each zone independently, which enables you to define complex firewall settings and apply them to the traffic. 1.7.1. Customizing firewall settings for a specific zone to enhance security You can strengthen your network security by modifying the firewall settings and associating a specific network interface or connection with a particular firewall zone.
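A minimal sketch of such an association from the command line, assuming a placeholder interface name enp7s0 and the work zone, might be:
# firewall-cmd --permanent --zone=work --change-interface=enp7s0
# firewall-cmd --reload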
By defining granular rules and restrictions for a zone, you can control inbound and outbound traffic based on your intended security levels. For example, you can achieve the following benefits: Protection of sensitive data Prevention of unauthorized access Mitigation of potential network threats Prerequisites The firewalld service is running. Procedure List the available firewall zones: The firewall-cmd --get-zones command displays all zones that are available on the system, but it does not show any details for particular zones. To see more detailed information for all zones, use the firewall-cmd --list-all-zones command. Choose the zone you want to use for this configuration. Modify firewall settings for the chosen zone. For example, to allow the SSH service and remove the ftp service: Assign a network interface to the firewall zone: List the available network interfaces: Whether a zone is active is determined by the presence of network interfaces or source address ranges that match its configuration. The default zone is active for unclassified traffic, but it is not always active if no traffic matches its rules. Assign a network interface to the chosen zone: Assigning a network interface to a zone is more suitable for applying consistent firewall settings to all traffic on a particular interface (physical or virtual). The firewall-cmd command, when used with the --permanent option, often involves updating NetworkManager connection profiles to make changes to the firewall configuration permanent. This integration between firewalld and NetworkManager ensures consistent network and firewall settings. Verification Display the updated settings for your chosen zone: The command output displays all zone settings including the assigned services, network interface, and network connections (sources). 1.7.2. Changing the default zone System administrators assign a zone to a networking interface in its configuration files. If an interface is not assigned to a specific zone, it is assigned to the default zone. After each restart of the firewalld service, firewalld loads the settings for the default zone and makes it active. Note that settings for all other zones are preserved and ready to be used. Typically, zones are assigned to interfaces by NetworkManager according to the connection.zone setting in NetworkManager connection profiles. NetworkManager also reapplies these zone assignments when it activates connections after a reboot. Prerequisites The firewalld service is running. Procedure To set up the default zone: Display the current default zone: Set the new default zone: Note Following this procedure, the setting is a permanent setting, even without the --permanent option. 1.7.3. Assigning a network interface to a zone It is possible to define different sets of rules for different zones and then change the settings quickly by changing the zone for the interface that is being used. With multiple interfaces, a specific zone can be set for each of them to distinguish the traffic that comes through them. Procedure To assign the zone to a specific interface: List the active zones and the interfaces assigned to them: Assign the interface to a different zone: 1.7.4. Assigning a zone to a connection using nmcli You can add a firewalld zone to a NetworkManager connection using the nmcli utility. Procedure Assign the zone to the NetworkManager connection profile: Activate the connection:
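A minimal sketch of these two steps, assuming a placeholder profile name my-connection and the home zone:
# nmcli connection modify my-connection connection.zone home
# nmcli connection up my-connection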
1.7.5. Manually assigning a zone to a network connection in a connection profile file If you cannot use the nmcli utility to modify a connection profile, you can manually edit the corresponding file of the profile to assign a firewalld zone. Note Modifying the connection profile with the nmcli utility to assign a firewalld zone is more efficient. For details, see Assigning a network interface to a zone . Procedure Determine the path to the connection profile and its format: NetworkManager uses separate directories and file names for the different connection profile formats: Profiles in /etc/NetworkManager/system-connections/ <connection_name> .nmconnection files use the keyfile format. Profiles in /etc/sysconfig/network-scripts/ifcfg- <interface_name> files use the ifcfg format. Depending on the format, update the corresponding file: If the file uses the keyfile format, append zone= <name> to the [connection] section of the /etc/NetworkManager/system-connections/ <connection_name> .nmconnection file: If the file uses the ifcfg format, append ZONE= <name> to the /etc/sysconfig/network-scripts/ifcfg- <interface_name> file: Reload the connection profiles: Reactivate the connection profiles: Verification Display the zone of the interface, for example: 1.7.6. Creating a new zone To use custom zones, create a new zone and use it just like a predefined zone. New zones require the --permanent option, otherwise the command does not work. Prerequisites The firewalld service is running. Procedure Create a new zone: Make the new zone usable: The command applies recent changes to the firewall configuration without interrupting network services that are already running. Verification Check if the new zone is added to your permanent settings: 1.7.7. Enabling zones by using the web console You can apply predefined and existing firewall zones on a particular interface or a range of IP addresses through the RHEL web console. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Networking . Click on the Edit rules and zones button. If you do not see the Edit rules and zones button, log in to the web console with administrator privileges. In the Firewall section, click Add new zone . In the Add zone dialog box, select a zone from the Trust level options. The web console displays all zones predefined in the firewalld service. In the Interfaces part, select an interface or interfaces on which the selected zone is applied. In the Allowed Addresses part, you can select whether the zone is applied on: the whole subnet or a range of IP addresses in the following format: 192.168.1.0 192.168.1.0/24 192.168.1.0/24, 192.168.1.0 Click on the Add zone button. Verification Check the configuration in the Firewall section: 1.7.8. Disabling zones by using the web console You can disable a firewall zone in your firewall configuration by using the web console. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Networking . Click on the Edit rules and zones button. If you do not see the Edit rules and zones button, log in to the web console with administrator privileges. Click on the Options icon at the zone you want to remove. Click Delete .
The zone is now disabled and the interface does not include opened services and ports which were configured in the zone. 1.7.9. Using zone targets to set default behavior for incoming traffic For every zone, you can set a default behavior that handles incoming traffic that is not further specified. Such behavior is defined by setting the target of the zone. There are four options: ACCEPT : Accepts all incoming packets except those disallowed by specific rules. REJECT : Rejects all incoming packets except those allowed by specific rules. When firewalld rejects packets, the source machine is informed about the rejection. DROP : Drops all incoming packets except those allowed by specific rules. When firewalld drops packets, the source machine is not informed about the packet drop. default : Similar behavior as for REJECT , but with special meanings in certain scenarios. Prerequisites The firewalld service is running. Procedure To set a target for a zone: List the information for the specific zone to see the default target: Set a new target in the zone: Additional resources firewall-cmd(1) man page on your system 1.8. Controlling network traffic using firewalld The firewalld package installs a large number of predefined service files and you can add more or customize them. You can then use these service definitions to open or close ports for services without knowing the protocol and port numbers they use. 1.8.1. Controlling traffic with predefined services using the CLI The most straightforward method to control traffic is to add a predefined service to firewalld . This opens all necessary ports and modifies other settings according to the service definition file . Prerequisites The firewalld service is running. Procedure Check that the service in firewalld is not already allowed: The command lists the services that are enabled in the default zone. List all predefined services in firewalld : The command displays a list of available services for the default zone. Add the service to the list of services that firewalld allows: The command adds the specified service to the default zone. Make the new settings persistent: The command applies these runtime changes to the permanent configuration of the firewall. By default, it applies these changes to the configuration of the default zone. Verification List all permanent firewall rules: The command displays complete configuration with the permanent firewall rules of the default firewall zone ( public ). Check the validity of the permanent configuration of the firewalld service. If the permanent configuration is invalid, the command returns an error with further details: You can also manually inspect the permanent configuration files to verify the settings. The main configuration file is /etc/firewalld/firewalld.conf . The zone-specific configuration files are in the /etc/firewalld/zones/ directory and the policies are in the /etc/firewalld/policies/ directory. 1.8.2. Controlling traffic with predefined services using the GUI You can control the network traffic with predefined services using a graphical user interface. The Firewall Configuration application provides an accessible and user-friendly alternative to the command-line utilities. Prerequisites You installed the firewall-config package. The firewalld service is running. Procedure To enable or disable a predefined or custom service: Start the firewall-config utility and select the network zone whose services are to be configured. Select the Zones tab and then the Services tab below. 
Select the checkbox for each type of service you want to trust or clear the checkbox to block a service in the selected zone. To edit a service: Start the firewall-config utility. Select Permanent from the menu labeled Configuration . Additional icons and menu buttons appear at the bottom of the Services window. Select the service you want to configure. The Ports , Protocols , and Source Port tabs enable adding, changing, and removing ports, protocols, and source ports for the selected service. The Modules tab is for configuring Netfilter helper modules. The Destination tab enables limiting traffic to a particular destination address and Internet Protocol ( IPv4 or IPv6 ). Note It is not possible to alter service settings in the Runtime mode. Verification Press the Super key to enter the Activities overview. Select the Firewall Configuration utility. You can also start the graphical firewall configuration utility using the command line by entering the firewall-config command. View the list of configurations of your firewall: The Firewall Configuration window opens. Note that this command can be run as a normal user, but you are prompted for an administrator password occasionally. 1.8.3. Enabling services on the firewall by using the web console By default, services are added to the default firewall zone. If you use more firewall zones on more network interfaces, you must select a zone first and then add the service with its port. The RHEL 9 web console displays predefined firewalld services and you can add them to active firewall zones. Important The RHEL 9 web console configures the firewalld service. The web console does not allow generic firewalld rules which are not listed in the web console. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Networking . Click on the Edit rules and zones button. If you do not see the Edit rules and zones button, log in to the web console with administrator privileges. In the Firewall section, select a zone for which you want to add the service and click Add Services . In the Add Services dialog box, find the service you want to enable on the firewall. Enable services according to your scenario: Click Add Services . At this point, the RHEL 9 web console displays the service in the zone's list of Services . 1.8.4. Configuring custom ports by using the web console You can configure custom ports for services through the RHEL web console. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . The firewalld service is running. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Networking . Click on the Edit rules and zones button. If you do not see the Edit rules and zones button, log in to the web console with administrative privileges. In the Firewall section, select a zone for which you want to configure a custom port and click Add Services . In the Add services dialog box, click on the Custom Ports radio button. In the TCP and UDP fields, add ports according to examples. You can add ports in the following formats: Port numbers such as 22 Range of port numbers such as 5900-5910 Aliases such as nfs, rsync Note You can add multiple values into each field. Values must be separated with a comma and without a space, for example: 8080,8081,http After adding the port number in the TCP field, the UDP field, or both, verify the service name in the Name field. The Name field displays the name of the service for which this port is reserved. You can rewrite the name if you are sure that this port is free to use and no server needs to communicate on this port. In the Name field, add a name for the service including the defined ports. Click on the Add Ports button. To verify the settings, go to the Firewall page and find the service in the list of zone's Services .
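The command-line equivalent of opening such custom ports is a short sketch like the following (the port values are only examples):
# firewall-cmd --add-port=8080/tcp
# firewall-cmd --add-port=5900-5910/tcp
# firewall-cmd --runtime-to-permanent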
Values must be separated with the comma and without the space, for example: 8080,8081,http After adding the port number in the TCP filed, the UDP filed, or both, verify the service name in the Name field. The Name field displays the name of the service for which is this port reserved. You can rewrite the name if you are sure that this port is free to use and no server needs to communicate on this port. In the Name field, add a name for the service including defined ports. Click on the Add Ports button. To verify the settings, go to the Firewall page and find the service in the list of zone's Services . 1.8.5. Configuring firewalld to allow hosting a secure web server Ports are logical services that enable an operating system to receive and distinguish network traffic and forward it to system services. The system services are represented by a daemon that listens on the port and waits for any traffic coming to this port. Normally, system services listen on standard ports that are reserved for them. The httpd daemon, for example, listens on port 80. However, system administrators can directly specify the port number instead of the service name. You can use the firewalld service to configure access to a secure web server for hosting your data. Prerequisites The firewalld service is running. Procedure Check the currently active firewall zone: Add the HTTPS service to the appropriate zone: Reload the firewall configuration: Verification Check if the port is open in firewalld : If you opened the port by specifying the port number, enter: If you opened the port by specifying a service definition, enter: 1.8.6. Closing unused or unnecessary ports to enhance network security When an open port is no longer needed, you can use the firewalld utility to close it. Important Close all unnecessary ports to reduce the potential attack surface and minimize the risk of unauthorized access or exploitation of vulnerabilities. Procedure List all allowed ports: By default, this command lists the ports that are enabled in the default zone. Note This command will only give you a list of ports that are opened as ports. You will not be able to see any open ports that are opened as a service. For that case, consider using the --list-all option instead of --list-ports . Remove the port from the list of allowed ports to close it for the incoming traffic: This command removes a port from a zone. If you do not specify a zone, it will remove the port from the default zone. Make the new settings persistent: Without specifying a zone, this command applies runtime changes to the permanent configuration of the default zone. Verification List the active zones and choose the zone you want to inspect: List the currently open ports in the selected zone to check if the unused or unnecessary ports are closed: 1.8.7. Controlling traffic through the CLI You can use the firewall-cmd command to: disable networking traffic enable networking traffic As a result, you can for example enhance your system defenses, ensure data privacy or optimize network resources. Important Enabling panic mode stops all networking traffic. For this reason, it should be used only when you have the physical access to the machine or if you are logged in using a serial console. Procedure To immediately disable networking traffic, switch panic mode on: Switching off panic mode reverts the firewall to its permanent settings. To switch panic mode off, enter: Verification To see whether panic mode is switched on or off, use: 1.8.8. 
Controlling traffic with protocols using GUI To permit traffic through the firewall using a certain protocol, you can use the GUI. Prerequisites You installed the firewall-config package Procedure Start the firewall-config tool and select the network zone whose settings you want to change. Select the Protocols tab and click the Add button on the right-hand side. The Protocol window opens. Either select a protocol from the list or select the Other Protocol check box and enter the protocol in the field. 1.9. Using zones to manage incoming traffic depending on a source You can use zones to manage incoming traffic based on its source. Incoming traffic in this context is any data that is destined for your system, or passes through the host running firewalld . The source typically refers to the IP address or network range from which the traffic originates. As a result, you can sort incoming traffic and assign it to different zones to allow or disallow services that can be reached by that traffic. Matching by source address takes precedence over matching by interface name. When you add a source to a zone, the firewall will prioritize the source-based rules for incoming traffic over interface-based rules. This means that if incoming traffic matches a source address specified for a particular zone, the zone associated with that source address will determine how the traffic is handled, regardless of the interface through which it arrives. On the other hand, interface-based rules are generally a fallback for traffic that does not match specific source-based rules. These rules apply to traffic, for which the source is not explicitly associated with a zone. This allows you to define a default behavior for traffic that does not have a specific source-defined zone. 1.9.1. Adding a source To route incoming traffic into a specific zone, add the source to that zone. The source can be an IP address or an IP mask in the classless inter-domain routing (CIDR) notation. Note In case you add multiple zones with an overlapping network range, they are ordered alphanumerically by zone name and only the first one is considered. To set the source in the current zone: To set the source IP address for a specific zone: The following procedure allows all incoming traffic from 192.168.2.15 in the trusted zone: Procedure List all available zones: Add the source IP to the trusted zone in the permanent mode: Make the new settings persistent: 1.9.2. Removing a source When you remove a source from a zone, the traffic which originates from the source is no longer directed through the rules specified for that source. Instead, the traffic falls back to the rules and settings of the zone associated with the interface from which it originates, or goes to the default zone. Procedure List allowed sources for the required zone: Remove the source from the zone permanently: Make the new settings persistent: 1.9.3. Removing a source port By removing a source port you disable sorting the traffic based on a port of origin. Procedure To remove a source port: 1.9.4. Using zones and sources to allow a service for only a specific domain To allow traffic from a specific network to use a service on a machine, use zones and source. The following procedure allows only HTTP traffic from the 192.0.2.0/24 network while any other traffic is blocked. Warning When you configure this scenario, use a zone that has the default target. 
Using a zone that has the target set to ACCEPT is a security risk, because for traffic from 192.0.2.0/24 , all network connections would be accepted. Procedure List all available zones: Add the IP range to the internal zone to route the traffic originating from the source through the zone: Add the http service to the internal zone: Make the new settings persistent: Verification Check that the internal zone is active and that the service is allowed in it: Additional resources firewalld.zones(5) man page on your system 1.10. Filtering forwarded traffic between zones firewalld enables you to control the flow of network data between different firewalld zones. By defining rules and policies, you can manage how traffic is allowed or blocked when it moves between these zones. The policy objects feature provides forward and output filtering in firewalld . You can use firewalld to filter traffic between different zones to allow access to locally hosted VMs to connect the host. 1.10.1. The relationship between policy objects and zones Policy objects allow the user to attach firewalld's primitives such as services, ports, and rich rules to the policy. You can apply the policy objects to traffic that passes between zones in a stateful and unidirectional manner. HOST and ANY are the symbolic zones used in the ingress and egress zone lists. The HOST symbolic zone allows policies for the traffic originating from or has a destination to the host running firewalld. The ANY symbolic zone applies policy to all the current and future zones. ANY symbolic zone acts as a wildcard for all zones. 1.10.2. Using priorities to sort policies Multiple policies can apply to the same set of traffic, therefore, priorities should be used to create an order of precedence for the policies that may be applied. To set a priority to sort the policies: In the above example -500 is a lower priority value but has higher precedence. Thus, -500 will execute before -100. Lower numerical priority values have higher precedence and are applied first. 1.10.3. Using policy objects to filter traffic between locally hosted containers and a network physically connected to the host The policy objects feature allows users to filter traffic between Podman and firewalld zones. Note Red Hat recommends blocking all traffic by default and opening the selective services needed for the Podman utility. Procedure Create a new firewall policy: Block all traffic from Podman to other zones and allow only necessary services on Podman: Create a new Podman zone: Define the ingress zone for the policy: Define the egress zone for all other zones: Setting the egress zone to ANY means that you filter from Podman to other zones. If you want to filter to the host, then set the egress zone to HOST. Restart the firewalld service: Verification Verify the Podman firewall policy to other zones: 1.10.4. Setting the default target of policy objects You can specify --set-target options for policies. The following targets are available: ACCEPT - accepts the packet DROP - drops the unwanted packets REJECT - rejects unwanted packets with an ICMP reply CONTINUE (default) - packets will be subject to rules in following policies and zones. Verification Verify information about the policy 1.10.5. Using DNAT to forward HTTPS traffic to a different host If your web server runs in a DMZ with private IP addresses, you can configure destination network address translation (DNAT) to enable clients on the internet to connect to this web server. 
In this case, the host name of the web server resolves to the public IP address of the router. When a client establishes a connection to a defined port on the router, the router forwards the packets to the internal web server. Prerequisites The DNS server resolves the host name of the web server to the router's IP address. You know the following settings: The private IP address and port number that you want to forward The IP protocol to be used The destination IP address and port of the web server where you want to redirect the packets Procedure Create a firewall policy: The policies, as opposed to zones, allow packet filtering for input, output, and forwarded traffic. This is important, because forwarding traffic to endpoints on locally run web servers, containers, or virtual machines requires such capability. Configure symbolic zones for the ingress and egress traffic to also enable the router itself to connect to its local IP address and forward this traffic: The --add-ingress-zone=HOST option refers to packets generated locally and transmitted out of the local host. The --add-egress-zone=ANY option refers to traffic moving to any zone. Add a rich rule that forwards traffic to the web server: The rich rule forwards TCP traffic from port 443 on the IP address of the router (192.0.2.1) to port 443 of the IP address of the web server (192.51.100.20). Reload the firewall configuration files: Activate routing of 127.0.0.0/8 in the kernel: For persistent changes, run: The command persistently configures the route_localnet kernel parameter and ensures that the setting is preserved after the system reboots. For applying the settings immediately without a system reboot, run: The sysctl command is useful for applying on-the-fly changes, however the configuration will not persist across system reboots. Verification Connect to the IP address of the router and to the port that you have forwarded to the web server: Optional: Verify that the net.ipv4.conf.all.route_localnet kernel parameter is active: Verify that <example_policy> is active and contains the settings you need, especially the source IP address and port, protocol to be used, and the destination IP address and port: Additional resources firewall-cmd(1) , firewalld.policies(5) , firewalld.richlanguage(5) , sysctl(8) , and sysctl.conf(5) man pages on your system Using configuration files in /etc/sysctl.d/ to adjust kernel parameters 1.11. Configuring NAT using firewalld With firewalld , you can configure the following network address translation (NAT) types: Masquerading Destination NAT (DNAT) Redirect 1.11.1. Network address translation types These are the different network address translation (NAT) types: Masquerading Use one of these NAT types to change the source IP address of packets. For example, Internet Service Providers (ISPs) do not route private IP ranges, such as 10.0.0.0/8 . If you use private IP ranges in your network and users should be able to reach servers on the internet, map the source IP address of packets from these ranges to a public IP address. Masquerading automatically uses the IP address of the outgoing interface. Therefore, use masquerading if the outgoing interface uses a dynamic IP address. Destination NAT (DNAT) Use this NAT type to rewrite the destination address and port of incoming packets. For example, if your web server uses an IP address from a private IP range and is, therefore, not directly accessible from the internet, you can set a DNAT rule on the router to redirect incoming traffic to this server. 
Redirect This type is a special case of DNAT that redirects packets to a different port on the local machine. For example, if a service runs on a different port than its standard port, you can redirect incoming traffic from the standard port to this specific port. 1.11.2. Configuring IP address masquerading You can enable IP masquerading on your system. IP masquerading hides individual machines behind a gateway when accessing the internet. Procedure To check if IP masquerading is enabled (for example, for the external zone), enter the following command as root : The command prints yes with exit status 0 if enabled. It prints no with exit status 1 otherwise. If zone is omitted, the default zone will be used. To enable IP masquerading, enter the following command as root : To make this setting persistent, pass the --permanent option to the command. To disable IP masquerading, enter the following command as root : To make this setting permanent, pass the --permanent option to the command. 1.11.3. Using DNAT to forward incoming HTTP traffic You can use destination network address translation (DNAT) to direct incoming traffic from one destination address and port to another. Typically, this is useful for redirecting incoming requests from an external network interface to specific internal servers or services. Prerequisites The firewalld service is running. Procedure Create the /etc/sysctl.d/90-enable-IP-forwarding.conf file with the following content: This setting enables IP forwarding in the kernel. It makes the internal RHEL server act as a router and forward packets from network to network. Load the setting from the /etc/sysctl.d/90-enable-IP-forwarding.conf file: Forward incoming HTTP traffic: The command defines a DNAT rule with the following settings: --zone=public - The firewall zone for which you configure the DNAT rule. You can adjust this to whatever zone you need. --add-forward-port - The option that indicates you are adding a port-forwarding rule. port=80 - The external destination port. proto=tcp - The protocol indicating that you forward TCP traffic. toaddr=198.51.100.10 - The destination IP address. toport=8080 - The destination port of the internal server. --permanent - The option that makes the DNAT rule persistent across reboots. Reload the firewall configuration to apply the changes: Verification Verify the DNAT rule for the firewall zone that you used: Alternatively, view the corresponding XML configuration file: Additional resources Configuring kernel parameters at runtime firewall-cmd(1) manual page 1.11.4. Redirecting traffic from a non-standard port to make the web service accessible on a standard port You can use the redirect mechanism to make the web service that internally runs on a non-standard port accessible without requiring users to specify the port in the URL. As a result, the URLs are simpler and provide better browsing experience, while a non-standard port is still used internally or for specific requirements. Prerequisites The firewalld service is running. Procedure Create the /etc/sysctl.d/90-enable-IP-forwarding.conf file with the following content: This setting enables IP forwarding in the kernel. Load the setting from the /etc/sysctl.d/90-enable-IP-forwarding.conf file: Create the NAT redirect rule: The command defines the NAT redirect rule with the following settings: --zone=public - The firewall zone, for which you configure the rule. You can adjust this to whatever zone you need. 
--add-forward-port=port= <non_standard_port> - The option that indicates you are adding a port-forwarding (redirecting) rule with source port on which you initially receive the incoming traffic. proto=tcp - The protocol indicating that you redirect TCP traffic. toport= <standard_port> - The destination port, to which the incoming traffic should be redirected after being received on the source port. --permanent - The option that makes the rule persist across reboots. Reload the firewall configuration to apply the changes: Verification Verify the redirect rule for the firewall zone that you used: Alternatively, view the corresponding XML configuration file: Additional resources Configuring kernel parameters at runtime firewall-cmd(1) manual page 1.12. Managing ICMP requests The Internet Control Message Protocol ( ICMP ) is a supporting protocol that is used by various network devices for testing, troubleshooting, and diagnostics. ICMP differs from transport protocols such as TCP and UDP because it is not used to exchange data between systems. You can use the ICMP messages, especially echo-request and echo-reply , to reveal information about a network and misuse such information for various kinds of fraudulent activities. Therefore, firewalld enables controlling the ICMP requests to protect your network information. 1.12.1. Configuring ICMP filtering You can use ICMP filtering to define which ICMP types and codes you want the firewall to permit or deny from reaching your system. ICMP types and codes are specific categories and subcategories of ICMP messages. ICMP filtering helps, for example, in the following areas: Security enhancement - Block potentially harmful ICMP types and codes to reduce your attack surface. Network performance - Permit only necessary ICMP types to optimize network performance and prevent potential network congestion caused by excessive ICMP traffic. Troubleshooting control - Maintain essential ICMP functionality for network troubleshooting and block ICMP types that represent potential security risk. Prerequisites The firewalld service is running. Procedure List available ICMP types and codes: From this predefined list, select which ICMP types and codes to allow or block. Filter specific ICMP types by: Allowing ICMP types: The command removes any existing blocking rules for the echo requests ICMP type. Blocking ICMP types: The command ensures that the redirect messages ICMP type is blocked by the firewall. Reload the firewall configuration to apply the changes: Verification Verify your filtering rules are in effect: The command output displays the ICMP types and codes that you allowed or blocked. Additional resources firewall-cmd(1) manual page 1.13. Setting and controlling IP sets using firewalld IP sets are a RHEL feature for grouping of IP addresses and networks into sets to achieve more flexible and efficient firewall rule management. The IP sets are valuable in scenarios when you need to for example: Handle large lists of IP addresses Implement dynamic updates to those large lists of IP addresses Create custom IP-based policies to enhance network security and control Warning Red Hat recommends using the firewall-cmd command to create and manage IP sets. 1.13.1. Configuring dynamic updates for allowlisting with IP sets You can make near real-time updates to flexibly allow specific IP addresses or ranges in the IP sets even in unpredictable conditions. 
These updates can be triggered by various events, such as detection of security threats or changes in the network behavior. Typically, such a solution leverages automation to reduce manual effort and improve security by responding quickly to the situation. Prerequisites The firewalld service is running. Procedure Create an IP set with a meaningful name: The new IP set called allowlist contains IP addresses that you want your firewall to allow. Add a dynamic update to the IP set: This configuration updates the allowlist IP set with a newly added IP address that is allowed to pass network traffic by your firewall. Create a firewall rule that references the previously created IP set: Without this rule, the IP set would not have any impact on network traffic. The default firewall policy would prevail. Reload the firewall configuration to apply the changes: Verification List all IP sets: List the active rules: The sources section of the command-line output provides insights to what origins of traffic (hostnames, interfaces, IP sets, subnets, and others) are permitted or denied access to a particular firewall zone. In this case, the IP addresses contained in the allowlist IP set are allowed to pass traffic through the firewall for the public zone. Explore the contents of your IP set: steps Use a script or a security utility to fetch your threat intelligence feeds and update allowlist accordingly in an automated fashion. Additional resources firewall-cmd(1) manual page 1.14. Prioritizing rich rules By default, rich rules are organized based on their rule action. For example, deny rules have precedence over allow rules. The priority parameter in rich rules provides administrators fine-grained control over rich rules and their execution order. When using the priority parameter, rules are sorted first by their priority values in ascending order. When more rules have the same priority , their order is determined by the rule action, and if the action is also the same, the order may be undefined. 1.14.1. How the priority parameter organizes rules into different chains You can set the priority parameter in a rich rule to any number between -32768 and 32767 , and lower numerical values have higher precedence. The firewalld service organizes rules based on their priority value into different chains: Priority lower than 0: the rule is redirected into a chain with the _pre suffix. Priority higher than 0: the rule is redirected into a chain with the _post suffix. Priority equals 0: based on the action, the rule is redirected into a chain with the _log , _deny , or _allow the action. Inside these sub-chains, firewalld sorts the rules based on their priority value. 1.14.2. Setting the priority of a rich rule The following is an example of how to create a rich rule that uses the priority parameter to log all traffic that is not allowed or denied by other rules. You can use this rule to flag unexpected traffic. Procedure Add a rich rule with a very low precedence to log all traffic that has not been matched by other rules: The command additionally limits the number of log entries to 5 per minute. Verification Display the nftables rule that the command in the step created: 1.15. Configuring firewall lockdown Local applications or services are able to change the firewall configuration if they are running as root (for example, libvirt ). 
With this feature, the administrator can lock the firewall configuration so that either no applications or only applications that are added to the lockdown allow list are able to request firewall changes. The lockdown settings default to disabled. If enabled, the user can be sure that there are no unwanted configuration changes made to the firewall by local applications or services. 1.15.1. Configuring lockdown using CLI You can enable or disable the lockdown feature using the command line. Procedure To query whether lockdown is enabled: Manage lockdown configuration by either: Enabling lockdown: Disabling lockdown: 1.15.2. Overview of lockdown allowlist configuration files The default allowlist configuration file contains the NetworkManager context and the default context of libvirt . The user ID 0 is also on the list. The allowlist configuration files are stored in the /etc/firewalld/ directory. <?xml version="1.0" encoding="utf-8"?> <whitelist> <command name="/usr/bin/python3 -s /usr/bin/firewall-config"/> <selinux context="system_u:system_r:NetworkManager_t:s0"/> <selinux context="system_u:system_r:virtd_t:s0-s0:c0.c1023"/> <user id="0"/> </whitelist> Following is an example allowlist configuration file enabling all commands for the firewall-cmd utility, for a user called user whose user ID is 815 : <?xml version="1.0" encoding="utf-8"?> <whitelist> <command name="/usr/libexec/platform-python -s /bin/firewall-cmd*"/> <selinux context="system_u:system_r:NetworkManager_t:s0"/> <user id="815"/> <user name="user"/> </whitelist> This example shows both user id and user name , but only one option is required. Python is the interpreter and is prepended to the command line. In Red Hat Enterprise Linux, all utilities are placed in the /usr/bin/ directory and the /bin/ directory is sym-linked to the /usr/bin/ directory. In other words, although the path for firewall-cmd when entered as root might resolve to /bin/firewall-cmd , /usr/bin/firewall-cmd can now be used. All new scripts should use the new location. But be aware that if scripts that run as root are written to use the /bin/firewall-cmd path, then that command path must be added in the allowlist in addition to the /usr/bin/firewall-cmd path traditionally used only for non- root users. The * at the end of the name attribute of a command means that all commands that start with this string match. If the * is not there then the absolute command including arguments must match. 1.16. Enabling traffic forwarding between different interfaces or sources within a firewalld zone Intra-zone forwarding is a firewalld feature that enables traffic forwarding between interfaces or sources within a firewalld zone. 1.16.1. The difference between intra-zone forwarding and zones with the default target set to ACCEPT With intra-zone forwarding enabled, the traffic within a single firewalld zone can flow from one interface or source to another interface or source. The zone specifies the trust level of interfaces and sources. If the trust level is the same, the traffic stays inside the same zone. Note Enabling intra-zone forwarding in the default zone of firewalld , applies only to the interfaces and sources added to the current default zone. firewalld uses different zones to manage incoming and outgoing traffic. Each zone has its own set of rules and behaviors. For example, the trusted zone, allows all forwarded traffic by default. Other zones can have different default behaviors. 
In standard zones, forwarded traffic is typically dropped by default when the target of the zone is set to default . To control how the traffic is forwarded between different interfaces or sources within a zone, make sure you understand and configure the target of the zone accordingly. 1.16.2. Using intra-zone forwarding to forward traffic between an Ethernet and Wi-Fi network You can use intra-zone forwarding to forward traffic between interfaces and sources within the same firewalld zone. This feature brings the following benefits: Seamless connectivity between wired and wireless devices (you can forward traffic between an Ethernet network connected to enp1s0 and a Wi-Fi network connected to wlp0s20 ) Support for flexible work environments Shared resources that are accessible and used by multiple devices or users within a network (such as printers, databases, network-attached storage, and others) Efficient internal networking (such as smooth communication, reduced latency, resource accessibility, and others) You can enable this functionality for individual firewalld zones. Procedure Enable packet forwarding in the kernel: Ensure that interfaces between which you want to enable intra-zone forwarding are assigned only to the internal zone: If the interface is currently assigned to a zone other than internal , reassign it: Add the enp1s0 and wlp0s20 interfaces to the internal zone: Enable intra-zone forwarding: Verification The following Verification require that the nmap-ncat package is installed on both hosts. Log in to a host that is on the same network as the enp1s0 interface of the host on which you enabled zone forwarding. Start an echo service with ncat to test connectivity: Log in to a host that is in the same network as the wlp0s20 interface. Connect to the echo server running on the host that is in the same network as the enp1s0 : Type something and press Enter . Verify the text is sent back. Additional resources firewalld.zones(5) man page on your system 1.17. Configuring firewalld by using RHEL system roles RHEL system roles is a set of contents for the Ansible automation utility. This content together with the Ansible automation utility provides a consistent configuration interface to remotely manage multiple systems at once. The rhel-system-roles package contains the rhel-system-roles.firewall RHEL system role. This role was introduced for automated configurations of the firewalld service. With the firewall RHEL system role you can configure many different firewalld parameters, for example: Zones The services for which packets should be allowed Granting, rejection, or dropping of traffic access to ports Forwarding of ports or port ranges for a zone 1.17.1. Resetting the firewalld settings by using the firewall RHEL system role Over time, updates to your firewall configuration can accumulate to the point, where they could lead to unintended security risks. With the firewall RHEL system role, you can reset the firewalld settings to their default state in an automated fashion. This way you can efficiently remove any unintentional or insecure firewall rules and simplify their management. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Reset firewalld example hosts: managed-node-01.example.com tasks: - name: Reset firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - : replaced The settings specified in the example playbook include the following: : replaced Removes all existing user-defined settings and resets the firewalld settings to defaults. If you combine the :replaced parameter with other settings, the firewall role removes all existing settings before applying new ones. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Run this command on the control node to remotely check that all firewall configuration on your managed node was reset to its default values: Additional resources /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file /usr/share/doc/rhel-system-roles/firewall/ directory 1.17.2. Forwarding incoming traffic in firewalld from one local port to a different local port by using the firewall RHEL system role You can use the firewall RHEL system role to remotely configure forwarding of incoming traffic from one local port to a different local port. For example, if you have an environment where multiple services co-exist on the same machine and need the same default port, there are likely to become port conflicts. These conflicts can disrupt services and cause a downtime. With the firewall RHEL system role, you can efficiently forward traffic to alternative ports to ensure that your services can run simultaneously without modification to their configuration. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Forward incoming traffic on port 8080 to 443 ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - forward_port: 8080/tcp;443; state: enabled runtime: true permanent: true The settings specified in the example playbook include the following: forward_port: 8080/tcp;443 Traffic coming to the local port 8080 using the TCP protocol is forwarded to the port 443. runtime: true Enables changes in the runtime configuration. The default is set to true . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the control node, run the following command to remotely check the forwarded-ports on your managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file /usr/share/doc/rhel-system-roles/firewall/ directory 1.17.3. 
Configuring a firewalld DMZ zone by using the firewall RHEL system role As a system administrator, you can use the firewall RHEL system role to configure a dmz zone on the enp1s0 interface to permit HTTPS traffic to the zone. In this way, you enable external users to access your web servers. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - zone: dmz interface: enp1s0 service: https state: enabled runtime: true permanent: true For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the control node, run the following command to remotely check the information about the dmz zone on your managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file /usr/share/doc/rhel-system-roles/firewall/ directory
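The following sequence is a minimal worked example of the zone target and predefined service procedures in sections 1.7.9 and 1.8.1 above; the zone name public and the service https are placeholders for your own zone and service, and permanent changes only take effect after a reload.
# Check the current target and configuration of the zone
firewall-cmd --zone=public --list-all
# Set the default behavior for unmatched incoming traffic
firewall-cmd --permanent --zone=public --set-target=DROP
# Allow a predefined service in the same zone
firewall-cmd --permanent --zone=public --add-service=https
# Apply the permanent configuration and verify
firewall-cmd --reload
firewall-cmd --zone=public --list-services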
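For the source-based scenario in section 1.9.4, the individual commands also appear in the command listing below; collected into one runtime sequence with verification, and using the documentation range 192.0.2.0/24 as an example source, they look as follows.
firewall-cmd --zone=internal --add-source=192.0.2.0/24
firewall-cmd --zone=internal --add-service=http
# Persist the runtime changes and confirm the active configuration
firewall-cmd --runtime-to-permanent
firewall-cmd --zone=internal --list-all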
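Similarly, the DNAT procedure in section 1.11.3 can be read as one end-to-end sequence; the zone public, the external port 80, and the internal server 198.51.100.10:8080 mirror the example values used in that section and are not prescriptive.
# Enable IP forwarding in the kernel and load the setting (run as root)
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/90-enable-IP-forwarding.conf
sysctl -p /etc/sysctl.d/90-enable-IP-forwarding.conf
# Forward incoming TCP traffic on port 80 to the internal server and apply
firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toaddr=198.51.100.10:toport=8080 --permanent
firewall-cmd --reload
# Verify the forward rule
firewall-cmd --list-forward-ports --zone=public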
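Section 1.13.1 recommends automating allowlist updates with a script but does not show one. The following is a minimal sketch under stated assumptions: a plain-text feed file /tmp/allowlist.txt with one IPv4 address per line and an existing IP set named allowlist; both names are hypothetical.
#!/bin/bash
# Add every address from the feed file to the allowlist IP set.
while read -r ip; do
    [ -n "$ip" ] && firewall-cmd --permanent --ipset=allowlist --add-entry="$ip"
done < /tmp/allowlist.txt
# Reload so the permanent entries become effective, then list them
firewall-cmd --reload
firewall-cmd --permanent --ipset=allowlist --get-entries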
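As a complement to the XML allowlist files shown in section 1.15.2, the same entries can also be managed from the command line; the command path and user ID below are taken from the example file and are illustrative only.
# Enable lockdown and allow a specific command and user ID to change the firewall
firewall-cmd --lockdown-on
firewall-cmd --permanent --add-lockdown-whitelist-command='/usr/libexec/platform-python -s /bin/firewall-cmd*'
firewall-cmd --permanent --add-lockdown-whitelist-uid=815
firewall-cmd --reload
# Review the resulting allowlist
firewall-cmd --permanent --list-lockdown-whitelist-commands
firewall-cmd --permanent --list-lockdown-whitelist-uids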
[ "<?xml version=\"1.0\" encoding=\"utf-8\"?> <zone> <short>My Zone</short> <description>Here you can describe the characteristic features of the zone.</description> <service name=\"ssh\"/> <port protocol=\"udp\" port=\"1025-65535\"/> <port protocol=\"tcp\" port=\"1025-65535\"/> </zone>", "firewall-cmd --get-zones", "firewall-cmd --add-service=ssh --zone= <your_chosen_zone> firewall-cmd --remove-service=ftp --zone= <same_chosen_zone>", "firewall-cmd --get-active-zones", "firewall-cmd --zone= <your_chosen_zone> --change-interface=< interface_name > --permanent", "firewall-cmd --zone= <your_chosen_zone> --list-all", "firewall-cmd --get-default-zone", "firewall-cmd --set-default-zone <zone_name >", "firewall-cmd --get-active-zones", "firewall-cmd --zone= zone_name --change-interface= interface_name --permanent", "nmcli connection modify profile connection.zone zone_name", "nmcli connection up profile", "nmcli -f NAME,FILENAME connection NAME FILENAME enp1s0 /etc/NetworkManager/system-connections/enp1s0.nmconnection enp7s0 /etc/sysconfig/network-scripts/ifcfg-enp7s0", "[connection] zone=internal", "ZONE=internal", "nmcli connection reload", "nmcli connection up <profile_name>", "firewall-cmd --get-zone-of-interface enp1s0 internal", "firewall-cmd --permanent --new-zone= zone-name", "firewall-cmd --reload", "firewall-cmd --get-zones --permanent", "firewall-cmd --zone= zone-name --list-all", "firewall-cmd --permanent --zone=zone-name --set-target=<default|ACCEPT|REJECT|DROP>", "firewall-cmd --list-services ssh dhcpv6-client", "firewall-cmd --get-services RH-Satellite-6 amanda-client amanda-k5-client bacula bacula-client bitcoin bitcoin-rpc bitcoin-testnet bitcoin-testnet-rpc ceph ceph-mon cfengine condor-collector ctdb dhcp dhcpv6 dhcpv6-client dns docker-registry", "firewall-cmd --add-service= <service_name>", "firewall-cmd --runtime-to-permanent", "firewall-cmd --list-all --permanent public target: default icmp-block-inversion: no interfaces: sources: services: cockpit dhcpv6-client ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks: rich rules:", "firewall-cmd --check-config success", "firewall-cmd --check-config Error: INVALID_PROTOCOL: 'public.xml': 'tcpx' not from {'tcp'|'udp'|'sctp'|'dccp'}", "firewall-cmd --get-active-zones", "firewall-cmd --zone= <zone_name> --add-service=https --permanent", "firewall-cmd --reload", "firewall-cmd --zone= <zone_name> --list-all", "firewall-cmd --zone= <zone_name> --list-services", "firewall-cmd --list-ports", "firewall-cmd --remove-port=port-number/port-type", "firewall-cmd --runtime-to-permanent", "firewall-cmd --get-active-zones", "firewall-cmd --zone= <zone_to_inspect> --list-ports", "firewall-cmd --panic-on", "firewall-cmd --panic-off", "firewall-cmd --query-panic", "firewall-cmd --add-source=<source>", "firewall-cmd --zone=zone-name --add-source=<source>", "firewall-cmd --get-zones", "firewall-cmd --zone=trusted --add-source=192.168.2.15", "firewall-cmd --runtime-to-permanent", "firewall-cmd --zone=zone-name --list-sources", "firewall-cmd --zone=zone-name --remove-source=<source>", "firewall-cmd --runtime-to-permanent", "firewall-cmd --zone=zone-name --remove-source-port=<port-name>/<tcp|udp|sctp|dccp>", "firewall-cmd --get-zones block dmz drop external home internal public trusted work", "firewall-cmd --zone=internal --add-source=192.0.2.0/24", "firewall-cmd --zone=internal --add-service=http", "firewall-cmd --runtime-to-permanent", "firewall-cmd --zone=internal --list-all internal (active) target: default 
icmp-block-inversion: no interfaces: sources: 192.0.2.0/24 services: cockpit dhcpv6-client mdns samba-client ssh http", "firewall-cmd --permanent --new-policy myOutputPolicy firewall-cmd --permanent --policy myOutputPolicy --add-ingress-zone HOST firewall-cmd --permanent --policy myOutputPolicy --add-egress-zone ANY", "firewall-cmd --permanent --policy mypolicy --set-priority -500", "firewall-cmd --permanent --new-policy podmanToAny", "firewall-cmd --permanent --policy podmanToAny --set-target REJECT firewall-cmd --permanent --policy podmanToAny --add-service dhcp firewall-cmd --permanent --policy podmanToAny --add-service dns firewall-cmd --permanent --policy podmanToAny --add-service https", "firewall-cmd --permanent --new-zone=podman", "firewall-cmd --permanent --policy podmanToHost --add-ingress-zone podman", "firewall-cmd --permanent --policy podmanToHost --add-egress-zone ANY", "systemctl restart firewalld", "firewall-cmd --info-policy podmanToAny podmanToAny (active) target: REJECT ingress-zones: podman egress-zones: ANY services: dhcp dns https", "firewall-cmd --permanent --policy mypolicy --set-target CONTINUE", "firewall-cmd --info-policy mypolicy", "firewall-cmd --permanent --new-policy <example_policy>", "firewall-cmd --permanent --policy= <example_policy> --add-ingress-zone=HOST firewall-cmd --permanent --policy= <example_policy> --add-egress-zone=ANY", "firewall-cmd --permanent --policy= <example_policy> --add-rich-rule='rule family=\"ipv4\" destination address=\" 192.0.2.1 \" forward-port port=\" 443 \" protocol=\"tcp\" to-port=\" 443 \" to-addr=\" 192.51.100.20 \"'", "firewall-cmd --reload success", "echo \"net.ipv4.conf.all.route_localnet=1\" > /etc/sysctl.d/90-enable-route-localnet.conf", "sysctl -p /etc/sysctl.d/90-enable-route-localnet.conf", "curl https://192.0.2.1:443", "sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.route_localnet = 1", "firewall-cmd --info-policy= <example_policy> example_policy (active) priority: -1 target: CONTINUE ingress-zones: HOST egress-zones: ANY services: ports: protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: rule family=\"ipv4\" destination address=\"192.0.2.1\" forward-port port=\"443\" protocol=\"tcp\" to-port=\"443\" to-addr=\"192.51.100.20\"", "firewall-cmd --zone= external --query-masquerade", "firewall-cmd --zone= external --add-masquerade", "firewall-cmd --zone= external --remove-masquerade", "net.ipv4.ip_forward=1", "sysctl -p /etc/sysctl.d/90-enable-IP-forwarding.conf", "firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toaddr=198.51.100.10:toport=8080 --permanent", "firewall-cmd --reload", "firewall-cmd --list-forward-ports --zone=public port=80:proto=tcp:toport=8080:toaddr=198.51.100.10", "cat /etc/firewalld/zones/public.xml <?xml version=\"1.0\" encoding=\"utf-8\"?> <zone> <short>Public</short> <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. 
Only selected incoming connections are accepted.</description> <service name=\"ssh\"/> <service name=\"dhcpv6-client\"/> <service name=\"cockpit\"/> <forward-port port=\"80\" protocol=\"tcp\" to-port=\"8080\" to-addr=\"198.51.100.10\"/> <forward/> </zone>", "net.ipv4.ip_forward=1", "sysctl -p /etc/sysctl.d/90-enable-IP-forwarding.conf", "firewall-cmd --zone=public --add-forward-port=port= <standard_port> :proto=tcp:toport= <non_standard_port> --permanent", "firewall-cmd --reload", "firewall-cmd --list-forward-ports port=8080:proto=tcp:toport=80:toaddr=", "cat /etc/firewalld/zones/public.xml <?xml version=\"1.0\" encoding=\"utf-8\"?> <zone> <short>Public</short> <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description> <service name=\"ssh\"/> <service name=\"dhcpv6-client\"/> <service name=\"cockpit\"/> <forward-port port=\"8080\" protocol=\"tcp\" to-port=\"80\"/> <forward/> </zone>", "firewall-cmd --get-icmptypes address-unreachable bad-header beyond-scope communication-prohibited destination-unreachable echo-reply echo-request failed-policy fragmentation-needed host-precedence-violation host-prohibited host-redirect host-unknown host-unreachable", "firewall-cmd --zone= <target-zone> --remove-icmp-block= echo-request --permanent", "firewall-cmd --zone= <target-zone> --add-icmp-block= redirect --permanent", "firewall-cmd --reload", "firewall-cmd --list-icmp-blocks redirect", "firewall-cmd --permanent --new-ipset= allowlist --type=hash:ip", "firewall-cmd --permanent --ipset= allowlist --add-entry= 198.51.100.10", "firewall-cmd --permanent --zone=public --add-source=ipset: allowlist", "firewall-cmd --reload", "firewall-cmd --get-ipsets allowlist", "firewall-cmd --list-all public (active) target: default icmp-block-inversion: no interfaces: enp0s1 sources: ipset:allowlist services: cockpit dhcpv6-client ssh ports: protocols:", "cat /etc/firewalld/ipsets/allowlist.xml <?xml version=\"1.0\" encoding=\"utf-8\"?> <ipset type=\"hash:ip\"> <entry>198.51.100.10</entry> </ipset>", "firewall-cmd --add-rich-rule='rule priority=32767 log prefix=\"UNEXPECTED: \" limit value=\"5/m\"'", "nft list chain inet firewalld filter_IN_public_post table inet firewalld { chain filter_IN_public_post { log prefix \"UNEXPECTED: \" limit rate 5/minute } }", "firewall-cmd --query-lockdown", "firewall-cmd --lockdown-on", "firewall-cmd --lockdown-off", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <command name=\"/usr/bin/python3 -s /usr/bin/firewall-config\"/> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <selinux context=\"system_u:system_r:virtd_t:s0-s0:c0.c1023\"/> <user id=\"0\"/> </whitelist>", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <command name=\"/usr/libexec/platform-python -s /bin/firewall-cmd*\"/> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <user id=\"815\"/> <user name=\"user\"/> </whitelist>", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "firewall-cmd --get-active-zones", "firewall-cmd --zone=internal --change-interface= interface_name --permanent", "firewall-cmd --zone=internal --add-interface=enp1s0 --add-interface=wlp0s20", "firewall-cmd --zone=internal --add-forward", "ncat -e /usr/bin/cat -l 12345", "ncat <other_host> 12345", "--- - name: Reset firewalld example hosts: managed-node-01.example.com tasks: - name: Reset firewalld ansible.builtin.include_role: 
name: rhel-system-roles.firewall vars: firewall: - previous: replaced", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-all-zones'", "--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Forward incoming traffic on port 8080 to 443 ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - forward_port: 8080/tcp;443; state: enabled runtime: true permanent: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-forward-ports' managed-node-01.example.com | CHANGED | rc=0 >> port=8080:proto=tcp:toport=443:toaddr=", "--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - zone: dmz interface: enp1s0 service: https state: enabled runtime: true permanent: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --zone=dmz --list-all' managed-node-01.example.com | CHANGED | rc=0 >> dmz (active) target: default icmp-block-inversion: no interfaces: enp1s0 sources: services: https ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks:" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_firewalls_and_packet_filters/using-and-configuring-firewalld_firewall-packet-filters
Chapter 13. Remotely accessing an X11-based application
Chapter 13. Remotely accessing an X11-based application You can remotely launch a graphical X11-based application on a RHEL server and use it from the remote client using X11 forwarding. Note This procedure works for legacy X11 applications, that is, applications that support the X11 display protocol. 13.1. Enabling X11 forwarding on the server Configure a RHEL server so that remote clients can use graphical applications on the server over SSH. Procedure Install basic X11 packages: Note Your applications might rely on additional graphical libraries. Enable the X11Forwarding option in the /etc/ssh/sshd_config configuration file: The option is disabled by default in RHEL. Restart the sshd service: 13.2. Launching an application remotely using X11 forwarding Access a graphical application on a RHEL server from a remote client using SSH. Prerequisites X11 forwarding over SSH is enabled on the server. For details, see Section 13.1, "Enabling X11 forwarding on the server" . Ensure that an X11 display server is running on your system: On RHEL, X11 is available by default in the graphical interface. On Microsoft Windows, install an X11 server such as Xming. On macOS, install the XQuartz X11 server. You have configured and restarted an OpenSSH server. For details, see Configuring and starting an OpenSSH server . Procedure Log in to the server using SSH: Confirm that a server key is valid by checking its fingerprint. Note If you plan to log in to the server on a regular basis, add the user's public key to the server using the ssh-copy-id command. Continue connecting by typing yes . When prompted, type the server password. Launch the application from the command line: Tip To skip the intermediate terminal session, use the following command: 13.3. Additional resources Remotely accessing an individual application on Wayland . Key differences between the Wayland and X11 protocol .
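The following is an illustrative pass over the server-side and client-side steps above, run on the server and the client respectively; xclock stands in for any X11 application and is only an example binary.
# On the RHEL server: install the X11 packages and confirm forwarding is enabled
dnf install xorg-x11-xauth xorg-x11-fonts-\* xorg-x11-utils dbus-x11
grep -i '^X11Forwarding' /etc/ssh/sshd_config    # should print: X11Forwarding yes
systemctl restart sshd.service
# On the client: connect with X11 forwarding and launch the application directly
ssh -X -Y user@remote-server xclock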
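If you connect to the same server regularly, the client-side flags can instead be made persistent in the OpenSSH client configuration. This is a sketch, and the host alias remote-server is an example; the entries are the configuration equivalents of the -X, -Y, and -C options.
# Append a per-host entry to the client configuration
cat >> ~/.ssh/config <<'EOF'
Host remote-server
    ForwardX11 yes
    ForwardX11Trusted yes
    Compression yes
EOF
# Subsequent connections pick up the options automatically
ssh remote-server xclock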
[ "dnf install xorg-x11-xauth xorg-x11-fonts-\\* xorg-x11-utils dbus-x11", "X11Forwarding yes", "systemctl restart sshd.service", "[local-user]USD ssh -X -Y remote-server The authenticity of host 'remote-server (192.168.122.120)' can't be established. ECDSA key fingerprint is SHA256: uYwFlgtP/2YABMHKv5BtN7nHK9SHRL4hdYxAPJVK/kY . Are you sure you want to continue connecting (yes/no/[fingerprint])?", "Warning: Permanently added ' remote-server ' (ECDSA) to the list of known hosts.", "local-user's password: [local-user ~]USD", "[remote-user]USD application-binary", "ssh user@server -X -Y -C binary_application" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/getting_started_with_the_gnome_desktop_environment/remotely-accessing-an-individual-application-x11_getting-started-with-the-gnome-desktop-environment
5.119. jakarta-commons-httpclient
5.119. jakarta-commons-httpclient 5.119.1. RHSA-2013:0270 - Moderate: jakarta-commons-httpclient security update Updated jakarta-commons-httpclient packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The Jakarta Commons HttpClient component can be used to build HTTP-aware client applications (such as web browsers and web service clients). Security Fix CVE-2012-5783 The Jakarta Commons HttpClient component did not verify that the server hostname matched the domain name in the subject's Common Name (CN) or subjectAltName field in X.509 certificates. This could allow a man-in-the-middle attacker to spoof an SSL server if they had a certificate that was valid for any domain name. All users of jakarta-commons-httpclient are advised to upgrade to these updated packages, which correct this issue. Applications using the Jakarta Commons HttpClient component must be restarted for this update to take effect.
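A minimal sketch of applying the update on an affected system, assuming the standard yum workflow on Red Hat Enterprise Linux 5 or 6; the exact package version delivered by the erratum is not shown here.
# Update the affected package and confirm the installed version
yum update jakarta-commons-httpclient
rpm -q jakarta-commons-httpclient
# Restart any applications that embed the Jakarta Commons HttpClient component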
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/jakarta-commons-httpclient
Chapter 1. Red Hat Enterprise Linux AI 1.4 release notes
Chapter 1. Red Hat Enterprise Linux AI 1.4 release notes RHEL AI provides organizations with a process to develop enterprise applications on open source Large Language Models (LLMs). 1.1. About this release Red Hat Enterprise Linux AI version 1.4 includes various features for Large Language Model (LLM) fine-tuning on the Red Hat and IBM produced Granite model. A customized model using the RHEL AI workflow consisted of the following: Install and launch a RHEL 9.4 instance with the InstructLab tooling. Host information in a Git repository and interact with a Git-based taxonomy of the knowledge you want a model to learn. Run the end-to-end workflow of synthetic data generation (SDG), multi-phase training, and benchmark evaluation. Serve and chat with the newly fine-tuned LLM. 1.2. Features and Enhancements Red Hat Enterprise Linux AI version 1.4 includes various features for Large Language Model (LLM) fine-tuning. 1.2.1. Installing Red Hat Enterprise Linux AI is installable as a bootable image. This image contains various tooling for interacting with RHEL AI. The image includes: Red Hat Enterprise Linux 9.4, Python version 3.11 and InstructLab tools for model fine-tuning. For more information about installing Red Hat Enterprise Linux AI, see Installation overview and the "Installation feature tracker" 1.2.2. Building your RHEL AI environment After installing Red Hat Enterprise Linux AI, you can set up your RHEL AI environment with the InstructLab tools. 1.2.2.1. Initializing InstructLab You can initialize and set up your RHEL AI environment by running the ilab config init command. This command creates the necessary configurations for interacting with RHEL AI and fine-tuning models. It also creates proper directories for your data files. For more information about initializing InstructLab, see the Initialize InstructLab documentation. 1.2.2.2. Downloading Large Language Models You can download various Large Language Models (LLMs) provided by Red Hat to your RHEL AI machine or instance. You can download these models from a Red Hat registry after creating and logging in to your Red Hat registry account. For more information about the supported RHEL AI LLMs, see the Downloading models documentation and the "Large Language Models (LLMs) technology preview status". 1.2.2.2.1. Uploading models to an S3 bucket Red Hat Enterprise Linux AI version 1.4 now allows you to upload models and checkpoints to an AWS S3 bucket. For more information on model uploading, see the Uploading your models to a registry 1.2.2.3. Serving and chatting with models Red Hat Enterprise Linux AI version 1.4 allows you to run a vLLM inference server on various LLMs. The vLLM tool is a memory-efficient inference and serving engine library for LLMs that is included in the RHEL AI image. For more information about serving and chatting with models, see Serving and chatting with the models documentation. 1.2.3. Creating skills and knowledge YAML files On Red Hat Enterprise Linux AI, you can customize your taxonomy tree using custom YAML files so a model can learn domain-specific information. You host your knowledge data in a Git repository and fine-tune a model with that data. For detailed documentation on how to create a knowledge markdown and YAML file, see Customizing your taxonomy tree . 1.2.4. Generating a custom LLM using RHEL AI You can use Red Hat Enterprise Linux AI to customize a granite starter LLM with your domain specific skills and knowledge. 
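The commands below sketch the end-to-end workflow summarized in section 1.1 and detailed in sections 1.2.2 through 1.2.4. They are illustrative only: the angle-bracket values are placeholders, and the exact flags for downloading, multi-phase training, and evaluation depend on your release and hardware profile, so treat the linked documentation as authoritative.
# Initialize the RHEL AI environment and fetch a model from the Red Hat registry
ilab config init
ilab model download --repository <model-repository> --release <release-tag>
# Serve the model and chat with it
ilab model serve --model-path <path-to-downloaded-model>
ilab model chat
# Generate synthetic data from your taxonomy, then train and evaluate
ilab data generate
ilab model train        # multi-phase training takes additional flags; see the training documentation
ilab model evaluate --benchmark mt_bench_branch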
RHEL AI includes the LAB enhanced method of Synthetic Data Generation (SDG) and multi-phase training. 1.2.4.1. Synthetic Data Generation (SDG) Red Hat Enterprise Linux AI includes the LAB enhanced method of synthetic data generation (SDG). You can use the qna.yaml files with your own knowledge data to create hundreds of artifical datasets in the SDG process. For more information about running the SDG process, see Generating a new dataset with Synthetic data generation (SDG) . 1.2.4.1.1. Running Synthetic Data Generation (SDG) in the background RHEL AI version 1.4 introduces process management for SDG. This allows you to run SDG in the background of the same terminal you are using. You can interact with and attach to these processes while its running. 1.2.4.2. Training a model with your data Red Hat Enterprise Linux AI includes the LAB enhanced method of multi-phase training: A fine-tuning strategy where datasets are trained and evaluated in multiple phases to create the best possible model. For more details on multi-phase training, see Training your data on the model . 1.2.4.3. Benchmark evaluation Red Hat Enterprise Linux AI includes the ability to run benchmark evaluations on the newly trained models. On your trained model, you can evaluate how well the model knows the knowledge or skills you added with the MMLU_BRANCH or MT_BENCH_BRANCH benchmark. For more details on benchmark evaluation, see Evaluating your new model . 1.3. Red Hat Enterprise Linux AI feature tracker Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . In the following tables, features are marked with the following statuses: Not Available Technology Preview General Availability Deprecated Removed 1.3.1. Installation feature tracker Table 1.1. Installation features Feature 1.1 1.2 1.3 1.4 Installing on bare metal Generally available Generally available Generally available Generally available Installing on AWS Generally available Generally available Generally available Generally available Installing on IBM Cloud Generally available Generally available Generally available Generally available Installing on GCP Not available Technology preview Generally available Generally available Installing on Azure Not available Generally available Generally available Generally available 1.3.2. Platform support feature tracker Table 1.2. End-to-end InstructLab workflow Feature 1.1 1.2 1.3 1.4 Bare metal Generally available Generally available Generally available Generally available AWS Generally available Generally available Generally available Generally available IBM Cloud Not available Generally available Generally available Generally available Google Cloud Platform Not available Technology preview Generally available Generally available Azure Not available Generally available Generally available Generally available Table 1.3. Inference serving LLMs Feature 1.1 1.2 1.3 1.4 Bare metal Generally available Generally available Generally available Generally available AWS Generally available Generally available Generally available Generally available IBM Cloud Generally available Generally available Generally available Generally available Google Cloud Platform (GCP) Not available Technology preview Generally available Generally available Azure Not available Generally available Generally available Generally available Table 1.4. 
Table 1.4. Cloud Marketplace support (status in versions 1.1 / 1.2 / 1.3 / 1.4)
AWS: Not available / Not available / Generally available / Generally available
Azure: Not available / Not available / Generally available / Generally available
1.4. Large Language Models feature status 1.4.1. RHEL AI version 1.4 hardware vendor LLM support
Table 1.5. LLM support on hardware vendors (status on NVIDIA)
granite-7b-starter: Deprecated
granite-7b-redhat-lab: Deprecated
granite-8b-starter: Generally available
granite-8b-redhat-lab: Generally available
granite-3.1-8b-starter-v1: Generally available
granite-3.1-8b-lab-v1: Generally available
granite-8b-code-instruct: Technology preview
granite-8b-code-base: Technology preview
mixtral-8x7B-instruct-v0-1: Generally available
prometheus-8x7b-v2.0: Generally available
1.5. Known Issues AMD-smi is not usable upon installation After installing Red Hat Enterprise Linux AI using the ISO image or upgrading to a system using the bootc-amd-rhel9 container, the amd-smi tool does not work by default. To enable amd-smi , add the proper ROCm version to your user PATH variable with the following command: USD export PATH="USDPATH:/opt/rocm-6.1.2/bin" Incorrect auto-detection on some NVIDIA A100 systems RHEL AI sometimes auto-detects the incorrect system profile on machines with A100 accelerators. You can select the correct profile by re-initializing and passing the correct system profile. USD ilab config init --profile <path-to-system-profile> Upgrading to a z-stream on AMD bare metal and NVIDIA AWS systems On RHEL AI, there is an issue in the upgrade process if you are upgrading to an AMD bare metal or NVIDIA AWS system. To successfully update to a RHEL AI z-stream on these systems, run the following command. Bare metal with AMD accelerators USD sudo bootc switch registry.redhat.io/rhelai1/bootc-amd-rhel9:1.3 AWS with NVIDIA accelerators USD sudo bootc switch registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.3 Fabric manager does not always start with NVIDIA accelerators After installing Red Hat Enterprise Linux AI on NVIDIA systems, you may see the following error when serving or training a model. INFO 2024-11-26 22:18:04,244 instructlab.model.serve_backend:56: Using model '/var/home/cloud-user/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_29117' with -1 gpu-lay ers and 4096 max context size. INFO 2024-11-26 22:18:04,244 instructlab.model.serve_backend:88: '--gpus' flag used alongside '--tensor-parallel-size' in the vllm_args section of the config file. Using value of the --gpus File "/opt/app-root/lib64/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 105, in build_async_engine_client async with build_async_engine_client_from_engine_args( File "/usr/lib64/python3.11/contextlib.py", line 210, in __aenter__ return await anext(self.gen) ^^^^^^^^^^^^^^^^^^^^^ File "/opt/app-root/lib64/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 192, in build_async_engine_client_from_engine_args raise RuntimeError( RuntimeError: Engine process failed to start To resolve this issue, you need to run the following commands: USD sudo systemctl stop nvidia-persistenced.service USD sudo systemctl start nvidia-fabricmanager.service USD sudo systemctl start nvidia-persistenced.service UI AMD technology preview installations Red Hat Enterprise Linux AI version 1.4 currently does not support graphical-based installation with the Technology Preview AMD ISOs. Ensure that the text parameter in your kickstart file is configured for non-interactive installs.
You can also pass inst.text in your shell during interactive installation to avoid an install time crash. SDG can fail on 4xL40s For SDG to run on 4xL40s, you need to run SDG with the --num-cpus flag set to a value of 4 . USD ilab data generate --num-cpus 4 MMLU and MMLU_BRANCH on the granite-8b-starter-v1 model When evaluating a model built from the granite-8b-starter-v1 LLM, there might be an error where vLLM does not start when running the MMLU and MMLU_BRANCH benchmarks. If vLLM does not start, add the following parameter to the serve section of your config.yaml file: serve: vllm: vllm_args: [--dtype bfloat16] Kdump over NFS Red Hat Enterprise Linux AI version 1.4 does not support kdump over NFS without additional configuration. To use this feature, run the following commands: mkdir -p /var/lib/kdump/dracut.conf.d echo "dracutmodules=''" > /var/lib/kdump/dracut.conf.d/99-kdump.conf echo "omit_dracutmodules=''" >> /var/lib/kdump/dracut.conf.d/99-kdump.conf echo "dracut_args --confdir /var/lib/kdump/dracut.conf.d --install /usr/lib/passwd --install /usr/lib/group" >> /etc/kdump.conf systemctl restart kdump 1.6. Asynchronous z-stream updates Security, bug fix, and enhancement updates for RHEL AI 1.4 are released as asynchronous z-stream updates. This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous z-stream releases of RHEL AI 1.4. Versioned asynchronous releases, for example with the form RHEL AI 1.4.z, will be detailed in subsections. 1.6.1. Red Hat Enterprise Linux AI 1.4.1 bug fixes Issued: 25 February 2025 Red Hat Enterprise Linux AI release 1.4.1 is now available. This release includes bug fixes and product enhancements. 1.6.1.1. Upgrade To update your RHEL AI system to the most recent z-stream version, you must be logged in to the Red Hat registry and run the following command: USD sudo bootc upgrade --apply For more information on upgrading your RHEL AI system, see the Updating Red Hat Enterprise Linux AI documentation. 1.6.2. Red Hat Enterprise Linux AI 1.4.2 features and bug fixes Issued: 4 March 2025 Red Hat Enterprise Linux AI release 1.4.2 is now available. This release includes bug fixes and product enhancements. 1.6.2.1. Features RHEL AI version 1.4.2, and further 1.4.z releases, now support Intel Gaudi3 accelerators. You can download the Red Hat Enterprise Linux AI image on the Download Red Hat Enterprise Linux AI page and deploy RHEL AI on a machine with Gaudi3 accelerators. 1.6.2.2. Known Issues Inference fails on Intel Gaudi3 for multi-accelerators The 1.4.2 Intel Gaudi3 image is missing a parameter in the InstructLab wrapper. This causes the inference to fail on machines with Gaudi3 accelerators. You can run the following procedure to resolve this issue. Copy the /usr/bin/ilab file to your home directory and edit the ilab file. USD cp /usr/bin/ilab <path-to-home-directory> USD vim ~/ilab Your file should look like the following, which now includes the "--env" "PT_HPU_ENABLE_LAZY_COLLECTIVES=true" parameter.
PODMAN_COMMAND=("podman" "run" "--rm" "-it" "--device" "/dev/infiniband" "--device" "/dev/accel" "--security-opt" "label=disable" "--net" "host" "--shm-size" "10G" "--pids-limit" "-1" "-v" "USDHOME:USDHOME" "USD{ADDITIONAL_MOUNT_OPTIONS[@]}" "--env" "HF_TOKEN" "--env" "HOME" "--env" "NCCL_DEBUG" "--env" "VLLM_LOGGING_LEVEL" "--env" "PT_HPU_ENABLE_LAZY_COLLECTIVES=true" "--entrypoint" "USDENTRYPOINT" "USD{IMAGE_NAME}") RHEL AI serving does not allow for more than 16 concurrent requests on Gaudi accelerators On RHEL AI version 1.4.2 for machines with Gaudi accelerators, you cannot run more than 16 concurrent requests when running the ilab model serve command. 1.6.2.3. Upgrade To update your RHEL AI system to the most recent z-stream version, you must be logged in to the Red Hat registry and run the following command: USD sudo bootc upgrade --apply For more information on upgrading your RHEL AI system, see the Updating Red Hat Enterprise Linux AI documentation.
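For orientation, the z-stream upgrade described in these sections can be run and checked with a short shell sequence such as the following; the registry login step and the bootc status check are assumed conveniences rather than steps taken from this release note:
podman login registry.redhat.io    # authenticate to the Red Hat registry first
sudo bootc upgrade --apply         # pull and apply the latest 1.4.z image, then reboot
sudo bootc status                  # after the reboot, confirm which image is booted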
[ "export PATH=\"USDPATH:/opt/rocm-6.1.2/bin\"", "ilab config init --profile <path-to-system-profile>", "sudo bootc switch registry.redhat.io/rhelai1/bootc-amd-rhel9:1.3", "sudo bootc switch registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.3", "INFO 2024-11-26 22:18:04,244 instructlab.model.serve_backend:56: Using model '/var/home/cloud-user/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_29117' with -1 gpu-lay ers and 4096 max context size. INFO 2024-11-26 22:18:04,244 instructlab.model.serve_backend:88: '--gpus' flag used alongside '--tensor-parallel-size' in the vllm_args section of the config file. Using value of the --gpus File \"/opt/app-root/lib64/python3.11/site-packages/vllm/entrypoints/openai/api_server.py\", line 105, in build_async_engine_client async with build_async_engine_client_from_engine_args( File \"/usr/lib64/python3.11/contextlib.py\", line 210, in __aenter__ return await anext(self.gen) ^^^^^^^^^^^^^^^^^^^^^ File \"/opt/app-root/lib64/python3.11/site-packages/vllm/entrypoints/openai/api_server.py\", line 192, in build_async_engine_client_from_engine_args raise RuntimeError( RuntimeError: Engine process failed to start", "sudo systemctl stop nvidia-persistenced.service sudo systemctl start nvidia-fabricmanager.service sudo systemctl start nvidia-persistenced.service", "ilab data generate --num-cpus 4", "serve: vllm: vllm_args: [--dtype bfloat16]", "mkdir -p /var/lib/kdump/dracut.conf.d echo \"dracutmodules=''\" > /var/lib/kdump/dracut.conf.d/99-kdump.conf echo \"omit_dracutmodules=''\" >> /var/lib/kdump/dracut.conf.d/99-kdump.conf echo \"dracut_args --confdir /var/lib/kdump/dracut.conf.d --install /usr/lib/passwd --install /usr/lib/group\" >> /etc/kdump.conf systemctl restart kdump", "sudo bootc upgrade --apply", "cp /usr/bin/ilab <path-to-home-directory> vim ~/ilab", "PODMAN_COMMAND=(\"podman\" \"run\" \"--rm\" \"-it\" \"--device\" \"/dev/infiniband\" \"--device\" \"/dev/accel\" \"--security-opt\" \"label=disable\" \"--net\" \"host\" \"--shm-size\" \"10G\" \"--pids-limit\" \"-1\" \"-v\" \"USDHOME:USDHOME\" \"USD{ADDITIONAL_MOUNT_OPTIONS[@]}\" \"--env\" \"HF_TOKEN\" \"--env\" \"HOME\" \"--env\" \"NCCL_DEBUG\" \"--env\" \"VLLM_LOGGING_LEVEL\" \"--env\" \"PT_HPU_ENABLE_LAZY_COLLECTIVES=true\" \"--entrypoint\" \"USDENTRYPOINT\" \"USD{IMAGE_NAME}\")", "sudo bootc upgrade --apply" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/release_notes/rhelai_release_notes
4.3. Converting a virtual machine
4.3. Converting a virtual machine virt-v2v converts virtual machines from a foreign hypervisor to run on Red Hat Enterprise Virtualization. It automatically packages the virtual machine images and metadata, then uploads them to a Red Hat Enterprise Virtualization export storage domain. For more information on export storage domains, see Section 4.2, "Attaching an export storage domain" . virt-v2v always makes a copy of storage before conversion. Figure 4.2. Converting a virtual machine From the export storage domain, the virtual machine images can be imported into Red Hat Enterprise Virtualization using the Administration Portal. Figure 4.3. Importing a virtual machine 4.3.1. Preparing to convert a virtual machine Before a virtual machine can be converted, ensure that the following steps are completed: Procedure 4.2. Preparing to convert a virtual machine Create an NFS export domain. virt-v2v can transfer the converted virtual machine directly to an NFS export storage domain. From the export storage domain, the virtual machine can be imported into a Red Hat Enterprise Virtualization data center. The storage domain must be mountable by the machine running virt-v2v . When exporting to a Red Hat Enterprise Virtualization export domain, virt-v2v must run as root. Note The export storage domain is accessed as an NFS share. By default, Red Hat Enterprise Linux 6 uses NFSv4, which does not require further configuration. However, for NFSv2 and NFSv3 clients, the rpcbind and nfslock services must be running on the host used to run virt-v2v . The network must also be configured to allow NFS access to the storage server. For more details refer to the Red Hat Enterprise Linux Storage Administration Guide . Specify network mappings in virt-v2v.conf . This step is optional , and is not required for most use cases. If your virtual machine has multiple network interfaces, /etc/virt-v2v.conf must be edited to specify the network mapping for all interfaces. You can specify an alternative virt-v2v.conf file with the -f parameter. If you are converting to a virtual machine for output to both libvirt and Red Hat Enterprise Virtualization, separate virt-v2v.conf files should be used for each conversion. This is because a converted bridge will require different configuration depending on the output type (libvirt or Red Hat Enterprise Virtualization). If your virtual machine only has a single network interface, it is simpler to use the --network or --bridge parameters, rather than modifying virt-v2v.conf . Create a profile for the conversion in virt-v2v.conf . This step is optional . Profiles specify a conversion method, storage location, output format and allocation policy. When a profile is defined, it can be called using --profile rather than individually providing the -o , -os , -of and -oa parameters. See virt-v2v.conf (5) for details. 4.3.1.1. Preparing to convert a virtual machine running Linux The following is required when converting virtual machines which run Linux, regardless of which hypervisor they are being converted from. Procedure 4.3. Preparing to convert a virtual machine running Linux Obtain the software. As part of the conversion process, virt-v2v may install a new kernel and drivers on the virtual machine. If the virtual machine being converted is registered to Red Hat Subscription Management (RHSM), the required packages will be automatically downloaded. 
For environments where Red Hat Subscription Management (RHSM) is not available, the virt-v2v.conf file references a list of RPMs used for this purpose. The RPMs relevant to your virtual machine must be downloaded manually from the Red Hat Customer Portal and made available in the directory specified by the path-root configuration element, which by default is /var/lib/virt-v2v/software/ . virt-v2v will display an error similar to Example 3.1, "Missing Package error" if the software it depends upon for a particular conversion is not available. To obtain the relevant RPMs for your environment, repeat these steps for each missing package: Log in to the Red Hat Customer Portal: https://access.redhat.com/ . In the Red Hat Customer Portal, select Downloads > Product Downloads > Red Hat Enterprise Linux . Select the desired Product Variant , Version , and select the Packages tab. In the Filter field, type the package name exactly matching the one shown in the error message. For the example shown in Example 3.1, "Missing Package error" , the first package is kernel-2.6.32-128.el6.x86_64 . A list of packages displays. Select the package name identical to the one in the error message. This opens the details page, which contains a detailed description of the package. Alternatively, to download the most recent version of a package, select Download Latest next to the desired package. Save the downloaded package to the appropriate directory in /var/lib/virt-v2v/software . For Red Hat Enterprise Linux 6, the directory is /var/lib/virt-v2v/software/rhel/6 . 4.3.1.2. Preparing to convert a virtual machine running Windows Important virt-v2v does not support conversion of the Windows Recovery Console. If a virtual machine has a recovery console installed and VirtIO was enabled during conversion, attempting to boot the recovery console will result in a stop error. Windows XP x86 does not support the Windows Recovery Console on VirtIO systems, so there is no resolution to this. However, on Windows XP AMD64 and Windows 2003 (x86 and AMD64), the recovery console can be reinstalled after conversion. The re-installation procedure is the same as the initial installation procedure. It is not necessary to remove the recovery console first. Following re-installation, the recovery console will work as intended. Important When converting a virtual machine running Windows with multiple drives, for output to Red Hat Enterprise Virtualization, it is possible in certain circumstances that additional drives will not be displayed by default. Red Hat Enterprise Virtualization will always add a CD-ROM device to a converted virtual machine. If the virtual machine did not have a CD-ROM device before conversion, the new CD-ROM device may be assigned a drive letter which clashes with an existing drive on the virtual machine. This will render the existing device inaccessible. The occluded disk device can still be accessed by manually assigning it a new drive letter. It is also possible to maintain drive letter assignment by manually changing the drive letter assigned to the new CD-ROM device, then rebooting the virtual machine. The following is required when converting virtual machines running Windows, regardless of which hypervisor they are being converted from. The conversion procedure depends on post-processing by the Red Hat Enterprise Virtualization Manager for completion. See Section 7.2.2, "Configuration changes for Windows virtual machines" for details of the process. Procedure 4.4.
Preparing to convert a virtual machine running Windows Before a virtual machine running Windows can be converted, ensure that the following steps are completed. Install the libguestfs-winsupport package on the host running virt-v2v . This package provides support for NTFS, which is used by many Windows systems. The libguestfs-winsupport package is provided by the RHEL V2VWIN (v. 6 for 64-bit x86_64) channel. Ensure your system is subscribed to this channel, then run the following command as root: If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail. An error message similar to Example 4.1, "Error message when converting a Windows virtual machine without libguestfs-winsupport installed" will be shown: Example 4.1. Error message when converting a Windows virtual machine without libguestfs-winsupport installed Install the virtio-win package on the host running virt-v2v . This package provides paravirtualized block and network drivers for Windows guests. The virtio-win package is provided by the RHEL Server Supplementary (v. 6 64-bit x86_64) channel. Ensure your system is subscribed to this channel, then run the following command as root: If you attempt to convert a virtual machine running Windows without the virtio-win package installed, the conversion will fail. An error message similar to Example 3.3, "Error message when converting a Windows virtual machine without virtio-win installed" will be shown. Upload the guest tools ISO to the ISO Storage Domain. Note that the guest tools ISO is not required for the conversion process to succeed. However, it is recommended for all Windows virtual machines running on Red Hat Enterprise Virtualization. The Red Hat Enterprise Virtualization Manager installs Red Hat's Windows drivers on the guest virtual machine using the guest tools ISO after the virtual machines have been converted. See Section 7.2.2, "Configuration changes for Windows virtual machines" for details. Locate and upload the guest tools ISO as follows: Locate the guest tools ISO. The guest tools ISO is distributed using the Red Hat Customer Portal as rhev-guest-tools-iso.rpm , an RPM file installed on the Red Hat Enterprise Virtualization Manager. After installing the Red Hat Enterprise Virtualization Manager, the guest tools ISO can be found at /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso . Upload the guest tools ISO. Upload the guest tools ISO to the ISO Storage Domain using the ISO uploader. Refer to the Red Hat Enterprise Virtualization Administration Guide for more information on uploading ISO files, and installing guest agents and drivers. 4.3.1.3. Preparing to convert a local Xen virtual machine The following is required when converting virtual machines on a host which used to run Xen, but has been updated to run KVM. It is not required when converting a Xen virtual machine imported directly from a running libvirt/Xen instance. Procedure 4.5. Preparing to convert a local Xen virtual machine Obtain the XML for the virtual machine. virt-v2v uses a libvirt domain description to determine the current configuration of the virtual machine, including the location of its storage. Before starting the conversion, obtain this from the host running the virtual machine with the following command: This will require booting into a Xen kernel to obtain the XML, as libvirt needs to connect to a running Xen hypervisor to obtain its metadata. 
The conversion process is optimized for KVM, so obtaining domain data while running a Xen kernel, then performing the conversion using a KVM kernel will be more efficient than running the conversion on a Xen kernel.
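Although the exact invocation depends on your input and output choices, a conversion of a local Xen guest to a Red Hat Enterprise Virtualization export storage domain typically combines the domain XML obtained above with the output options described earlier in this section; the NFS path, network name, and output format below are placeholders rather than values from this guide:
# Convert the guest described by guest_name.xml and upload it to an NFS export storage domain
virt-v2v -i libvirtxml -o rhev -os nfs.example.com:/export_domain \
  -of qcow2 -oa sparse --network rhevm guest_name.xml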
[ "install libguestfs-winsupport", "No operating system could be detected inside this disk image. This may be because the file is not a disk image, or is not a virtual machine image, or because the OS type is not understood by virt-inspector. If you feel this is an error, please file a bug report including as much information about the disk image as possible.", "install virtio-win", "virsh dumpxml guest_name > guest_name.xml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-RHEV_Converting_a_Virtual_Machine
Chapter 7. Understanding OpenShift Container Platform development
Chapter 7. Understanding OpenShift Container Platform development To fully leverage the capability of containers when developing and running enterprise-quality applications, ensure your environment is supported by tools that allow containers to be: Created as discrete microservices that can be connected to other containerized, and non-containerized, services. For example, you might want to join your application with a database or attach a monitoring application to it. Resilient, so if a server crashes or needs to go down for maintenance or to be decommissioned, containers can start on another machine. Automated to pick up code changes automatically and then start and deploy new versions of themselves. Scaled up, or replicated, to have more instances serving clients as demand increases and then spun down to fewer instances as demand declines. Run in different ways, depending on the type of application. For example, one application might run once a month to produce a report and then exit. Another application might need to run constantly and be highly available to clients. Managed so you can watch the state of your application and react when something goes wrong. Containers' widespread acceptance, and the resulting requirements for tools and methods to make them enterprise-ready, resulted in many options for them. The rest of this section explains options for assets you can create when you build and deploy containerized Kubernetes applications in OpenShift Container Platform. It also describes which approaches you might use for different kinds of applications and development requirements. 7.1. About developing containerized applications You can approach application development with containers in many ways, and different approaches might be more appropriate for different situations. To illustrate some of this variety, the series of approaches that is presented starts with developing a single container and ultimately deploys that container as a mission-critical application for a large enterprise. These approaches show different tools, formats, and methods that you can employ with containerized application development. This topic describes: Building a simple container and storing it in a registry Creating a Kubernetes manifest and saving it to a Git repository Making an Operator to share your application with others 7.2. Building a simple container You have an idea for an application and you want to containerize it. First, you require a tool for building a container, like buildah or docker, and a file that describes what goes in your container, which is typically a Dockerfile . Next, you require a location to push the resulting container image so you can pull it to run anywhere you want it to run. This location is a container registry. Some examples of each of these components are installed by default on most Linux operating systems, except for the Dockerfile, which you provide yourself. The following diagram displays the process of building and pushing an image: Figure 7.1. Create a simple containerized application and push it to a registry If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating system, the process of creating a containerized application requires the following steps: Install container build tools: RHEL contains a set of tools that includes podman, buildah, and skopeo that you use to build and manage containers. Create a Dockerfile to combine base image and software: Information about building your container goes into a file that is named Dockerfile .
In that file, you identify the base image you build from, the software packages you install, and the software you copy into the container. You also identify parameter values like network ports that you expose outside the container and volumes that you mount inside the container. Put your Dockerfile and the software you want to containerize in a directory on your RHEL system. Run buildah or docker build: Run the buildah build-using-dockerfile or the docker build command to pull your chosen base image to the local system and create a container image that is stored locally. You can also build container images without a Dockerfile by using buildah. Tag and push to a registry: Add a tag to your new container image that identifies the location of the registry in which you want to store and share your container. Then push that image to the registry by running the podman push or docker push command. Pull and run the image: From any system that has a container client tool, such as podman or docker, run a command that identifies your new image. For example, run the podman run <image_name> or docker run <image_name> command. Here <image_name> is the name of your new container image, which resembles quay.io/myrepo/myapp:latest . The registry might require credentials to push and pull images. For more details on the process of building container images, pushing them to registries, and running them, see Custom image builds with Buildah . 7.2.1. Container build tool options Building and managing containers with buildah, podman, and skopeo results in industry standard container images that include features specifically tuned for deploying containers in OpenShift Container Platform or other Kubernetes environments. These tools are daemonless and can run without root privileges, requiring less overhead to run them. Important Support for Docker Container Engine as a container runtime is deprecated in Kubernetes 1.20 and will be removed in a future release. However, Docker-produced images will continue to work in your cluster with all runtimes, including CRI-O. For more information, see the Kubernetes blog announcement . When you ultimately run your containers in OpenShift Container Platform, you use the CRI-O container engine. CRI-O runs on every worker and control plane machine in an OpenShift Container Platform cluster, but CRI-O is not yet supported as a standalone runtime outside of OpenShift Container Platform. 7.2.2. Base image options The base image you choose to build your application on contains a set of software that resembles a Linux system to your application. When you build your own image, your software is placed into that file system and sees that file system as though it were looking at its operating system. Choosing this base image has major impact on how secure, efficient and upgradeable your container is in the future. Red Hat provides a new set of base images referred to as Red Hat Universal Base Images (UBI). These images are based on Red Hat Enterprise Linux and are similar to base images that Red Hat has offered in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As a result, you can build your application on UBI images without having to worry about how they are shared or the need to create different images for different environments. These UBI images have standard, init, and minimal versions. 
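For example, a minimal build on a UBI base image might look like the following shell session; the package, registry path, and image name are placeholders, and this sketch assumes the httpd package is available from the UBI repositories on your system:
# Write a minimal Dockerfile that starts from a freely redistributable UBI base image
cat > Dockerfile <<'EOF'
FROM registry.access.redhat.com/ubi9/ubi
RUN dnf install -y httpd && dnf clean all
COPY index.html /var/www/html/
EXPOSE 80
CMD ["httpd", "-D", "FOREGROUND"]
EOF
echo "Hello from my container" > index.html

# Build the image locally, then push it to a registry (log in first if it requires credentials) and run it
buildah build-using-dockerfile -t quay.io/myrepo/myapp:latest .
podman push quay.io/myrepo/myapp:latest
podman run quay.io/myrepo/myapp:latest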
You can also use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images are available for you to use directly from the OpenShift Container Platform web UI. In the Developer perspective, navigate to the +Add view and in the Developer Catalog tile, view all of the available services in the Developer Catalog. Figure 7.2. Choose S2I base images for apps that need specific runtimes 7.2.3. Registry options Container registries are where you store container images so you can share them with others and make them available to the platform where they ultimately run. You can select large, public container registries that offer free accounts or a premium version that offer more storage and special features. You can also install your own registry that can be exclusive to your organization or selectively shared with others. To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red Hat Registry is represented by two locations: registry.access.redhat.com , which is unauthenticated and deprecated, and registry.redhat.io , which requires authentication. You can learn about the Red Hat and partner images in the Red Hat Registry from the Container images section of the Red Hat Ecosystem Catalog . Besides listing Red Hat container images, it also shows extensive information about the contents and quality of those images, including health scores that are based on applied security updates. Large, public registries include Docker Hub and Quay.io . The Quay.io registry is owned and managed by Red Hat. Many of the components used in OpenShift Container Platform are stored in Quay.io, including container images and the Operators that are used to deploy OpenShift Container Platform itself. Quay.io also offers the means of storing other types of content, including Helm charts. If you want your own, private container registry, OpenShift Container Platform itself includes a private container registry that is installed with OpenShift Container Platform and runs on its cluster. Red Hat also offers a private version of the Quay.io registry called Red Hat Quay . Red Hat Quay includes geo replication, Git build triggers, Clair image scanning, and many other features. All of the registries mentioned here can require credentials to download images from those registries. Some of those credentials are presented on a cluster-wide basis from OpenShift Container Platform, while other credentials can be assigned to individuals. 7.3. Creating a Kubernetes manifest for OpenShift Container Platform While the container image is the basic building block for a containerized application, more information is required to manage and deploy that application in a Kubernetes environment such as OpenShift Container Platform. The typical steps after you create an image are to: Understand the different resources you work with in Kubernetes manifests Make some decisions about what kind of an application you are running Gather supporting components Create a manifest and store that manifest in a Git repository so you can store it in a source versioning system, audit it, track it, promote and deploy it to the environment, roll it back to earlier versions, if necessary, and share it with others 7.3.1. 
About Kubernetes pods and services While the container image is the basic unit with docker, the basic units that Kubernetes works with are called pods . Pods represent the next step in building out an application. A pod can contain one or more than one container. The key is that the pod is the single unit that you deploy, scale, and manage. Scalability and namespaces are probably the main items to consider when determining what goes in a pod. For ease of deployment, you might want to deploy a container in a pod and include its own logging and monitoring container in the pod. Later, when you run the pod and need to scale up an additional instance, those other containers are scaled up with it. For namespaces, containers in a pod share the same network interfaces, shared storage volumes, and resource limitations, such as memory and CPU, which makes it easier to manage the contents of the pod as a single unit. Containers in a pod can also communicate with each other by using standard inter-process communications, such as System V semaphores or POSIX shared memory. While individual pods represent a scalable unit in Kubernetes, a service provides a means of grouping together a set of pods to create a complete, stable application that can complete tasks such as load balancing. A service is also more permanent than a pod because the service remains available from the same IP address until you delete it. When the service is in use, it is requested by name and the OpenShift Container Platform cluster resolves that name into the IP addresses and ports where you can reach the pods that compose the service. By their nature, containerized applications are separated from the operating systems where they run and, by extension, their users. Part of your Kubernetes manifest describes how to expose the application to internal and external networks by defining network policies that allow fine-grained control over communication with your containerized applications. To connect incoming requests for HTTP, HTTPS, and other services from outside your cluster to services inside your cluster, you can use an Ingress resource. If your container requires on-disk storage instead of database storage, which might be provided through a service, you can add volumes to your manifests to make that storage available to your pods. You can configure the manifests to create persistent volumes (PVs) or dynamically create volumes that are added to your Pod definitions. After you define a group of pods that compose your application, you can define those pods in Deployment and DeploymentConfig objects. 7.3.2. Application types Next, consider how your application type influences how to run it. Kubernetes defines different types of workloads that are appropriate for different kinds of applications. To determine the appropriate workload for your application, consider if the application is: Meant to run to completion and be done. An example is an application that starts up to produce a report and exits when the report is complete. The application might then not run again for a month. Suitable OpenShift Container Platform objects for these types of applications include Job and CronJob objects. Expected to run continuously. For long-running applications, you can write a deployment . Required to be highly available. If your application requires high availability, then you want to size your deployment to have more than one instance. A Deployment or DeploymentConfig object can incorporate a replica set for that type of application.
With replica sets, pods run across multiple nodes to make sure the application is always available, even if a worker goes down. Need to run on every node. Some types of Kubernetes applications are intended to run in the cluster itself on every master or worker node. DNS and monitoring applications are examples of applications that need to run continuously on every node. You can run this type of application as a daemon set . You can also run a daemon set on a subset of nodes, based on node labels. Require life-cycle management. When you want to hand off your application so that others can use it, consider creating an Operator . Operators let you build in intelligence, so they can handle tasks like backups and upgrades automatically. Coupled with the Operator Lifecycle Manager (OLM), cluster managers can expose Operators to selected namespaces so that users in the cluster can run them. Have identity or numbering requirements. An application might have identity requirements or numbering requirements. For example, you might be required to run exactly three instances of the application and to name the instances 0 , 1 , and 2 . A stateful set is suitable for this application. Stateful sets are most useful for applications that require independent storage, such as databases and ZooKeeper clusters. 7.3.3. Available supporting components The application you write might need supporting components, like a database or a logging component. To fulfill that need, you might be able to obtain the required component from the following Catalogs that are available in the OpenShift Container Platform web console: OperatorHub, which is available in each OpenShift Container Platform 4.17 cluster. The OperatorHub makes Operators available from Red Hat, certified Red Hat partners, and community members to the cluster operator. The cluster operator can make those Operators available in all or selected namespaces in the cluster, so developers can launch them and configure them with their applications. Templates, which are useful for a one-off type of application, where the lifecycle of a component is not important after it is installed. A template provides an easy way to get started developing a Kubernetes application with minimal overhead. A template can be a list of resource definitions, which could be Deployment , Service , Route , or other objects. If you want to change names or resources, you can set these values as parameters in the template. You can configure the supporting Operators and templates to the specific needs of your development team and then make them available in the namespaces in which your developers work. Many people add shared templates to the openshift namespace because it is accessible from all other namespaces. 7.3.4. Applying the manifest Kubernetes manifests let you create a more complete picture of the components that make up your Kubernetes applications. You write these manifests as YAML files and deploy them by applying them to the cluster, for example, by running the oc apply command. 7.3.5. Next steps At this point, consider ways to automate your container development process. Ideally, you have some sort of CI pipeline that builds the images and pushes them to a registry. In particular, a GitOps pipeline integrates your container development with the Git repositories that you use to store the software that is required to build your applications. The workflow to this point might look like: Day 1: You write some YAML.
You then run the oc apply command to apply that YAML to the cluster and test that it works. Day 2: You put your YAML container configuration file into your own Git repository. From there, people who want to install that app, or help you improve it, can pull down the YAML and apply it to their cluster to run the app. Day 3: Consider writing an Operator for your application. 7.4. Develop for Operators Packaging and deploying your application as an Operator might be preferred if you make your application available for others to run. As noted earlier, Operators add a lifecycle component to your application that acknowledges that the job of running an application is not complete as soon as it is installed. When you create an application as an Operator, you can build in your own knowledge of how to run and maintain the application. You can build in features for upgrading the application, backing it up, scaling it, or keeping track of its state. If you configure the application correctly, maintenance tasks, like updating the Operator, can happen automatically and invisibly to the Operator's users. An example of a useful Operator is one that is set up to automatically back up data at particular times. Having an Operator manage an application's backup at set times can save a system administrator from remembering to do it. Any application maintenance that has traditionally been completed manually, like backing up data or rotating certificates, can be completed automatically with an Operator.
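To ground the "Day 1" step from the workflow above, a minimal sketch of that YAML might be a Deployment paired with a Service, applied directly with oc apply; all of the names, the image reference, and the ports below are placeholders you would replace with your own:
oc apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: quay.io/myrepo/myapp:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
EOF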
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/architecture/understanding-development
Chapter 3. Installing the Red Hat JBoss Web Server 6.0
Chapter 3. Installing the Red Hat JBoss Web Server 6.0 You can install the JBoss Web Server 6.0 on Red Hat Enterprise Linux or Microsoft Windows. For more information see the following sections of the installation guide: Installing JBoss Web Server on Red Hat Enterprise Linux from archive files Installing JBoss Web Server on Red Hat Enterprise Linux from RPM packages Installing JBoss Web Server on Microsoft Windows
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_1_release_notes/installing_the_red_hat_jboss_web_server_6_0
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are available on Red Hat Enterprise Linux and Microsoft Windows platforms and shipped as a JDK and a JRE in the Red Hat Ecosystem Catalog.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.432/pr01
Chapter 21. OpenShift
Chapter 21. OpenShift The namespace for openshift-logging specific metadata Data type group 21.1. openshift.labels Labels added by the Cluster Log Forwarder configuration Data type group
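For context, a sketch of how such labels typically originate: a ClusterLogForwarder pipeline can carry a labels map, and those key-value pairs then appear under openshift.labels in the forwarded records. The pipeline name, label keys, and values below are illustrative only:
oc apply -f - <<'EOF'
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    outputRefs:
    - default
    labels:
      environment: production
      team: payments
EOF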
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/openshift
Chapter 3. Making Rules for Issuing Certificates (Certificate Profiles)
Chapter 3. Making Rules for Issuing Certificates (Certificate Profiles) The Certificate System provides a customizable framework to apply policies for incoming certificate requests and to control the input request types and output certificate types; these are called certificate profiles . Certificate profiles set the required information for certificate enrollment forms in the Certificate Manager end-entities page. This chapter describes how to configure certificate profiles. 3.1. About Certificate Profiles A certificate profile defines everything associated with issuing a particular type of certificate, including the authentication method, the authorization method, the default certificate content, constraints for the values of the content, and the contents of the input and output for the certificate profile. Enrollment and renewal requests are submitted to a certificate profile and are then subject to the defaults and constraints set in that certificate profile. These constraints are in place whether the request is submitted through the input form associated with the certificate profile or through other means. The certificate that is issued from a certificate profile request contains the content required by the defaults with the information required by the default parameters. The constraints provide rules for what content is allowed in the certificate. For details about using and customizing certificate profiles, see Section 3.2, "Setting up Certificate Profiles" . The Certificate System contains a set of default profiles. While the default profiles are created to satisfy most deployments, every deployment can add its own new certificate profiles or modify the existing profiles. Authentication. An authentication method can be specified in every certificate profile. Authorization. An authorization method can be specified in every certificate profile. Profile inputs. Profile inputs are parameters and values that are submitted to the CA when a certificate is requested. Profile inputs include public keys for the certificate request and the certificate subject name requested by the end entity for the certificate. Profile outputs. Profile outputs are parameters and values that specify the format in which to provide the certificate to the end entity. Profile outputs are CMC responses which contain a PKCS#7 certificate chain, when the request was successful. Certificate content. Each certificate defines content information, such as the name of the entity to which it is assigned (the subject name), its signing algorithm, and its validity period. What is included in a certificate is defined in the X.509 standard. With version 3 of the X.509 standard, certificates can also contain extensions. For more information about certificate extensions, see Section B.3, "Standard X.509 v3 Certificate Extension Reference" . All of the information about a certificate profile is defined in the set entry of the profile policy in the profile's configuration file. When multiple certificates are expected to be requested at the same time, multiple set entries can be defined in the profile policy to satisfy the needs of each certificate. Each policy set consists of a number of policy rules and each policy rule describes a field in the certificate content. A policy rule can include the following parts: Profile defaults. These are predefined parameters and allowed values for information contained within the certificate.
Profile defaults include the validity period of the certificate, and what certificate extensions appear for each type of certificate issued. Profile constraints. Constraints set rules or policies for issuing certificates. Among others, profile constraints include rules to require the certificate subject name to have at least one CN component, to set the validity of a certificate to a maximum of 360 days, to define the allowed grace period for renewal, or to require that the subjectaltname extension is always set to true . 3.1.1. The Enrollment Profile The parameters for each profile defining the inputs, outputs, and policy sets are listed in more detail in Table 11.1. Profile Configuration File Parameters in the Red Hat Certificate System Planning, Installation and Deployment Guide. A profile usually contains inputs, policy sets, and outputs, as illustrated in the caCMCUserCert profile in Example 3.1, "Example caCMCUserCert Profile" . Example 3.1. Example caCMCUserCert Profile The first part of a certificate profile is the description. This shows the name, long description, whether it is enabled, and who enabled it. Note The missing auth.instance_id= entry in this profile means that with this profile, authentication is not needed to submit the enrollment request. However, manual approval by an authorized CA agent will be required to get an issuance. Next, the profile lists all of the required inputs for the profile: For the caCMCUserCert profile, this defines the certificate request type, which is CMC. Next, the profile must define the output, meaning the format of the final certificate. The only one available is certOutputImpl , which results in a CMC response being returned to the requestor in case of success. The last, and largest, block of configuration is the policy set for the profile. Policy sets list all of the settings that are applied to the final certificate, like its validity period, its renewal settings, and the actions the certificate can be used for. The policyset.list parameter identifies the block name of the policies that apply to one certificate; the policyset.userCertSet.list lists the individual policies to apply. For example, the sixth policy populates the Key Usage Extension automatically in the certificate, according to the configuration in the policy. It sets the defaults and requires the certificate to use those defaults by setting the constraints: 3.1.2. Certificate Extensions: Defaults and Constraints An extension configures additional information to include in a certificate or rules about how the certificate can be used. These extensions can either be specified in the certificate request or taken from the profile default definition and then enforced by the constraints. A certificate extension is added or identified in a profile by adding the default which corresponds to the extension and sets default values, if the certificate extension is not set in the request. For example, the Basic Constraints Extension identifies whether a certificate is a CA signing certificate, the maximum number of subordinate CAs that can be configured under the CA, and whether the extension is critical (required): The extension can also set required values for the certificate request called constraints . If the contents of a request do not match the set constraints, then the request is rejected. The constraints generally correspond to the extension default, though not always.
For example: Note To allow user supplied extensions to be embedded in the certificate requests and ignore the system-defined default in the profile, the profile needs to contain the User Supplied Extension Default, which is described in Section B.1.32, "User Supplied Extension Default" . 3.1.3. Inputs and Outputs Inputs set information that must be submitted to receive a certificate. This can be requester information, a specific format of certificate request, or organizational information. The outputs configured in the profile define the format of the certificate that is issued. In Certificate System, profiles are accessed by users through enrollment forms that are accessed through the end-entities pages. (Even clients, such as TPS, submit enrollment requests through these forms.) The inputs, then, correspond to fields in the enrollment forms. The outputs correspond to the information contained on the certificate retrieval pages.
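As a further illustration of the default-plus-constraint pattern, a validity policy rule typically pairs a default validity range with a constraint that caps it. The policy number, class names, and parameter values below follow the general pattern of the stock profiles but should be treated as an illustrative sketch rather than a copy of any shipped profile:
policyset.userCertSet.2.default.class_id=validityDefaultImpl
policyset.userCertSet.2.default.name=Validity Default
policyset.userCertSet.2.default.params.range=180
policyset.userCertSet.2.default.params.startTime=0
policyset.userCertSet.2.constraint.class_id=validityConstraintImpl
policyset.userCertSet.2.constraint.name=Validity Constraint
policyset.userCertSet.2.constraint.params.range=365
policyset.userCertSet.2.constraint.params.notBeforeCheck=false
policyset.userCertSet.2.constraint.params.notAfterCheck=false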
[ "desc=This certificate profile is for enrolling user certificates by using the CMC certificate request with CMC Signature authentication. visible=true enable=true enableBy=admin name=Signed CMC-Authenticated User Certificate Enrollment", "input.list=i1 input.i1.class_id=cmcCertReqInputImp", "output.list=o1 output.o1.class_id=certOutputImpl", "policyset.list=userCertSet policyset.userCertSet.list=1,10,2,3,4,5,6,7,8,9 policyset.userCertSet.6.constraint.class_id=keyUsageExtConstraintImpl policyset.userCertSet.6.constraint.name=Key Usage Extension Constraint policyset.userCertSet.6.constraint.params.keyUsageCritical=true policyset.userCertSet.6.constraint.params.keyUsageDigitalSignature=true policyset.userCertSet.6.constraint.params.keyUsageNonRepudiation=true policyset.userCertSet.6.constraint.params.keyUsageDataEncipherment=false policyset.userCertSet.6.constraint.params.keyUsageKeyEncipherment=true policyset.userCertSet.6.constraint.params.keyUsageKeyAgreement=false policyset.userCertSet.6.constraint.params.keyUsageKeyCertSign=false policyset.userCertSet.6.constraint.params.keyUsageCrlSign=false policyset.userCertSet.6.constraint.params.keyUsageEncipherOnly=false policyset.userCertSet.6.constraint.params.keyUsageDecipherOnly=false policyset.userCertSet.6.default.class_id=keyUsageExtDefaultImpl policyset.userCertSet.6.default.name=Key Usage Default policyset.userCertSet.6.default.params.keyUsageCritical=true policyset.userCertSet.6.default.params.keyUsageDigitalSignature=true policyset.userCertSet.6.default.params.keyUsageNonRepudiation=true policyset.userCertSet.6.default.params.keyUsageDataEncipherment=false policyset.userCertSet.6.default.params.keyUsageKeyEncipherment=true policyset.userCertSet.6.default.params.keyUsageKeyAgreement=false policyset.userCertSet.6.default.params.keyUsageKeyCertSign=false policyset.userCertSet.6.default.params.keyUsageCrlSign=false policyset.userCertSet.6.default.params.keyUsageEncipherOnly=false policyset.userCertSet.6.default.params.keyUsageDecipherOnly=false", "policyset.caCertSet.5.default.name=Basic Constraints Extension Default policyset.caCertSet.5.default.params.basicConstraintsCritical=true policyset.caCertSet.5.default.params.basicConstraintsIsCA=true policyset.caCertSet.5.default.params.basicConstraintsPathLen=-1", "policyset.caCertSet.5.constraint.class_id=basicConstraintsExtConstraintImpl policyset.caCertSet.5.constraint.name=Basic Constraint Extension Constraint policyset.caCertSet.5.constraint.params.basicConstraintsCritical=true policyset.caCertSet.5.constraint.params.basicConstraintsIsCA=true policyset.caCertSet.5.constraint.params.basicConstraintsMinPathLen=-1 policyset.caCertSet.5.constraint.params.basicConstraintsMaxPathLen=-1" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/certificate_profiles
Chapter 11. Managing container images
Chapter 11. Managing container images With Satellite, you can import container images from various sources and distribute them to external containers by using content views. For information about containers for Red Hat Enterprise Linux Atomic Host 7, see Getting Started with Containers in Red Hat Enterprise Linux Atomic Host 7 . For information about containers for Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux 8 Building, running, and managing containers . For information about containers for Red Hat Enterprise Linux 9, see Red Hat Enterprise Linux 9 Building, running, and managing containers . 11.1. Importing container images You can import container image repositories from the Red Hat Registry or from other image registries. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure with repository discovery In the Satellite web UI, navigate to Content > Products and click Repo Discovery . From the Repository Type list, select Container Images . In the Registry to Discover field, enter the URL of the registry to import images from. In the Registry Username field, enter the name that corresponds with your user name for the container image registry. In the Registry Password field, enter the password that corresponds with the user name that you enter. In the Registry Search Parameter field, enter any search criteria that you want to use to filter your search, and then click Discover . Optional: To further refine the Discovered Repository list, in the Filter field, enter any additional search criteria that you want to use. From the Discovered Repository list, select any repositories that you want to import, and then click Create Selected . Optional: To change the download policy for this container repository to on demand , see Section 4.11, "Changing the download policy for a repository" . Optional: If you want to create a product, from the Product list, select New Product . In the Name field, enter a product name. Optional: In the Repository Name and Repository Label columns, you can edit the repository names and labels. Click Run Repository Creation . When repository creation is complete, you can click each new repository to view more information. Optional: To filter the content you import to a repository, click a repository, and then navigate to Limit Sync Tags . Click to edit, and add any tags to limit the content that synchronizes to Satellite. In the Satellite web UI, navigate to Content > Products and select the name of your product. Select the new repositories and then click Sync Now to start the synchronization process. Procedure with creating a repository manually In the Satellite web UI, navigate to Content > Products . Click the name of the required product. Click New repository . From the Type list, select docker . Enter the details for the repository, and click Save . Select the new repository, and click Sync Now . Next steps To view the progress of the synchronization, navigate to Content > Sync Status and expand the repository tree. When the synchronization completes, you can click Container Image Manifests to list the available manifests. From the list, you can also remove any manifests that you do not require. CLI procedure Create the custom Red Hat Container Catalog product: Create the repository for the container images: Synchronize the repository: Additional resources For more information about creating a product and repository manually, see Chapter 4, Importing content . 11.2.
Managing container name patterns When you use Satellite to create and manage your containers, as the container moves through content view versions and different stages of the Satellite lifecycle environment, the container name changes at each stage. For example, if you synchronize a container image with the name ssh from an upstream repository, when you add it to a Satellite product and organization and then publish as part of a content view, the container image can have the following name: my_organization_production-custom_spin-my_product-custom_ssh . This can create problems when you want to pull a container image because container registries can contain only one instance of a container name. To avoid problems with Satellite naming conventions, you can set a registry name pattern to override the default name to ensure that your container name is clear for future use. Limitations If you use a registry name pattern to manage container naming conventions, because registry naming patterns must generate globally unique names, you might experience naming conflict problems. For example: If you set the repository.docker_upstream_name registry name pattern, you cannot publish or promote content views with container content with identical repository names to the Production lifecycle. If you set the lifecycle_environment.name registry name pattern, this can prevent the creation of a second container repository with the identical name. You must proceed with caution when defining registry naming patterns for your containers. Procedure To manage container naming with a registry name pattern, complete the following steps: In the Satellite web UI, navigate to Content > Lifecycle > Lifecycle Environments . Create a lifecycle environment or select an existing lifecycle environment to edit. In the Container Image Registry area, click the edit icon to the right of the Registry Name Pattern area. Use the list of variables and examples to determine which registry name pattern you require. In the Registry Name Pattern field, enter the registry name pattern that you want to use. For example, to use the repository.docker_upstream_name : Click Save . 11.3. Managing container registry authentication You can manage the authentication settings for accessing container images from Satellite. By default, users must authenticate to access container images in Satellite. You can specify whether you want users to authenticate to access container images in Satellite in a lifecycle environment. For example, you might want to permit users to access container images from the Production lifecycle without any authentication requirement and restrict access to the Development and QA environments to authenticated users. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Lifecycle Environments . Select the lifecycle environment that you want to manage authentication for. To permit unauthenticated access to the containers in this lifecycle environment, select the Unauthenticated Pull checkbox. To restrict unauthenticated access, clear the Unauthenticated Pull checkbox. Click Save . 11.4. Configuring Podman and Docker to trust the certificate authority Podman uses two paths to locate the CA file, namely, /etc/containers/certs.d/ and /etc/docker/certs.d/ . Copy the root CA file to one of these locations, with the exact path determined by the server hostname, and naming the file ca.crt . In the following examples, replace hostname.example.com with satellite.example.com or capsule.example.com , depending on your use case.
You might first need to create the relevant location using: or For podman, use: Alternatively, if you are using Docker, copy the root CA file to the equivalent Docker directory: You no longer need to use the --tls-verify=false option when logging in to the registry: 11.5. Using container registries You can use Podman and Docker to fetch content from container registries and push the content to the Satellite container registry. The Satellite registry follows the Open Containers Initiative (OCI) specification, so you can push content to Satellite by using the same methods that apply to other registries. For more information about OCI, see Open Container Initiative Distribution Specification . Prerequisites To push content to Satellite, ensure your Satellite account has the edit_products permission. Ensure that a product exists before pushing a repository. For more information, see Section 4.4, "Creating a custom product" . To pull content from Satellite, ensure that your Satellite account has the view_lifecycle_environments , view_products , and view_content_views permissions, unless the lifecycle environment allows unauthenticated pull. Container registries on Capsules On Capsules with content, the Container Gateway Capsule plugin acts as the container registry. It caches authentication information from Katello and proxies incoming requests to Pulp. The Container Gateway is available by default on Capsules with content. Considerations for pushing content to the Satellite container registry You can only push content to the Satellite Server itself. If you need pushed content on Capsule Servers as well, use Capsule syncing. The pushed container registry name must contain only lowercase characters. Unless pushed repositories are published in a content view version, they do not follow the registry name pattern. For more information, see Section 11.2, "Managing container name patterns" . This is to ensure that users can push and pull from the same path. Users are required to push and pull from the same path. If you use the label-based schema, pull using labels. If you use the ID-based schema, pull using IDs. Procedure Logging in to the container registry: Listing container images: Pulling container images: Pushing container images to the Satellite container registry: To indicate which organization, product, and repository the container image belongs to, include the organization and product in the container registry name. You can address the container destination by using one of the following schemas: After the content push has completed, a repository is created in Satellite.
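As an end-to-end illustration of the label-based push schema, the following sketch tags a local image and pushes it to the Satellite container registry. The image name localhost/my-image and the lowercase organization, product, and repository labels are placeholder assumptions, not values taken from a real deployment:

# Log in to the Satellite container registry, then tag the local image with its label-based destination path.
podman login satellite.example.com
podman tag localhost/my-image:latest satellite.example.com/my_organization_label/my_product_label/my_repository:latest

# Push the image; after the push completes, the repository is created under the existing product in Satellite.
podman push satellite.example.com/my_organization_label/my_product_label/my_repository:latest

Remember that the destination path must contain only lowercase characters and that you must pull the image over the same path that you pushed it to.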
[ "hammer product create --description \" My_Description \" --name \"Red Hat Container Catalog\" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan \"", "hammer repository create --content-type \"docker\" --docker-upstream-name \"rhel7\" --name \"RHEL7\" --organization \" My_Organization \" --product \"Red Hat Container Catalog\" --url \"http://registry.access.redhat.com/\"", "hammer repository synchronize --name \"RHEL7\" --organization \" My_Organization \" --product \"Red Hat Container Catalog\"", "<%= repository.docker_upstream_name %>", "mkdir -p /etc/containers/certs.d/hostname.example.com", "mkdir -p /etc/docker/certs.d/hostname.example.com", "cp rootCA.pem /etc/containers/certs.d/hostname.example.com/ca.crt", "cp rootCA.pem /etc/docker/certs.d/hostname.example.com/ca.crt", "podman login hostname.example.com Username: admin Password: Login Succeeded!", "podman login satellite.example.com", "podman search satellite.example.com/", "podman pull satellite.example.com/my-image:<optional_tag>", "podman push My_Container_Image_Hash satellite.example.com / My_Organization_Label / My_Product_Label / My_Repository_Name [:_My_Tag_] podman push My_Container_Image_Hash satellite.example.com /id/ My_Organization_ID / My_Product_ID / My_Repository_Name [:_My_Tag_]" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/managing_container_images_content-management
Part II. Clair on Red Hat Quay
Part II. Clair on Red Hat Quay This guide contains procedures for running Clair on Red Hat Quay in both standalone and OpenShift Container Platform Operator deployments.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/vulnerability_reporting_with_clair_on_red_hat_quay/testing-clair-with-quay
Chapter 12. Configuring CUPS to store logs in files instead of the systemd journal
Chapter 12. Configuring CUPS to store logs in files instead of the systemd journal By default, CUPS stores log messages in the systemd journal. Alternatively, you can configure CUPS to store log messages in files. Prerequisites CUPS is installed . Procedure Edit the /etc/cups/cups-files.conf file, and set the AccessLog , ErrorLog , and PageLog parameters to the paths where you want to store these log files: If you configure CUPS to store the logs in a directory other than /var/log/cups/ , set the cupsd_log_t SELinux context on this directory, for example: Restart the cups service: Verification Display the log files: If you configured CUPS to store the logs in a directory other than /var/log/cups/ , verify that the SELinux context on the log directory is cupsd_log_t :
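In addition, as a quick check that is not part of the official procedure, you can compare the file-based logs with the systemd journal after printing a job; the five-minute window below is an arbitrary assumption:

# The file-based access log records the new request ...
tail -n 5 /var/log/cups/access_log

# ... while no new cupsd messages should appear in the journal for the same window.
journalctl -u cups --since "5 minutes ago"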
[ "AccessLog /var/log/cups/access_log ErrorLog /var/log/cups/error_log PageLog /var/log/cups/page_log", "semanage fcontext -a -t cupsd_log_t \" /var/log/printing (/.*)?\" restorecon -Rv /var/log/printing/", "systemctl restart cups", "cat /var/log/cups/access_log cat /var/log/cups/error_log cat /var/log/cups/page_log", "ls -ldZ /var/log/printing/ drwxr-xr-x. 2 lp sys unconfined_u:object_r: cupsd_log_t :s0 6 Jun 20 15:55 /var/log/printing/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_a_cups_printing_server/configuring-cups-to-store-logs-in-files-instead-of-the-systemd-journal_configuring-printing
Chapter 132. KafkaBridgeStatus schema reference
Chapter 132. KafkaBridgeStatus schema reference Used in: KafkaBridge Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. url string The URL at which external client applications can access the Kafka Bridge. labelSelector string Label selector for pods providing this resource. replicas integer The current number of pods being used to provide this resource.
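For illustration only, the status subresource reported by the operator might look similar to the following sketch when you run oc get kafkabridge my-bridge -o yaml ; the resource name, namespace, timestamps, and field values are assumptions rather than output from a real cluster:

status:
  conditions:
    - lastTransitionTime: "2024-01-01T12:00:00Z"
      status: "True"
      type: Ready
  observedGeneration: 2
  url: http://my-bridge-bridge-service.kafka.svc:8080
  labelSelector: strimzi.io/cluster=my-bridge,strimzi.io/name=my-bridge-bridge,strimzi.io/kind=KafkaBridge
  replicas: 1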
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkabridgestatus-reference
Chapter 74. Kubernetes Replication Controller
Chapter 74. Kubernetes Replication Controller Since Camel 2.17 Both producer and consumer are supported The Kubernetes Replication Controller component is one of the Kubernetes Components which provides a producer to execute Kubernetes Replication controller operations and a consumer to consume events related to Replication Controller objects. 74.1. Dependencies When using kubernetes-replication-controllers with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 74.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 74.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 74.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 74.3. Component Options The Kubernetes Replication Controller component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 74.4. Endpoint Options The Kubernetes Replication Controller endpoint is configured using URI syntax: with the following path and query parameters: 74.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 74.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 74.5. Message Headers The Kubernetes Replication Controller component supports 8 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesReplicationControllersLabels (producer) Constant: KUBERNETES_REPLICATION_CONTROLLERS_LABELS The replication controller labels. Map CamelKubernetesReplicationControllerName (producer) Constant: KUBERNETES_REPLICATION_CONTROLLER_NAME The replication controller name. String CamelKubernetesReplicationControllerSpec (producer) Constant: KUBERNETES_REPLICATION_CONTROLLER_SPEC The spec for a replication controller. ReplicationControllerSpec CamelKubernetesReplicationControllerReplicas (producer) Constant: KUBERNETES_REPLICATION_CONTROLLER_REPLICAS The number of replicas for a replication controller during the Scale operation. Integer CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 74.6. Supported producer operation listReplicationControllers listReplicationControllersByLabels getReplicationController createReplicationController updateReplicationController deleteReplicationController scaleReplicationController 74.7. Kubernetes Replication Controllers Producer Examples listReplicationControllers: this operation list the RCs on a kubernetes cluster. from("direct:list"). toF("kubernetes-replication-controllers:///?kubernetesClient=#kubernetesClient&operation=listReplicationControllers"). to("mock:result"); This operation returns a List of RCs from your cluster. listReplicationControllersByLabels: this operation list the RCs by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_REPLICATION_CONTROLLERS_LABELS, labels); } }); toF("kubernetes-replication-controllers:///?kubernetesClient=#kubernetesClient&operation=listReplicationControllersByLabels"). 
to("mock:result"); This operation returns a List of RCs from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 74.8. Kubernetes Replication Controllers Consumer Example fromF("kubernetes-replication-controllers://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); ReplicationController rc = exchange.getIn().getBody(ReplicationController.class); log.info("Got event with configmap name: " + rc.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events on the namespace default for the rc test. 74.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. 
This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
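Finally, as a supplement to the producer examples in Section 74.7, the following sketch shows one possible way to invoke the scaleReplicationController operation by setting the message headers listed in Section 74.5. It is not taken from the official examples; the namespace default , the controller name test , and the replica count are assumptions:

from("direct:scale")
    .process(exchange -> {
        // Identify the target replication controller; both names are assumed values for this sketch.
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default");
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_REPLICATION_CONTROLLER_NAME, "test");
        // Desired number of replicas for the Scale operation (Integer).
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_REPLICATION_CONTROLLER_REPLICAS, 2);
    })
    .toF("kubernetes-replication-controllers:///?kubernetesClient=#kubernetesClient&operation=scaleReplicationController")
    .to("mock:result");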
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-replication-controllers:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-replication-controllers:///?kubernetesClient=#kubernetesClient&operation=listReplicationControllers\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_REPLICATION_CONTROLLERS_LABELS, labels); } }); toF(\"kubernetes-replication-controllers:///?kubernetesClient=#kubernetesClient&operation=listReplicationControllersByLabels\"). to(\"mock:result\");", "fromF(\"kubernetes-replication-controllers://%s?oauthToken=%s&namespace=default&resourceName=test\", host, authToken).process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); ReplicationController rc = exchange.getIn().getBody(ReplicationController.class); log.info(\"Got event with configmap name: \" + rc.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-replication-controller-component-starter
25.7. Changing the Password or Public Key of a Vault
25.7. Changing the Password or Public Key of a Vault The owner of a vault can change the vault's password. Depending on whether the vault is symmetric or asymmetric, the command differs: To change the password of a symmetric vault: To change the public key of an asymmetric vault:
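If you do not already have a replacement key pair for the asymmetric vault, you can generate one with OpenSSL before running ipa vault-mod . The following is a minimal sketch with assumed file names and a 2048-bit key size:

# Generate a new private key, which the vault owner keeps in order to retrieve secrets later ...
openssl genrsa -out new_private_key.pem 2048

# ... and derive the matching public key to pass to ipa vault-mod with --public-key-file.
openssl rsa -in new_private_key.pem -pubout -out new_public_key.pem

Store the new private key securely; it is required to retrieve secrets from the vault after the public key is changed.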
[ "ipa vault-mod --change-password Vault name: example_symmetric_vault Password: old_password New password: new_password Enter New password again to verify: new_password ----------------------- Modified vault \" example_symmetric_vault \" ----------------------- Vault name: example_symmetric_vault Type: symmetric Salt: dT+M+4ik/ltgnpstmCG1sw== Owner users: admin Vault user: admin", "ipa vault-mod example_asymmetric_vault --private-key-file= old_private_key.pem --public-key-file= new_public_key.pem ------------------------------- Modified vault \" example_assymmetric_vault \" ------------------------------- Vault name: example_assymmetric_vault Typ: asymmetric Public key: Owner users: admin Vault user: admin" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/chainging_the-password-or-public-key-of-a-vault
10.5. Quorum Devices
10.5. Quorum Devices Red Hat Enterprise Linux 7.4 provides full support for the ability to configure a separate quorum device which acts as a third-party arbitration device for the cluster. Its primary use is to allow a cluster to sustain more node failures than standard quorum rules allow. A quorum device is recommended for clusters with an even number of nodes. With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation. You must take the following into account when configuring a quorum device. It is recommended that a quorum device be run on a different physical network at the same site as the cluster that uses the quorum device. Ideally, the quorum device host should be in a separate rack than the main cluster, or at least on a separate PSU and not on the same network segment as the corosync ring or rings. You cannot use more than one quorum device in a cluster at the same time. Although you cannot use more than one quorum device in a cluster at the same time, a single quorum device may be used by several clusters at the same time. Each cluster using that quorum device can use different algorithms and quorum options, as those are stored on the cluster nodes themselves. For example, a single quorum device can be used by one cluster with an ffsplit (fifty/fifty split) algorithm and by a second cluster with an lms (last man standing) algorithm. A quorum device should not be run on an existing cluster node. 10.5.1. Installing Quorum Device Packages Configuring a quorum device for a cluster requires that you install the following packages: Install corosync-qdevice on the nodes of an existing cluster. Install pcs and corosync-qnetd on the quorum device host. Start the pcsd service and enable pcsd at system start on the quorum device host. 10.5.2. Configuring a Quorum Device This section provides a sample procedure to configure a quorum device in a Red Hat high availability cluster. The following procedure configures a quorum device and adds it to the cluster. In this example: The node used for a quorum device is qdevice . The quorum device model is net , which is currently the only supported model. The net model supports the following algorithms: ffsplit : fifty-fifty split. This provides exactly one vote to the partition with the highest number of active nodes. lms : last-man-standing. If the node is the only one left in the cluster that can see the qnetd server, then it returns a vote. Warning The LMS algorithm allows the cluster to remain quorate even with only one remaining node, but it also means that the voting power of the quorum device is great since it is the same as number_of_nodes - 1. Losing connection with the quorum device means losing number_of_nodes - 1 votes, which means that only a cluster with all nodes active can remain quorate (by overvoting the quorum device); any other cluster becomes inquorate. For more detailed information on the implementation of these algorithms, see the corosync-qdevice (8) man page. The cluster nodes are node1 and node2 . The following procedure configures a quorum device and adds that quorum device to a cluster. On the node that you will use to host your quorum device, configure the quorum device with the following command. This command configures and starts the quorum device model net and configures the device to start on boot. After configuring the quorum device, you can check its status. 
This should show that the corosync-qnetd daemon is running and, at this point, there are no clients connected to it. The --full command option provides detailed output. Enable the ports on the firewall needed by the pcsd daemon and the net quorum device by enabling the high-availability service on firewalld with the following commands. From one of the nodes in the existing cluster, authenticate user hacluster on the node that is hosting the quorum device. Add the quorum device to the cluster. Before adding the quorum device, you can check the current configuration and status for the quorum device for later comparison. The output for these commands indicates that the cluster is not yet using a quorum device. The following command adds the quorum device that you have previously created to the cluster. You cannot use more than one quorum device in a cluster at the same time. However, one quorum device can be used by several clusters at the same time. This example command configures the quorum device to use the ffsplit algorithm. For information on the configuration options for the quorum device, see the corosync-qdevice (8) man page. Check the configuration status of the quorum device. From the cluster side, you can execute the following commands to see how the configuration has changed. The pcs quorum config command shows the quorum device that has been configured. The pcs quorum status command shows the quorum runtime status, indicating that the quorum device is in use. The pcs quorum device status command shows the quorum device runtime status. From the quorum device side, you can execute the following status command, which shows the status of the corosync-qnetd daemon. 10.5.3. Managing the Quorum Device Service PCS provides the ability to manage the quorum device service on the local host ( corosync-qnetd ), as shown in the following example commands. Note that these commands affect only the corosync-qnetd service. 10.5.4. Managing the Quorum Device Settings in a Cluster The following sections describe the PCS commands that you can use to manage the quorum device settings in a cluster, showing examples that are based on the quorum device configuration in Section 10.5.2, "Configuring a Quorum Device" . 10.5.4.1. Changing Quorum Device Settings You can change the setting of a quorum device with the pcs quorum device update command. Warning To change the host option of quorum device model net , use the pcs quorum device remove and the pcs quorum device add commands to set up the configuration properly, unless the old and the new host are the same machine. The following command changes the quorum device algorithm to lms . 10.5.4.2. Removing a Quorum Device Use the following command to remove a quorum device configured on a cluster node. After you have removed a quorum device, you should see the following error message when displaying the quorum device status. 10.5.4.3. Destroying a Quorum Device To disable and stop a quorum device on the quorum device host and delete all of its configuration files, use the following command.
[ "yum install corosync-qdevice yum install corosync-qdevice", "yum install pcs corosync-qnetd", "systemctl start pcsd.service systemctl enable pcsd.service", "pcs qdevice setup model net --enable --start Quorum device 'net' initialized quorum device enabled Starting quorum device quorum device started", "pcs qdevice status net --full QNetd address: *:5403 TLS: Supported (client certificate required) Connected clients: 0 Connected clusters: 0 Maximum send/receive size: 32768/32768 bytes", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "pcs cluster auth qdevice Username: hacluster Password: qdevice: Authorized", "pcs quorum config Options:", "pcs quorum status Quorum information ------------------ Date: Wed Jun 29 13:15:36 2016 Quorum provider: corosync_votequorum Nodes: 2 Node ID: 1 Ring ID: 1/8272 Quorate: Yes Votequorum information ---------------------- Expected votes: 2 Highest expected: 2 Total votes: 2 Quorum: 1 Flags: 2Node Quorate Membership information ---------------------- Nodeid Votes Qdevice Name 1 1 NR node1 (local) 2 1 NR node2", "pcs quorum device add model net host=qdevice algorithm=ffsplit Setting up qdevice certificates on nodes node2: Succeeded node1: Succeeded Enabling corosync-qdevice node1: corosync-qdevice enabled node2: corosync-qdevice enabled Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded Corosync configuration reloaded Starting corosync-qdevice node1: corosync-qdevice started node2: corosync-qdevice started", "pcs quorum config Options: Device: Model: net algorithm: ffsplit host: qdevice", "pcs quorum status Quorum information ------------------ Date: Wed Jun 29 13:17:02 2016 Quorum provider: corosync_votequorum Nodes: 2 Node ID: 1 Ring ID: 1/8272 Quorate: Yes Votequorum information ---------------------- Expected votes: 3 Highest expected: 3 Total votes: 3 Quorum: 2 Flags: Quorate Qdevice Membership information ---------------------- Nodeid Votes Qdevice Name 1 1 A,V,NMW node1 (local) 2 1 A,V,NMW node2 0 1 Qdevice", "pcs quorum device status Qdevice information ------------------- Model: Net Node ID: 1 Configured node list: 0 Node ID = 1 1 Node ID = 2 Membership node list: 1, 2 Qdevice-net information ---------------------- Cluster name: mycluster QNetd host: qdevice:5403 Algorithm: ffsplit Tie-breaker: Node with lowest node ID State: Connected", "pcs qdevice status net --full QNetd address: *:5403 TLS: Supported (client certificate required) Connected clients: 2 Connected clusters: 1 Maximum send/receive size: 32768/32768 bytes Cluster \"mycluster\": Algorithm: ffsplit Tie-breaker: Node with lowest node ID Node ID 2: Client address: ::ffff:192.168.122.122:50028 HB interval: 8000ms Configured node list: 1, 2 Ring ID: 1.2050 Membership node list: 1, 2 TLS active: Yes (client certificate verified) Vote: ACK (ACK) Node ID 1: Client address: ::ffff:192.168.122.121:48786 HB interval: 8000ms Configured node list: 1, 2 Ring ID: 1.2050 Membership node list: 1, 2 TLS active: Yes (client certificate verified) Vote: ACK (ACK)", "pcs qdevice start net pcs qdevice stop net pcs qdevice enable net pcs qdevice disable net pcs qdevice kill net", "pcs quorum device update model algorithm=lms Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded Corosync configuration reloaded Reloading qdevice configuration on nodes node1: corosync-qdevice stopped node2: corosync-qdevice stopped node1: corosync-qdevice started node2: corosync-qdevice started", "pcs quorum device remove Sending 
updated corosync.conf to nodes node1: Succeeded node2: Succeeded Corosync configuration reloaded Disabling corosync-qdevice node1: corosync-qdevice disabled node2: corosync-qdevice disabled Stopping corosync-qdevice node1: corosync-qdevice stopped node2: corosync-qdevice stopped Removing qdevice certificates from nodes node1: Succeeded node2: Succeeded", "pcs quorum device status Error: Unable to get quorum status: corosync-qdevice-tool: Can't connect to QDevice socket (is QDevice running?): No such file or directory", "pcs qdevice destroy net Stopping quorum device quorum device stopped quorum device disabled Quorum device 'net' configuration files removed" ]
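For orientation, the individual commands from this section can be strung together into one end-to-end sequence. The sketch below assumes the host names from the example ( qdevice for the quorum device host, node1 and node2 as cluster nodes) and simply condenses the documented steps; it is not a substitute for the full procedure above.

    # On the quorum device host (qdevice)
    yum install pcs corosync-qnetd
    systemctl start pcsd.service
    systemctl enable pcsd.service
    pcs qdevice setup model net --enable --start
    firewall-cmd --permanent --add-service=high-availability
    firewall-cmd --add-service=high-availability

    # On each existing cluster node (node1 and node2)
    yum install corosync-qdevice

    # From one cluster node, authenticate and add the quorum device
    pcs cluster auth qdevice
    pcs quorum device add model net host=qdevice algorithm=ffsplit
    pcs quorum status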
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-quorumdev-HAAR
Chapter 2. Configuring the overcloud for IPv6
Chapter 2. Configuring the overcloud for IPv6 The following chapter provides the configuration required before running the openstack overcloud deploy command. This includes preparing nodes for provisioning, configuring an IPv6 address on the undercloud, and creating a network environment file to define the IPv6 parameters for the overcloud. Prerequisites A successful undercloud installation. For more information, see Installing director . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. 2.1. Configuring an IPv6 address on the undercloud The undercloud requires access to the overcloud Public API, which is on the External network. To accomplish this, the undercloud host requires an IPv6 address on the interface that connects to the External network. Prerequisites A successful undercloud installation. For more information, see Installing director . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. An IPv6 address available to the undercloud. Native VLAN or dedicated interface If the undercloud uses a native VLAN or a dedicated interface attached to the External network, use the ip command to add an IPv6 address to the interface. In this example, the dedicated interface is eth0 : Trunked VLAN interface If the undercloud uses a trunked VLAN on the same interface as the control plane bridge ( br-ctlplane ) to access the External network, create a new VLAN interface, attach it to the control plane, and add an IPv6 address to the VLAN. In this example, the External network VLAN ID is 100 : Confirming the IPv6 address Confirm the addition of the IPv6 address with the ip command: The IPv6 address appears on the chosen interface. Setting a persistent IPv6 address To make the IPv6 address permanent, modify or create the appropriate interface file in /etc/sysconfig/network-scripts/ . In this example, include the following lines in either ifcfg-eth0 or ifcfg-vlan100 : For more information, see How do I configure a network interface for IPv6? on the Red Hat Customer Portal. 2.2. Registering and inspecting nodes for IPv6 deployment A node definition template ( instackenv.json ) is a JSON format file that contains the hardware and power management details for registering nodes. For example: Prerequisites A successful undercloud installation. For more information, see Installing director . Nodes available for overcloud deployment. Procedure After you create the node definition template, save the file to the home directory of the stack user ( /home/stack/instackenv.json ), then import it into the director: This command imports the template and registers each node from the template into director. Assign the kernel and ramdisk images to all nodes: The nodes are now registered and configured in director. Verification steps After registering the nodes, inspect the hardware attribute of each node: Important The nodes must be in the manageable state. Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes. 2.3. Tagging nodes for IPv6 deployment After you register and inspect the hardware of your nodes, tag each node into a specific profile. These profile tags map your nodes to flavors, and in turn the flavors are assigned to a deployment role. Prerequisites A successful undercloud installation. For more information, see Installing director . Procedure Retrieve a list of your nodes to identify their UUIDs: Add a profile option to the properties/capabilities parameter for each node. 
For example, to tag three nodes to use a controller profile and three nodes to use a compute profile, use the following commands: The addition of the profile:control and profile:compute options tag the nodes into each respective profile. Note As an alternative to manual tagging, use the automatic profile tagging to tag larger numbers of nodes based on benchmarking data. 2.4. Configuring IPv6 networking By default, the overcloud uses Internet Protocol version 4 (IPv4) to configure the service endpoints. However, the overcloud also supports Internet Protocol version 6 (IPv6) endpoints, which is useful for organizations that support IPv6 infrastructure. Director includes a set of environment files that you can use to create IPv6-based Overclouds. For more information about configuring IPv6 in the Overcloud, see the dedicated Configuring IPv6 networking for the overcloud guide for full instructions. 2.4.1. Configuring composable IPv6 networking Prerequisites A successful undercloud installation. For more information, see Installing director . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Procedure Copy the default network_data file: Edit the local copy of the network_data.yaml file and modify the parameters to suit your IPv6 networking requirements. For example, the External network contains the following default network details: name is the only mandatory value; however, you can also use name_lower to normalize names for readability. For example, changing InternalApi to internal_api . vip: true creates a virtual IP address (VIP) on the new network with the remaining parameters setting the defaults for the new network. ipv6 defines whether to enable IPv6. ipv6_subnet , ipv6_allocation_pools , and gateway_ipv6 set the default IPv6 subnet and IP range for the network. Include the custom network_data file with your deployment using the -n option. Without the -n option, the deployment command uses the default network details. 2.4.2. IPv6 network isolation in the overcloud The overcloud assigns services to the provisioning network by default. However, director can divide overcloud network traffic into isolated networks. These networks are defined in a file that you include in the deployment command line, by default named network_data.yaml . When services are listening on networks using IPv6 addresses, you must provide parameter defaults to indicate that the service is running on an IPv6 network. The network that each service runs on is defined by the file network/service_net_map.yaml , and can be overridden by declaring parameter defaults for individual ServiceNetMap entries. These services require the parameter default to be set in an environment file: The environments/network-isolation.j2.yaml file in the core heat templates is a Jinja2 file that defines all ports and VIPs for each IPv6 network in your composable network file. When rendered, it results in a network-isolation.yaml file in the same location with the full resource registry. 2.4.3. Configuring the IPv6 isolated network The default heat template collection contains a Jinja2-based environment file for the default networking configuration. This file is environments/network-environment.j2.yaml . When rendered with your network_data file, it results in a standard YAML file called network-environment.yaml . Some parts of this file might require overrides. Prerequisites A successful undercloud installation. For more information, see Installing director .
Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Procedure Create a custom environment file ( /home/stack/network-environment.yaml ) with the following details: The parameter_defaults section contains the customization for certain services that remain on IPv4. 2.4.4. IPv6 network interface templates The overcloud requires a set of network interface templates. Director contains a set of Jinja2-based Heat templates, which render based on your network_data file: NIC directory Description Environment file single-nic-vlans Single NIC ( nic1 ) with control plane and VLANs attached to default Open vSwitch bridge. environments/net-single-nic-with-vlans-v6.j2.yaml single-nic-linux-bridge-vlans Single NIC ( nic1 ) with control plane and VLANs attached to default Linux bridge. environments/net-single-nic-linux-bridge-with-vlans-v6.yaml bond-with-vlans Control plane attached to nic1 . Default Open vSwitch bridge with bonded NIC configuration ( nic2 and nic3 ) and VLANs attached. environments/net-bond-with-vlans-v6.yaml multiple-nics Control plane attached to nic1 . Assigns each sequential NIC to each network defined in the network_data file. By default, this is Storage to nic2 , Storage Management to nic3 , Internal API to nic4 , Tenant to nic5 on the br-tenant bridge, and External to nic6 on the default Open vSwitch bridge. environments/net-multiple-nics-v6.yaml 2.5. Deploying an IPv6 overcloud To deploy an overcloud that uses IPv6 networking, you must include additional arguments in the deployment command. Prerequisites A successful undercloud installation. For more information, see Installing director . Procedure Run the openstack overcloud deploy command with the IPv6 environment files. The deployment command uses the following options: --templates - Creates the overcloud from the default heat template collection. -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - Adds an additional environment file to the overcloud deployment. In this case, it is an environment file that initializes network isolation configuration for IPv6. -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml - Adds an additional environment file to the overcloud deployment. In this case, it is an environment file that initializes the network interface configuration for a single NIC with VLANs. -e /home/stack/network-environment.yaml - Adds an additional environment file to the overcloud deployment. In this case, it includes overrides related to IPv6. Ensure that the network_data.yaml file includes the setting ipv6: true . Earlier versions of Red Hat OpenStack director included two routes: one for IPv6 on the External network (default) and one for IPv4 on the Control Plane. To use both default routes, ensure that the Controller definition in the roles_data.yaml file contains both networks in the default_route_networks parameter. For example, default_route_networks: ['External', 'ControlPlane'] . --ntp-server pool.ntp.org - Sets the NTP server. The overcloud creation process begins and director provisions the overcloud nodes. This process takes some time to complete. To view the status of the overcloud creation, open a separate terminal as the stack user and run: Accessing the overcloud Director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file ( overcloudrc ) in the home directory of the stack user. Run the following command to use this file: This loads the necessary environment variables to interact with your overcloud from the director host CLI.
To return to interacting with the director host, run the following command:
[ "sudo ip link set dev eth0 up; sudo ip addr add 2001:db8::1/64 dev eth0", "sudo ovs-vsctl add-port br-ctlplane vlan100 tag=100 -- set interface vlan100 type=internal sudo ip l set dev vlan100 up; sudo ip addr add 2001:db8::1/64 dev vlan100", "ip addr", "IPV6INIT=yes IPV6ADDR=2001:db8::1/64", "{ \"nodes\":[ { \"mac\":[ \"bb:bb:bb:bb:bb:bb\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.205\" }, { \"mac\":[ \"cc:cc:cc:cc:cc:cc\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.206\" }, { \"mac\":[ \"dd:dd:dd:dd:dd:dd\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.207\" }, { \"mac\":[ \"ee:ee:ee:ee:ee:ee\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.208\" } { \"mac\":[ \"ff:ff:ff:ff:ff:ff\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.209\" } { \"mac\":[ \"gg:gg:gg:gg:gg:gg\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.210\" } ] }", "openstack overcloud node import ~/instackenv.json", "openstack overcloud node configure", "openstack overcloud node introspect --all-manageable", "ironic node-list", "ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local' ironic node-update 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a add properties/capabilities='profile:control,boot_option:local' ironic node-update 5e3b2f50-fcd9-4404-b0a2-59d79924b38e add properties/capabilities='profile:control,boot_option:local' ironic node-update 484587b2-b3b3-40d5-925b-a26a2fa3036f add properties/capabilities='profile:compute,boot_option:local' ironic node-update d010460b-38f2-4800-9cc4-d69f0d067efe add properties/capabilities='profile:compute,boot_option:local' ironic node-update d930e613-3e14-44b9-8240-4f3559801ea6 add properties/capabilities='profile:compute,boot_option:local'", "cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/.", "- name: External vip: true name_lower: external vlan: 10 ipv6: true ipv6_subnet: '2001:db8:fd00:1000::/64' ipv6_allocation_pools: [{'start': '2001:db8:fd00:1000::10', 'end': '2001:db8:fd00:1000:ffff:ffff:ffff:fffe'}] gateway_ipv6: '2001:db8:fd00:1000::1'", "parameter_defaults: # Enable IPv6 for Ceph. CephIPv6: True # Enable IPv6 for Corosync. This is required when Corosync is using an IPv6 IP in the cluster. CorosyncIPv6: True # Enable IPv6 for MongoDB. This is required when MongoDB is using an IPv6 IP. MongoDbIPv6: True # Enable various IPv6 features in Nova. NovaIPv6: True # Enable IPv6 environment for RabbitMQ. RabbitIPv6: True # Enable IPv6 environment for Memcached. MemcachedIPv6: True # Enable IPv6 environment for MySQL. MysqlIPv6: True # Enable IPv6 environment for Manila ManilaIPv6: True # Enable IPv6 environment for Redis. 
RedisIPv6: True", "parameter_defaults: ControlPlaneDefaultRoute: 192.0.2.1 ControlPlaneSubnetCidr: \"24\"", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/templates/network-environment.yaml --ntp-server pool.ntp.org [ADDITIONAL OPTIONS]", "source ~/stackrc heat stack-list --show-nested", "source ~/overcloudrc", "source ~/stackrc" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_ipv6_networking_for_the_overcloud/assembly_configuring-the-overcloud-for-ipv6
probe::scheduler.kthread_stop.return
probe::scheduler.kthread_stop.return Name probe::scheduler.kthread_stop.return - A kthread is stopped and gets the return value Synopsis scheduler.kthread_stop.return Values return_value return value after stopping the thread name name of the probe point
null
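A minimal usage sketch: the one-line SystemTap script below prints the documented values ( name and return_value ) each time the probe fires. Run it as root; whether the probe resolves depends on the kernel and tapset versions installed on your system.

    # Print the probe point name and the kthread's return value whenever a kthread is stopped
    stap -e 'probe scheduler.kthread_stop.return { printf("%s: return_value=%d\n", name, return_value) }'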
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scheduler-kthread-stop-return
Chapter 52. Compiler and Tools
Chapter 52. Compiler and Tools GCC thread sanitizer included in RHEL no longer works Due to incompatible changes in kernel memory mapping, the thread sanitizer included with the GNU C Compiler (GCC) compiler version in RHEL no longer works. Additionally, the thread sanitizer cannot be adapted to the incompatible memory layout. As a result, it is no longer possible to use the GCC thread sanitizer included with RHEL. As a workaround, use the version of GCC included in Red Hat Developer Toolset to build code which uses the thread sanitizer. (BZ#1569484) ksh with the KEYBD trap mishandles multibyte characters The Korn Shell (KSH) is unable to correctly handle multibyte characters when the KEYBD trap is enabled. Consequently, when the user enters, for example, Japanese characters, ksh displays an incorrect string. To work around this problem, disable the KEYBD trap in the /etc/kshrc file by commenting out the following line: For more details, see a related Knowledgebase solution . (BZ# 1503922 )
[ "trap keybd_trap KEYBD" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/known_issues_compiler_and_tools
Chapter 1. Red Hat build of OpenJDK 11 - End of full support
Chapter 1. Red Hat build of OpenJDK 11 - End of full support Important The 11.0.25 release is the last release of Red Hat build of OpenJDK 11 that Red Hat plans to fully support. The full support for Red Hat build of OpenJDK 11 ends on 31 October 2024. See the Product Life Cycles page for details. Red Hat will provide extended life cycle support (ELS) phase 1 support for Red Hat build of OpenJDK 11 until 31 October 2027. For more information about product life cycle phases and available support levels, see Life Cycle Phases . For information about migrating to Red Hat build of OpenJDK version 17 or 21, see Migrating to Red Hat build of OpenJDK 17 from earlier versions or Migrating to Red Hat build of OpenJDK 21 from earlier versions .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.25/endfullsupport
Chapter 21. Debugging a Crashed Application
Chapter 21. Debugging a Crashed Application Sometimes, it is not possible to debug an application directly. In these situations, you can collect information about the application at the moment of its termination and analyze it afterwards. 21.1. Core Dumps This section describes what a core dump is and how to use it. Prerequisites Understanding of debugging information Description A core dump is a copy of a part of the application's memory at the moment the application stopped working, stored in the ELF format. It contains all the application's internal variables and stack, which enables inspection of the application's final state. When augmented with the respective executable file and debugging information, it is possible to analyze a core dump file with a debugger in a way similar to analyzing a running program. The Linux operating system kernel can record core dumps automatically, if this functionality is enabled. Alternatively, you can send a signal to any running application to generate a core dump regardless of its actual state. Warning Some limits might affect the ability to generate a core dump. 21.2. Recording Application Crashes with Core Dumps To record application crashes, set up core dump saving and add information about the system. Procedure Enable core dumps. Edit the file /etc/systemd/system.conf and change the line containing DefaultLimitCORE to the following: Reboot the system: Remove the limits for core dump sizes: To reverse this change, run the command with the value 0 instead of unlimited . When an application crashes, a core dump is generated. The default location for core dumps is the application's working directory at the time of the crash. Create an SOS report to provide additional information about the system: This creates a tar archive containing information about your system, such as copies of configuration files. Transfer the core dump and the SOS report to the computer where the debugging will take place. Transfer the executable file, too, if it is known. Important If the executable file is not known, subsequent analysis of the core file will identify it. Optional: Remove the core dump and SOS report after transferring them to free up disk space. Additional Resources Knowledgebase article - How to enable core file dumps when an application crashes or segmentation faults Knowledgebase article - What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? 21.3. Inspecting Application Crash States with Core Dumps Prerequisites You have a core dump file and SOS report GDB and elfutils are installed on the system Procedure To identify the executable file where the crash occurred, run the eu-unstrip command with the core dump file: The output contains details for each module on one line, separated by spaces. The information is listed in this order: The memory address where the module was mapped The build-id of the module and where in the memory it was found The module's executable file name, displayed as - when unknown, or as . when the module has not been loaded from a file The source of debugging information, displayed as a file name when available, as . when contained in the executable file itself, or as - when not present at all The shared library name ( soname ), or [exe] for the main module In this example, the important details are the file name /usr/bin/sleep and the build-id 2818b2009547f780a5639c904cded443e564973e on the line containing the text [exe] . 
With this information, you can identify the executable file required for analyzing the core dump. Get the executable file that crashed. If possible, copy it from the system where the crash occurred. Use the file name extracted from the core file. Alternatively, use an identical executable file on your system. Each executable file built on Red Hat Enterprise Linux contains a note with a unique build-id value. Determine the build-id of the relevant locally available executable files: Use this information to match the executable file on the remote system with your local copy. The build-id of the local file and build-id listed in the core dump must match. Finally, if the application is installed from an RPM package, you can get the executable file from the package. Use the sosreport output to find the exact version of the package required. Get the shared libraries used by the executable file. Use the same steps as for the executable file. If the application is distributed as a package, load the executable file in GDB to display hints for missing debuginfo packages. For more details, see Section 20.1.4, "Getting debuginfo Packages for an Application or Library using GDB" . To examine the core file in detail, load the executable file and core dump file with GDB: Further messages about missing files and debugging information help you to identify what is missing for the debugging session. Return to the step if needed. If the debugging information is available as a file instead of a package, load this file in GDB with the symbol-file command: Replace program.debug with the actual file name. Note It might not be necessary to install the debugging information for all executable files contained in the core dump. Most of these executable files are libraries used by the application code. These libraries might not directly contribute to the problem you are analyzing, and you do not need to include debugging information for them. Use the GDB commands to inspect the state of the application at the moment it crashed. See Section 20.2, "Inspecting the Application's Internal State with GDB" . Note When analyzing a core file, GDB is not attached to a running process. Commands for controlling execution have no effect. Additional Resources Debugging with GDB - 2.1.1 Choosing Files Debugging with GDB - 18.1 Commands to Specify Files Debugging with GDB - 18.3 Debugging Information in Separate Files 21.4. Dumping Process Memory with gcore The workflow of core dump debugging enables the analysis of the offline state of the program. In some cases it is advantageous to use this workflow with a program that is still running, such as when it is hard to access the environment with the process. You can use the gcore command to dump memory of any process while it is still running. Prerequisites Understanding of core dumps GDB is installed on the system Procedure To dump a process memory using gcore : Find out the process id ( pid ). Use tools such as ps , pgrep , and top : Dump the memory of this process: This creates a file filename and dumps the process memory in it. While the memory is being dumped, the execution of the process is halted. After the core dump is finished, the process resumes normal execution. Create an SOS report to provide additional information about the system: This creates a tar archive containing information about your system, such as copies of configuration files. Transfer the program's executable file, core dump, and the SOS report to the computer where the debugging will take place. 
Optional: Remove the core dump and SOS report after transferring them to reclaim disk space. Additional resources Knowledgebase article - How to obtain a core file without restarting an application? 21.5. Dumping Protected Process Memory with GDB You can mark the memory of processes as not to be dumped. This can save resources and ensure additional security if the process memory contains sensitive data. Both kernel core dumps ( kdump ) and manual core dumps ( gcore , GDB) do not dump memory marked this way. In some cases, it is necessary to dump the whole contents of the process memory regardless of these protections. This procedure shows how to do this using the GDB debugger. Prerequisites Understanding of core dumps GDB is installed on the system GDB is attached to the process with protected memory Procedure Set GDB to ignore the settings in the /proc/PID/coredump_filter file: Set GDB to ignore the memory page flag VM_DONTDUMP : Dump the memory: Replace core-file with the name of the file of which you want to dump the memory. Additional Resources Debugging with GDB - 10.19 How to Produce a Core File from Your Program
[ "DefaultLimitCORE=infinity", "shutdown -r now", "ulimit -c unlimited", "sosreport", "eu-unstrip -n --core= ./core.9814 0x400000+0x207000 2818b2009547f780a5639c904cded443e564973e@0x400284 /usr/bin/sleep /usr/lib/debug/bin/sleep.debug [exe] 0x7fff26fff000+0x1000 1e2a683b7d877576970e4275d41a6aaec280795e@0x7fff26fff340 . - linux-vdso.so.1 0x35e7e00000+0x3b6000 374add1ead31ccb449779bc7ee7877de3377e5ad@0x35e7e00280 /usr/lib64/libc-2.14.90.so /usr/lib/debug/lib64/libc-2.14.90.so.debug libc.so.6 0x35e7a00000+0x224000 3ed9e61c2b7e707ce244816335776afa2ad0307d@0x35e7a001d8 /usr/lib64/ld-2.14.90.so /usr/lib/debug/lib64/ld-2.14.90.so.debug ld-linux-x86-64.so.2", "eu-readelf -n executable_file", "gdb -e executable_file -c core_file", "(gdb) symbol-file program.debug", "ps -C some-program", "gcore -o filename pid", "sosreport", "(gdb) set use-coredump-filter off", "(gdb) set dump-excluded-mappings on", "(gdb) gcore core-file" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/debugging-crashed-application
7.216. tomcatjss
7.216. tomcatjss 7.216.1. RHBA-2015:1316 - tomcatjss bug fix and enhancement update An updated tomcatjss package that fixes one bug and adds one enhancement is now available for Red Hat Enterprise Linux 6. The tomcatjss package provides a Java Secure Socket Extension (JSSE) implementation using Java Security Services (JSS) for Tomcat, an open source web server and Java servlet container. Bug Fix BZ# 1190911 Previously, the init() function in tomcatjss looked for the clientauth attribute which was not present. As a consequence, Tomcat returned NullPointerException in init() on startup, and in addition, some properties, such as enableOSCP and properties for enabling certain SSL ciphers, were not called. A patch has been applied to fix this problem. As a result, NullPointerException no longer occurs in the described situation, and the mentioned properties are called as expected. Enhancement BZ# 1167471 The Tomcat service has been updated to support the Transport Layer Security cryptographic protocol version 1.1 (TLSv1.1) and the Transport Layer Security cryptographic protocol version 1.2 (TLSv1.2) using JSS. Users of tomcatjss are advised to upgrade to this updated package, which fixes this bug and adds this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-tomcatjss
Chapter 14. Constant
Chapter 14. Constant Overview The constant language is a trivial built-in language that is used to specify a plain text string. This makes it possible to provide a plain text string in any context where an expression type is expected. XML example In XML, you can set the username header to the value, Jane Doe as follows: Java example In Java, you can set the username header to the value, Jane Doe as follows:
[ "<camelContext> <route> <from uri=\" SourceURL \"/> <setHeader headerName=\"username\"> <constant>Jane Doe</constant> </setHeader> <to uri=\" TargetURL \"/> </route> </camelContext>", "from(\" SourceURL \") .setHeader(\"username\", constant(\"Jane Doe\")) .to(\" TargetURL \");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/Constant
Chapter 16. Managing a Red Hat Ceph Storage cluster using cephadm-ansible modules
Chapter 16. Managing a Red Hat Ceph Storage cluster using cephadm-ansible modules As a storage administrator, you can use cephadm-ansible modules in Ansible playbooks to administer your Red Hat Ceph Storage cluster. The cephadm-ansible package provides several modules that wrap cephadm calls to let you write your own unique Ansible playbooks to administer your cluster. Note At this time, cephadm-ansible modules only support the most important tasks. Any operation not covered by cephadm-ansible modules must be completed using either the command or shell Ansible modules in your playbooks. 16.1. The cephadm-ansible modules The cephadm-ansible modules are a collection of modules that simplify writing Ansible playbooks by providing a wrapper around cephadm and ceph orch commands. You can use the modules to write your own unique Ansible playbooks to administer your cluster using one or more of the modules. The cephadm-ansible package includes the following modules: cephadm_bootstrap ceph_orch_host ceph_config ceph_orch_apply ceph_orch_daemon cephadm_registry_login 16.2. The cephadm-ansible modules options The following tables list the available options for the cephadm-ansible modules. Options listed as required need to be set when using the modules in your Ansible playbooks. Options listed with a default value of true indicate that the option is automatically set when using the modules and you do not need to specify it in your playbook. For example, for the cephadm_bootstrap module, the Ceph Dashboard is installed unless you set dashboard: false . Table 16.1. Available options for the cephadm_bootstrap module. cephadm_bootstrap Description Required Default mon_ip Ceph Monitor IP address. true image Ceph container image. false docker Use docker instead of podman . false fsid Define the Ceph FSID. false pull Pull the Ceph container image. false true dashboard Deploy the Ceph Dashboard. false true dashboard_user Specify a specific Ceph Dashboard user. false dashboard_password Ceph Dashboard password. false monitoring Deploy the monitoring stack. false true firewalld Manage firewall rules with firewalld. false true allow_overwrite Allow overwrite of existing --output-config, --output-keyring, or --output-pub-ssh-key files. false false registry_url URL for custom registry. false registry_username Username for custom registry. false registry_password Password for custom registry. false registry_json JSON file with custom registry login information. false ssh_user SSH user to use for cephadm ssh to hosts. false ssh_config SSH config file path for cephadm SSH client. false allow_fqdn_hostname Allow hostname that is a fully-qualified domain name (FQDN). false false cluster_network Subnet to use for cluster replication, recovery and heartbeats. false Table 16.2. Available options for the ceph_orch_host module. ceph_orch_host Description Required Default fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false name Name of the host to add, remove, or update. true address IP address of the host. true when state is present . set_admin_label Set the _admin label on the specified host. false false labels The list of labels to apply to the host. false [] state If set to present , it ensures the name specified in name is present. If set to absent , it removes the host specified in name . If set to drain , it schedules to remove all daemons from the host specified in name . false present Table 16.3. 
Available options for the ceph_config module ceph_config Description Required Default fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false action Whether to set or get the parameter specified in option . false set who Which daemon to set the configuration to. true option Name of the parameter to set or get . true value Value of the parameter to set. true if action is set Table 16.4. Available options for the ceph_orch_apply module. ceph_orch_apply Description Required fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false spec The service specification to apply. true Table 16.5. Available options for the ceph_orch_daemon module. ceph_orch_daemon Description Required fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false state The desired state of the service specified in name . true If started , it ensures the service is started. If stopped , it ensures the service is stopped. If restarted , it will restart the service. daemon_id The ID of the service. true daemon_type The type of service. true Table 16.6. Available options for the cephadm_registry_login module cephadm_registry_login Description Required Default state Login or logout of a registry. false login docker Use docker instead of podman . false registry_url The URL for custom registry. false registry_username Username for custom registry. true when state is login . registry_password Password for custom registry. true when state is login . registry_json The path to a JSON file. This file must be present on remote hosts prior to running this task. This option is currently not supported. 16.3. Bootstrapping a storage cluster using the cephadm_bootstrap and cephadm_registry_login modules As a storage administrator, you can bootstrap a storage cluster using Ansible by using the cephadm_bootstrap and cephadm_registry_login modules in your Ansible playbook. Prerequisites An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster. Login access to registry.redhat.io . A minimum of 10 GB of free space for /var/lib/containers/ . Red Hat Enterprise Linux 8.10 or 9.4 or later with ansible-core bundled into AppStream. Installation of the cephadm-ansible package on the Ansible administration node. Passwordless SSH is set up on all hosts in the storage cluster. Hosts are registered with CDN. Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create the hosts file and add hosts, labels, and monitor IP address of the first host in the storage cluster: Syntax Example Run the preflight playbook: Syntax Example Create a playbook to bootstrap your cluster: Syntax Example Run the playbook: Syntax Example Verification Review the Ansible output after running the playbook. 16.4. Adding or removing hosts using the ceph_orch_host module As a storage administrator, you can add and remove hosts in your storage cluster by using the ceph_orch_host module in your Ansible playbook. Prerequisites A running Red Hat Ceph Storage cluster. Register the nodes to the CDN and attach subscriptions. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. New hosts have the storage cluster's public SSH key. 
For more information about copying the storage cluster's public SSH keys to new hosts, see Adding hosts in the Red Hat Ceph Storage Installation Guide . Procedure Use the following procedure to add new hosts to the cluster: Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Add the new hosts and labels to the Ansible inventory file. Syntax Example Note If you have previously added the new hosts to the Ansible inventory file and ran the preflight playbook on the hosts, skip to step 3. Run the preflight playbook with the --limit option: Syntax Example The preflight playbook installs podman , lvm2 , chronyd , and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory. Create a playbook to add the new hosts to the cluster: Syntax Note By default, Ansible executes all tasks on the host that matches the hosts line of your playbook. The ceph orch commands must run on the host that contains the admin keyring and the Ceph configuration file. Use the delegate_to keyword to specify the admin host in your cluster. Example In this example, the playbook adds the new hosts to the cluster and displays a current list of hosts. Run the playbook to add additional hosts to the cluster: Syntax Example Use the following procedure to remove hosts from the cluster: Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook to remove a host or hosts from the cluster: Syntax Example In this example, the playbook tasks drain all daemons on host07 , remove the host from the cluster, and display a current list of hosts. Run the playbook to remove the host from the cluster: Syntax Example Verification Review the Ansible task output displaying the current list of hosts in the cluster: Example 16.5. Setting configuration options using the ceph_config module As a storage administrator, you can set or get Red Hat Ceph Storage configuration options using the ceph_config module. Prerequisites A running Red Hat Ceph Storage cluster. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. The Ansible inventory file contains the cluster and admin hosts. Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook with configuration changes: Syntax Example In this example, the playbook first sets the mon_allow_pool_delete option to true . The playbook then gets the current mon_allow_pool_delete setting and displays the value in the Ansible output. Run the playbook: Syntax Example Verification Review the output from the playbook tasks. Example Additional Resources See the Red Hat Ceph Storage Configuration Guide for more details on configuration options. 16.6. Applying a service specification using the ceph_orch_apply module As a storage administrator, you can apply service specifications to your storage cluster using the ceph_orch_apply module in your Ansible playbooks. A service specification is a data structure to specify the service attributes and configuration settings that is used to deploy the Ceph service. You can use a service specification to deploy Ceph service types like mon , crash , mds , mgr , osd , rbd , or rbd-mirror .
Prerequisites A running Red Hat Ceph Storage cluster. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. The Ansible inventory file contains the cluster and admin hosts. Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook with the service specifications: Syntax Example In this example, the playbook deploys the Ceph OSD service on all hosts with the label osd . Run the playbook: Syntax Example Verification Review the output from the playbook tasks. Additional Resources See the Red Hat Ceph Storage Operations Guide for more details on service specification options. 16.7. Managing Ceph daemon states using the ceph_orch_daemon module As a storage administrator, you can start, stop, and restart Ceph daemons on hosts using the ceph_orch_daemon module in your Ansible playbooks. Prerequisites A running Red Hat Ceph Storage cluster. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. The Ansible inventory file contains the cluster and admin hosts. Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook with daemon state changes: Syntax Example In this example, the playbook starts the OSD with an ID of 0 and stops a Ceph Monitor with an id of host02 . Run the playbook: Syntax Example Verification Review the output from the playbook tasks.
[ "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi INVENTORY_FILE HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address=10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: BOOTSTRAP_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: -name: NAME_OF_TASK cephadm_registry_login: state: STATE registry_url: REGISTRY_URL registry_username: REGISTRY_USER_NAME registry_password: REGISTRY_PASSWORD - name: NAME_OF_TASK cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: DASHBOARD_USER dashboard_password: DASHBOARD_PASSWORD allow_fqdn_hostname: ALLOW_FQDN_HOSTNAME cluster_network: NETWORK_CIDR", "[ceph-admin@admin cephadm-ansible]USD sudo vi bootstrap.yml --- - name: bootstrap the cluster hosts: host01 become: true gather_facts: false tasks: - name: login to registry cephadm_registry_login: state: login registry_url: registry.redhat.io registry_username: user1 registry_password: mypassword1 - name: bootstrap initial cluster cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: mydashboarduser dashboard_password: mydashboardpassword allow_fqdn_hostname: true cluster_network: 10.10.128.0/28", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml -vvv", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts bootstrap.yml -vvv", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi INVENTORY_FILE NEW_HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address= 10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: HOST_TO_DELEGATE_TASK_TO - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: CEPH_COMMAND_TO_RUN register: REGISTER_NAME - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] debug: msg: \"{{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi add-hosts.yml --- - name: add 
additional hosts to the cluster hosts: all become: true gather_facts: true tasks: - name: add hosts to the cluster ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: host01 - name: list hosts in the cluster when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts when: inventory_hostname in groups['admin'] debug: msg: \"{{ host_list.stdout }}\"", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts add-hosts.yml", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE retries: NUMBER_OF_RETRIES delay: DELAY until: CONTINUE_UNTIL register: REGISTER_NAME - name: NAME_OF_TASK ansible.builtin.shell: cmd: ceph orch host ls register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \"{{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi remove-hosts.yml --- - name: remove host hosts: host01 become: true gather_facts: true tasks: - name: drain host07 ceph_orch_host: name: host07 state: drain - name: remove host from the cluster ceph_orch_host: name: host07 state: absent retries: 20 delay: 1 until: result is succeeded register: result - name: list hosts in the cluster ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts debug: msg: \"{{ host_list.stdout }}\"", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts remove-hosts.yml", "TASK [print current hosts] ****************************************************************************************************** Friday 24 June 2022 14:52:40 -0400 (0:00:03.365) 0:02:31.702 *********** ok: [host01] => msg: |- HOST ADDR LABELS STATUS host01 10.10.128.68 _admin mon mgr host02 10.10.128.69 mon mgr host03 10.10.128.70 mon mgr host04 10.10.128.71 osd host05 10.10.128.72 osd host06 10.10.128.73 osd", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION value: VALUE_OF_PARAMETER_TO_SET - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \" MESSAGE_TO_DISPLAY {{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi change_configuration.yml --- - name: set pool delete hosts: host01 become: true gather_facts: false tasks: - name: set the allow pool delete option ceph_config: action: set who: mon option: mon_allow_pool_delete value: true - name: get the allow pool delete setting ceph_config: action: get who: mon option: mon_allow_pool_delete register: verify_mon_allow_pool_delete - name: print current mon_allow_pool_delete setting debug: msg: \"the value of 'mon_allow_pool_delete' is {{ verify_mon_allow_pool_delete.stdout }}\"", 
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts change_configuration.yml", "TASK [print current mon_allow_pool_delete setting] ************************************************************* Wednesday 29 June 2022 13:51:41 -0400 (0:00:05.523) 0:00:17.953 ******** ok: [host01] => msg: the value of 'mon_allow_pool_delete' is true", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_apply: spec: | service_type: SERVICE_TYPE service_id: UNIQUE_NAME_OF_SERVICE placement: host_pattern: ' HOST_PATTERN_TO_SELECT_HOSTS ' label: LABEL spec: SPECIFICATION_OPTIONS :", "[ceph-admin@admin cephadm-ansible]USD sudo vi deploy_osd_service.yml --- - name: deploy osd service hosts: host01 become: true gather_facts: true tasks: - name: apply osd spec ceph_orch_apply: spec: | service_type: osd service_id: osd placement: host_pattern: '*' label: osd spec: data_devices: all: true", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts deploy_osd_service.yml", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_daemon: state: STATE_OF_SERVICE daemon_id: DAEMON_ID daemon_type: TYPE_OF_SERVICE", "[ceph-admin@admin cephadm-ansible]USD sudo vi restart_services.yml --- - name: start and stop services hosts: host01 become: true gather_facts: false tasks: - name: start osd.0 ceph_orch_daemon: state: started daemon_id: 0 daemon_type: osd - name: stop mon.host02 ceph_orch_daemon: state: stopped daemon_id: host02 daemon_type: mon", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts restart_services.yml" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/administration_guide/managing-a-red-hat-ceph-storage-cluster-using-cephadm-ansible-modules
10.3. Configure 802.1Q VLAN Tagging Using the Command Line Tool, nmcli
10.3. Configure 802.1Q VLAN Tagging Using the Command Line Tool, nmcli To view the available interfaces on the system, issue a command as follows: Note that the NAME field in the output always denotes the connection ID. It is not the interface name even though it might look the same. The ID can be used in nmcli connection commands to identify a connection. Use the DEVICE name with other applications such as firewalld . To create an 802.1Q VLAN interface on Ethernet interface enp1s0 , with VLAN interface VLAN10 and ID 10 , issue a command as follows: Note that as no con-name was given for the VLAN interface, the name was derived from the interface name by prepending the type. Alternatively, specify a name with the con-name option as follows: Assigning Addresses to VLAN Interfaces You can use the same nmcli commands to assign static and dynamic interface addresses as with any other interface. For example, a command to create a VLAN interface with a static IPv4 address and gateway is as follows: To create a VLAN interface with dynamically assigned addressing, issue a command as follows: See Section 3.3.6, "Connecting to a Network Using nmcli" for examples of using nmcli commands to configure interfaces. To review the VLAN interfaces created, issue a command as follows: To view detailed information about the newly configured connection, issue a command as follows: Further options for the VLAN command are listed in the VLAN section of the nmcli(1) man page. In the man pages the device on which the VLAN is created is referred to as the parent device. In the example above the device was specified by its interface name, enp1s0 , but it can also be specified by the connection UUID or MAC address. To create an 802.1Q VLAN connection profile with ingress priority mapping on Ethernet interface enp2s0 , with name VLAN1 and ID 13 , issue a command as follows: To view all the parameters associated with the VLAN created above, issue a command as follows: To change the MTU, issue a command as follows: The MTU setting determines the maximum size of the network layer packet. The maximum size of the payload the link-layer frame can carry in turn limits the network layer MTU. For standard Ethernet frames this means an MTU of 1500 bytes. It should not be necessary to change the MTU when setting up a VLAN because the link-layer header is increased in size by 4 bytes to accommodate the 802.1Q tag. At the time of writing, connection.interface-name and vlan.interface-name have to be the same (if they are set). They must therefore be changed simultaneously using nmcli 's interactive mode. To change a VLAN connection's name, issue commands as follows: The nmcli utility can be used to set and clear ioctl flags that change the way the 802.1Q code functions. The following VLAN flags are supported by NetworkManager : 0x01 - reordering of output packet headers 0x02 - use GVRP protocol 0x04 - loose binding of the interface and its master The state of the VLAN is synchronized to the state of the parent or master interface (the interface or device on which the VLAN is created). If the parent interface is set to the " down " administrative state, then all associated VLANs are set down and all routes are flushed from the routing table. Flag 0x04 enables a loose binding mode, in which only the operational state is passed from the parent to the associated VLANs, but the VLAN device state is not changed. To set a VLAN flag, issue a command as follows: See Section 3.3, "Configuring IP Networking with nmcli" for an introduction to nmcli .
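The commands below are a minimal, hedged sketch rather than part of the original procedure: the connection name VLAN50, parent device enp1s0, VLAN ID 50, and the 192.0.2.0/24 documentation addresses are placeholder assumptions. The sequence shows one possible end-to-end workflow using standard nmcli and ip invocations: create a tagged interface with a static address, activate it, confirm the 802.1Q tag on the resulting kernel device, and remove the connection when it is no longer needed.
~]$ nmcli con add type vlan con-name VLAN50 dev enp1s0 id 50 ip4 192.0.2.10/24 gw4 192.0.2.1
~]$ nmcli con up VLAN50
~]$ ip -d link show enp1s0.50    # the "vlan protocol 802.1Q id 50" line confirms the tag
~]$ nmcli con down VLAN50        # deactivate the interface
~]$ nmcli con delete VLAN50      # remove the connection profile
Because no ifname was given, NetworkManager derives the device name enp1s0.50 from the parent interface and the VLAN ID, in the same way as for VLAN12 above.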
[ "~]USD nmcli con show NAME UUID TYPE DEVICE System enp2s0 9c92fad9-6ecb-3e6c-eb4d-8a47c6f50c04 802-3-ethernet enp2s0 System enp1s0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 802-3-ethernet enp1s0", "~]USD nmcli con add type vlan ifname VLAN10 dev enp1s0 id 10 Connection 'vlan-VLAN10' (37750b4a-8ef5-40e6-be9b-4fb21a4b6d17) successfully added.", "~]USD nmcli con add type vlan con-name VLAN12 dev enp1s0 id 12 Connection 'VLAN12' (b796c16a-9f5f-441c-835c-f594d40e6533) successfully added.", "~]USD nmcli con add type vlan con-name VLAN20 dev enp1s0 id 20 ip4 10.10.10.10/24 gw4 10.10.10.254", "~]USD nmcli con add type vlan con-name VLAN30 dev enp1s0 id 30", "~]USD nmcli con show NAME UUID TYPE DEVICE VLAN12 4129a37d-4feb-4be5-ac17-14a193821755 vlan enp1s0.12 System enp2s0 9c92fad9-6ecb-3e6c-eb4d-8a47c6f50c04 802-3-ethernet enp2s0 System enp1s0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 802-3-ethernet enp1s0 vlan-VLAN10 1be91581-11c2-461a-b40d-893d42fed4f4 vlan VLAN10", "~]USD nmcli -p con show VLAN12 =============================================================================== Connection profile details (VLAN12) =============================================================================== connection.id: VLAN12 connection.uuid: 4129a37d-4feb-4be5-ac17-14a193821755 connection.interface-name: -- connection.type: vlan connection.autoconnect: yes ... ------------------------------------------------------------------------------- 802-3-ethernet.port: -- 802-3-ethernet.speed: 0 802-3-ethernet.duplex: -- 802-3-ethernet.auto-negotiate: yes 802-3-ethernet.mac-address: -- 802-3-ethernet.cloned-mac-address: -- 802-3-ethernet.mac-address-blacklist: 802-3-ethernet.mtu: auto ... vlan.interface-name: -- vlan.parent: enp1s0 vlan.id: 12 vlan.flags: 0 (NONE) vlan.ingress-priority-map: vlan.egress-priority-map: ------------------------------------------------------------------------------- =============================================================================== Activate connection details (4129a37d-4feb-4be5-ac17-14a193821755) =============================================================================== GENERAL.NAME: VLAN12 GENERAL.UUID: 4129a37d-4feb-4be5-ac17-14a193821755 GENERAL.DEVICES: enp1s0.12 GENERAL.STATE: activating [output truncated]", "~]USD nmcli con add type vlan con-name VLAN1 dev enp2s0 id 13 ingress \"2:3,3:5\"", "~]USD nmcli connection show vlan-VLAN10", "~]USD nmcli connection modify vlan-VLAN10 802.mtu 1496", "~]USD nmcli con edit vlan-VLAN10 nmcli> set vlan.interface-name superVLAN nmcli> set connection.interface-name superVLAN nmcli> save nmcli> quit", "~]USD nmcli connection modify vlan-VLAN10 vlan.flags 1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configure_802_1Q_VLAN_Tagging_Using_the_Command_Line_Tool_nmcli
Chapter 4. Modifying a compute machine set
Chapter 4. Modifying a compute machine set You can modify a compute machine set, such as adding labels, changing the instance type, or changing block storage. Note If you need to scale a compute machine set without making other changes, see Manually scaling a compute machine set . 4.1. Modifying a compute machine set by using the CLI You can modify the configuration of a compute machine set, and then propagate the changes to the machines in your cluster by using the CLI. By updating the compute machine set configuration, you can enable features or change the properties of the machines it creates. When you modify a compute machine set, your changes only apply to compute machines that are created after you save the updated MachineSet custom resource (CR). The changes do not affect existing machines. Note Changes made in the underlying cloud provider are not reflected in the Machine or MachineSet CRs. To adjust instance configuration in cluster-managed infrastructure, use the cluster-side resources. You can replace the existing machines with new ones that reflect the updated configuration by scaling the compute machine set to create twice the number of replicas and then scaling it down to the original number of replicas. If you need to scale a compute machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on compute machines. Because the router is required to access some cluster resources, including the web console, do not scale the compute machine set to 0 unless you first relocate the router pods. The output examples in this procedure use the values for an AWS cluster. Prerequisites Your OpenShift Container Platform cluster uses the Machine API. You are logged in to the cluster as an administrator by using the OpenShift CLI ( oc ). Procedure List the compute machine sets in your cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m Edit a compute machine set by running the following command: USD oc edit machinesets.machine.openshift.io <machine_set_name> \ -n openshift-machine-api Note the value of the spec.replicas field, because you need it when scaling the machine set to apply the changes. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1 # ... 1 The examples in this procedure show a compute machine set that has a replicas value of 2 . Update the compute machine set CR with the configuration options that you want and save your changes. 
List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machine.machine.openshift.io/<machine_name_original_1> \ -n openshift-machine-api \ machine.openshift.io/delete-machine="true" To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=4 \ 1 machineset.machine.openshift.io <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 is doubled to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=2 \ 1 machineset.machine.openshift.io <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 . Verification To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machine.machine.openshift.io <machine_name_updated_1> \ -n openshift-machine-api To verify that the compute machines without the updated configuration are deleted, list the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output while deletion is in progress for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s Example output when deletion is complete for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s Additional resources Lifecycle hooks for the machine deletion phase Scaling a compute machine set manually Controlling pod placement using the scheduler
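The step "Update the compute machine set CR with the configuration options that you want" in the procedure above does not show a concrete edit. The excerpt below is only a hedged illustration of what such an edit might look like on an AWS cluster: the node label example.com/pool: compute-large and the instanceType value m6i.2xlarge are hypothetical choices, not values required by OpenShift Container Platform, and the available providerSpec fields differ between cloud providers.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machine_set_name>
  namespace: openshift-machine-api
spec:
  replicas: 2
  template:
    spec:
      metadata:
        labels:
          example.com/pool: compute-large   # hypothetical label applied to the new nodes
      providerSpec:
        value:
          instanceType: m6i.2xlarge         # hypothetical larger instance type
# ...
After saving a change like this, the scale-up and scale-down steps shown above replace the existing machines with machines that use the new configuration.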
[ "oc get machinesets.machine.openshift.io -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m", "oc edit machinesets.machine.openshift.io <machine_set_name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h", "oc annotate machine.machine.openshift.io/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=4 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s", "oc scale --replicas=2 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api", "oc describe machine.machine.openshift.io <machine_name_updated_1> -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s", "NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_management/modifying-machineset
14.6. Configuring the radvd daemon for IPv6 routers
14.6. Configuring the radvd daemon for IPv6 routers The router advertisement daemon ( radvd ) sends router advertisement messages, which are required for IPv6 stateless autoconfiguration. This allows users to automatically configure their addresses, settings, and routes, and to choose a default router based on these advertisements. To configure the radvd daemon: Install the radvd daemon: Set up the /etc/radvd.conf file. For example: Note If you want to additionally advertise DNS resolvers along with the router advertisements, add the RDNSS <ip> <ip> <ip> { }; option to the /etc/radvd.conf file. If a DHCPv6 service is available for your subnets, you can set the AdvManagedFlag to on , so that the router advertisements instruct clients to obtain an IPv6 address from the DHCPv6 service. For more details on configuring the DHCPv6 service, see Section 14.5, "DHCP for IPv6 (DHCPv6)". Enable the radvd daemon: Start the radvd daemon immediately: To display the content of router advertisement packets and the configured values sent by the radvd daemon, use the radvdump command: For more information on the radvd daemon, see the radvd(8) , radvd.conf(5) , and radvdump(8) man pages.
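As a hedged illustration only, not taken from this guide, the fragment below sketches how the RDNSS option and the AdvManagedFlag setting described in the Note above might be combined in /etc/radvd.conf when a DHCPv6 server handles address assignment; the interface name enp1s0, the 2001:db8:1:0::/64 prefix, and the 2001:db8:1::53 resolver address are placeholder values.
interface enp1s0
{
    AdvSendAdvert on;
    AdvManagedFlag on;        # tell clients to request an address over DHCPv6
    prefix 2001:db8:1:0::/64
    {
        AdvOnLink on;
        AdvAutonomous off;    # in this sketch, addresses come from DHCPv6 rather than SLAAC
    };
    RDNSS 2001:db8:1::53
    {
    };
};
After editing the file, restart the daemon so that the updated advertisements are sent: ~]# systemctl restart radvd.service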
[ "~]# sudo yum install radvd", "interface enp1s0 { AdvSendAdvert on; MinRtrAdvInterval 30; MaxRtrAdvInterval 100; prefix 2001:db8:1:0::/64 { AdvOnLink on; AdvAutonomous on; AdvRouterAddr off; }; };", "~]# sudo systemctl enable radvd.service", "~]# sudo systemctl start radvd.service", "~]# radvdump Router advertisement from fe80::280:c8ff:feb9:cef9 (hoplimit 255) AdvCurHopLimit: 64 AdvManagedFlag: off AdvOtherConfigFlag: off AdvHomeAgentFlag: off AdvReachableTime: 0 AdvRetransTimer: 0 Prefix 2002:0102:0304:f101::/64 AdvValidLifetime: 30 AdvPreferredLifetime: 20 AdvOnLink: off AdvAutonomous: on AdvRouterAddr: on Prefix 2001:0db8:100:f101::/64 AdvValidLifetime: 2592000 AdvPreferredLifetime: 604800 AdvOnLink: on AdvAutonomous: on AdvRouterAddr: on AdvSourceLLAddress: 00 80 12 34 56 78" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_the_radvd_daemon_for_ipv6_routers