Chapter 8. Uninstalling a cluster on IBM Power Virtual Server
Chapter 8. Uninstalling a cluster on IBM Power Virtual Server You can remove a cluster that you deployed to IBM Power(R) Virtual Server. 8.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources that were not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. You have configured the ccoctl binary. You have installed the IBM Cloud(R) CLI and installed or updated the VPC infrastructure service plugin. For more information, see "Prerequisites" in the IBM Cloud(R) CLI documentation. Procedure This step is required only if the following conditions are met: The installer created a resource group as part of the installation process. You or one of your applications created persistent volume claims (PVCs) after the cluster was deployed. In that case, the PVCs are not removed when you uninstall the cluster, which might prevent the resource group from being removed successfully. To prevent a failure: Log in to the IBM Cloud(R) using the CLI. To list the PVCs, run the following command: $ ibmcloud is volumes --resource-group-name <infrastructure_id> For more information about listing volumes, see the IBM Cloud(R) CLI documentation. To delete the PVCs, run the following command: $ ibmcloud is volume-delete --force <volume_id> For more information about deleting volumes, see the IBM Cloud(R) CLI documentation. Export the API key that was created as part of the installation process: $ export IBMCLOUD_API_KEY=<api_key> Note You must set the variable name exactly as specified. The installation program expects the variable name to be present to remove the service IDs that were created when the cluster was installed. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory>, specify the path to the directory in which you stored the installation files. 2 To view different details, specify warn, debug, or error instead of info. Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. You might have to run the openshift-install destroy command up to three times to ensure a proper cleanup. Remove the manual CCO credentials that were created for the cluster: $ ccoctl ibmcloud delete-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <cluster_name> Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
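The final optional step above does not include a command in this extract. A minimal cleanup sketch, assuming the installer binary sits in the current working directory and reusing the <installation_directory> placeholder from the procedure:

# Remove the installation artifacts after the destroy run completes successfully
rm -rf <installation_directory>
rm -f ./openshift-install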
[ "ibmcloud is volumes --resource-group-name <infrastructure_id>", "ibmcloud is volume-delete --force <volume_id>", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_power_virtual_server/uninstalling-cluster-ibm-power-vs
Chapter 4. Using Samba for Active Directory Integration
Chapter 4. Using Samba for Active Directory Integration Samba implements the Server Message Block (SMB) protocol in Red Hat Enterprise Linux. The SMB protocol is used to access resources on a server, such as file shares and shared printers. You can use Samba to authenticate Active Directory (AD) domain users to a Domain Controller (DC). Additionally, you can use Samba to share printers and local directories with other SMB clients in the network. 4.1. Using winbindd to Authenticate Domain Users Samba's winbindd service provides an interface for the Name Service Switch (NSS) and enables domain users to authenticate to AD when logging in to the local system. The benefit of using winbindd is that you can later enhance the configuration to share directories and printers without installing additional software. For further details, see the section about Samba in the Red Hat System Administrator's Guide. 4.1.1. Joining an AD Domain If you want to join an AD domain and use the Winbind service, use the realm join --client-software=winbind domain_name command. The realm utility automatically updates the configuration files, such as those for Samba, Kerberos, and PAM. For further details and examples, see the Setting up Samba as a Domain Member section in the Red Hat System Administrator's Guide.
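The realm command is given inline above without a worked example. A minimal sketch, assuming a domain named ad.example.com and a domain user named administrator (both placeholders) and that the realmd and samba-winbind packages are installed:

# Discover the domain, then join it with the Winbind client software
realm discover ad.example.com
realm join --client-software=winbind ad.example.com

# Confirm that winbindd resolves a domain user through NSS
id 'AD\administrator'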
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/winbind
Chapter 129. KafkaBridgeConsumerSpec schema reference
Chapter 129. KafkaBridgeConsumerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeConsumerSpec schema properties Configures consumer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Properties with the following prefixes cannot be set: bootstrap.servers group.id sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Bridge, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Example Kafka Bridge consumer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... consumer: config: auto.offset.reset: earliest enable.auto.commit: true # ... Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge deployment might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. 129.1. KafkaBridgeConsumerSpec schema properties Property Property type Description config map The Kafka consumer configuration used for consumer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).
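The chapter states that disregarded options are only reported in the Cluster Operator log. A hedged way to check for such warnings, assuming the operator runs as a Deployment named strimzi-cluster-operator in the kafka namespace (both names are assumptions; adjust them to your installation):

# Search the Cluster Operator log for warnings about ignored Kafka Bridge configuration options
oc logs deployment/strimzi-cluster-operator -n kafka | grep -i warn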
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # consumer: config: auto.offset.reset: earliest enable.auto.commit: true #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaBridgeConsumerSpec-reference
2.2.9. Ruby
2.2.9. Ruby The ruby package provides the Ruby interpreter and adds support for the Ruby programming language. The ruby-devel package contains the libraries and header files required for developing Ruby extensions. Red Hat Enterprise Linux also ships with numerous ruby-related packages. By convention, the names of these packages have a ruby or rubygem prefix or suffix. Such packages are either library extensions or Ruby bindings to an existing library. Examples of ruby-related packages include: ruby-flexmock rubygem-flexmock rubygems ruby-irb ruby-libguestfs ruby-libs ruby-qpid ruby-rdoc ruby-ri ruby-saslwrapper ruby-static ruby-tcltk For information about updates to the Ruby language in Red Hat Enterprise Linux 6, see the following resources: file:///usr/share/doc/ruby-version/NEWS file:///usr/share/doc/ruby-version/NEWS-version 2.2.9.1. Ruby Documentation For more information about Ruby, see man ruby. You can also install ruby-docs, which provides HTML manuals and references in the following location: file:///usr/share/doc/ruby-docs-version/ The main site for the development of Ruby is hosted on http://www.ruby-lang.org. The http://www.ruby-doc.org site also contains Ruby documentation.
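For instance, a short sketch of installing the interpreter, the development files, and the offline documentation named in this section on Red Hat Enterprise Linux 6 (a sketch only; adapt the package selection to your needs):

# Install the Ruby interpreter, extension-building headers, and HTML documentation
yum install ruby ruby-devel ruby-docs

# Read the manual page and the bundled release notes
man ruby
less /usr/share/doc/ruby-*/NEWS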
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/libraries.ruby
Chapter 1. Overview
Chapter 1. Overview AMQ Core Protocol JMS is a Java Message Service (JMS) 2.0 client for use in messaging applications that send and receive Artemis Core Protocol messages. AMQ Core Protocol JMS is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.8 Release Notes . AMQ Core Protocol JMS is based on the JMS implementation from Apache ActiveMQ Artemis . For more information about the JMS API, see the JMS API reference and the JMS tutorial . 1.1. Key features JMS 1.1 and 2.0 compatible SSL/TLS for secure communication Automatic reconnect and failover Distributed transactions (XA) Pure-Java implementation 1.2. Supported standards and protocols AMQ Core Protocol JMS supports the following industry-recognized standards and network protocols: Version 2.0 of the Java Message Service API Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Modern TCP with IPv6 1.3. Supported configurations AMQ Core Protocol JMS supports the OS and language versions listed below. For more information, see Red Hat AMQ 7 Supported Configurations . Red Hat Enterprise Linux 7 and 8 with the following JDKs: OpenJDK 8 and 11 Oracle JDK 8 IBM JDK 8 Red Hat Enterprise Linux 6 with the following JDKs: OpenJDK 8 Oracle JDK 8 IBM AIX 7.1 with IBM JDK 8 Microsoft Windows 10 Pro with Oracle JDK 8 Microsoft Windows Server 2012 R2 and 2016 with Oracle JDK 8 Oracle Solaris 10 and 11 with Oracle JDK 8 AMQ Core Protocol JMS is supported in combination with the latest version of AMQ Broker. 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description ConnectionFactory An entry point for creating connections. Connection A channel for communication between two peers on a network. It contains sessions. Session A context for producing and consuming messages. It contains message producers and consumers. MessageProducer A channel for sending messages to a destination. It has a target destination. MessageConsumer A channel for receiving messages from a destination. It has a source destination. Destination A named location for messages, either a queue or a topic. Queue A stored sequence of messages. Topic A stored sequence of messages for multicast distribution. Message An application-specific piece of information. AMQ Core Protocol JMS sends and receives messages . Messages are transferred between connected peers using message producers and consumers . Producers and consumers are established over sessions . Sessions are established over connections . Connections are created by connection factories . A sending peer creates a producer to send messages. The producer has a destination that identifies a target queue or topic at the remote peer. A receiving peer creates a consumer to receive messages. Like the producer, the consumer has a destination that identifies a source queue or topic at the remote peer. A destination is either a queue or a topic . In JMS, queues and topics are client-side representations of named broker entities that hold messages. A queue implements point-to-point semantics. Each message is seen by only one consumer, and the message is removed from the queue after it is read. A topic implements publish-subscribe semantics. 
Each message is seen by multiple consumers, and the message remains available to other consumers after it is read. See the JMS tutorial for more information. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo, see Using the sudo command. File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: $ cd <project-dir>
[ "cd <project-dir>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_core_protocol_jms_client/overview
Chapter 7. Database servers
Chapter 7. Database servers 7.1. Introduction to database servers A database server is a service that provides features of a database management system (DBMS). DBMS provides utilities for database administration and interacts with end users, applications, and databases. Red Hat Enterprise Linux 8 provides the following database management systems: MariaDB 10.3 MariaDB 10.5 - available since RHEL 8.4 MariaDB 10.11 - available since RHEL 8.10 MySQL 8.0 PostgreSQL 10 PostgreSQL 9.6 PostgreSQL 12 - available since RHEL 8.1.1 PostgreSQL 13 - available since RHEL 8.4 PostgreSQL 15 - available since RHEL 8.8 PostgreSQL 16 - available since RHEL 8.10 7.2. Using MariaDB The MariaDB server is an open source fast and robust database server that is based on the MySQL technology. MariaDB is a relational database that converts data into structured information and provides an SQL interface for accessing data. It includes multiple storage engines and plugins, as well as geographic information system (GIS) and JavaScript Object Notation (JSON) features. Learn how to install and configure MariaDB on a RHEL system, how to back up MariaDB data, how to migrate from an earlier MariaDB version, and how to replicate a database using the MariaDB Galera Cluster . 7.2.1. Installing MariaDB In RHEL 8, the MariaDB server is available in the following versions, each provided by a separate stream: MariaDB 10.3 MariaDB 10.5 - available since RHEL 8.4 MariaDB 10.11 - available since RHEL 8.10 Note By design, it is impossible to install more than one version (stream) of the same module in parallel. Therefore, you must choose only one of the available streams from the mariadb module. You can use different versions of the MariaDB database server in containers, see Running multiple MariaDB versions in containers . The MariaDB and MySQL database servers cannot be installed in parallel in RHEL 8 due to conflicting RPM packages. You can use the MariaDB and MySQL database servers in parallel in containers, see Running multiple MySQL and MariaDB versions in containers . To install MariaDB , use the following procedure. Procedure Install MariaDB server packages by selecting a stream (version) from the mariadb module and specifying the server profile. For example: Start the mariadb service: Enable the mariadb service to start at boot: Recommended for MariaDB 10.3: To improve security when installing MariaDB , run the following command: The command launches a fully interactive script, which prompts for each step in the process. The script enables you to improve security in the following ways: Setting a password for root accounts Removing anonymous users Disallowing remote root logins (outside the local host) Note The mysql_secure_installation script is no longer valuable in MariaDB 10.5 or later. The security enhancements are part of the default behavior since MariaDB 10.5 . Important If you want to upgrade from an earlier mariadb stream within RHEL 8, follow both procedures described in Switching to a later stream and in Upgrading from MariaDB 10.3 to MariaDB 10.5 or in Upgrading from MariaDB 10.5 to MariaDB 10.11 . 7.2.2. Running multiple MariaDB versions in containers To run different versions of MariaDB on the same host, run them in containers because you cannot install multiple versions (streams) of the same module in parallel. Prerequisites The container-tools module is installed. 
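The yum and systemctl commands for the package-based installation in the procedure above (section 7.2.1) are not reproduced in this extract. A minimal sketch, assuming the mariadb:10.3 stream and the server profile used as the example there:

# Install the MariaDB server from the chosen module stream and profile
yum module install mariadb:10.3/server

# Start the service and enable it at boot
systemctl start mariadb.service
systemctl enable mariadb.service

# Recommended for MariaDB 10.3: interactive security hardening
mysql_secure_installation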
Procedure Use your Red Hat Customer Portal account to authenticate to the registry.redhat.io registry: Skip this step if you are already logged in to the container registry. Run MariaDB 10.3 in a container: For more information about the usage of this container image, see the Red Hat Ecosystem Catalog . Run MariaDB 10.5 in a container: For more information about the usage of this container image, see the Red Hat Ecosystem Catalog . Run MariaDB 10.11 in a container: For more information about the usage of this container image, see the Red Hat Ecosystem Catalog . Note The container names and host ports of the two database servers must differ. To ensure that clients can access the database server on the network, open the host ports in the firewall: Verification Display information about running containers: Connect to the database server and log in as root: Additional resources Building, running, and managing containers Browse containers in the Red Hat Ecosystem Catalog 7.2.3. Configuring MariaDB To configure the MariaDB server for networking, use the following procedure. Procedure Edit the [mysqld] section of the /etc/my.cnf.d/mariadb-server.cnf file. You can set the following configuration directives: bind-address - is the address on which the server listens. Possible options are: a host name an IPv4 address an IPv6 address skip-networking - controls whether the server listens for TCP/IP connections. Possible values are: 0 - to listen for all clients 1 - to listen for local clients only port - the port on which MariaDB listens for TCP/IP connections. Restart the mariadb service: 7.2.4. Setting up TLS encryption on a MariaDB server By default, MariaDB uses unencrypted connections. For secure connections, enable TLS support on the MariaDB server and configure your clients to establish encrypted connections. 7.2.4.1. Placing the CA certificate, server certificate, and private key on the MariaDB server Before you can enable TLS encryption in the MariaDB server, store the certificate authority (CA) certificate, the server certificate, and the private key on the MariaDB server. Prerequisites The following files in Privacy Enhanced Mail (PEM) format have been copied to the server: The private key of the server: server.example.com.key.pem The server certificate: server.example.com.crt.pem The Certificate Authority (CA) certificate: ca.crt.pem For details about creating a private key and certificate signing request (CSR), as well as about requesting a certificate from a CA, see your CA's documentation. Procedure Store the CA and server certificates in the /etc/pki/tls/certs/ directory: Set permissions on the CA and server certificate that enable the MariaDB server to read the files: Because certificates are part of the communication before a secure connection is established, any client can retrieve them without authentication. Therefore, you do not need to set strict permissions on the CA and server certificate files. Store the server's private key in the /etc/pki/tls/private/ directory: Set secure permissions on the server's private key: If unauthorized users have access to the private key, connections to the MariaDB server are no longer secure. Restore the SELinux context: 7.2.4.2. Configuring TLS on a MariaDB server To improve security, enable TLS support on the MariaDB server. As a result, clients can transmit data with the server using TLS encryption. Prerequisites You installed the MariaDB server. The mariadb service is running. 
The following files in Privacy Enhanced Mail (PEM) format exist on the server and are readable by the mysql user: The private key of the server: /etc/pki/tls/private/server.example.com.key.pem The server certificate: /etc/pki/tls/certs/server.example.com.crt.pem The Certificate Authority (CA) certificate /etc/pki/tls/certs/ca.crt.pem The subject distinguished name (DN) or the subject alternative name (SAN) field in the server certificate matches the server's hostname. Procedure Create the /etc/my.cnf.d/mariadb-server-tls.cnf file: Add the following content to configure the paths to the private key, server and CA certificate: If you have a Certificate Revocation List (CRL), configure the MariaDB server to use it: Optional: If you run MariaDB 10.5.2 or later, you can reject connection attempts without encryption. To enable this feature, append: Optional: If you run MariaDB 10.4.6 or later, you can set the TLS versions the server should support. For example, to support TLS 1.2 and TLS 1.3, append: By default, the server supports TLS 1.1, TLS 1.2, and TLS 1.3. Restart the mariadb service: Verification To simplify troubleshooting, perform the following steps on the MariaDB server before you configure the local client to use TLS encryption: Verify that MariaDB now has TLS encryption enabled: If the have_ssl variable is set to yes , TLS encryption is enabled. If you configured the MariaDB service to only support specific TLS versions, display the tls_version variable: Additional resources Placing the CA certificate, server certificate, and private key on the MariaDB server Upgrading from MariaDB 10.3 to MariaDB 10.5 Upgrading from MariaDB 10.5 to MariaDB 10.11 7.2.4.3. Requiring TLS encrypted connections for specific user accounts Users that have access to sensitive data should always use a TLS-encrypted connection to avoid sending data unencrypted over the network. If you cannot configure on the server that a secure transport is required for all connections ( require_secure_transport = on ), configure individual user accounts to require TLS encryption. Prerequisites The MariaDB server has TLS support enabled. The user you configure to require secure transport exists. The client trusts the CA certificate that issued the server's certificate. Procedure Connect as an administrative user to the MariaDB server: If your administrative user has no permissions to access the server remotely, perform the command on the MariaDB server and connect to localhost . Use the REQUIRE SSL clause to enforce that a user must connect using a TLS-encrypted connection: Verification Connect to the server as the example user using TLS encryption: If no error is shown and you have access to the interactive MariaDB console, the connection with TLS succeeds. Attempt to connect as the example user with TLS disabled: The server rejected the login attempt because TLS is required for this user but disabled ( --skip-ssl ). Additional resources Setting up TLS encryption on a MariaDB server 7.2.5. Globally enabling TLS encryption in MariaDB clients If your MariaDB server supports TLS encryption, configure your clients to establish only secure connections and to verify the server certificate. This procedure describes how to enable TLS support for all users on the server. 7.2.5.1. Configuring the MariaDB client to use TLS encryption by default On RHEL, you can globally configure that the MariaDB client uses TLS encryption and verifies that the Common Name (CN) in the server certificate matches the hostname the user connects to. 
This prevents man-in-the-middle attacks. Prerequisites The MariaDB server has TLS support enabled. If the certificate authority (CA) that issued the server's certificate is not trusted by RHEL, the CA certificate has been copied to the client. Procedure If RHEL does not trust the CA that issued the server's certificate: Copy the CA certificate to the /etc/pki/ca-trust/source/anchors/ directory: Set permissions that enable all users to read the CA certificate file: Rebuild the CA trust database: Create the /etc/my.cnf.d/mariadb-client-tls.cnf file with the following content: These settings define that the MariaDB client uses TLS encryption ( ssl ) and that the client compares the hostname with the CN in the server certificate ( ssl-verify-server-cert ). Verification Connect to the server using the hostname, and display the server status: If the SSL entry contains Cipher in use is... , the connection is encrypted. Note that the user you use in this command has permissions to authenticate remotely. If the hostname you connect to does not match the hostname in the TLS certificate of the server, the ssl-verify-server-cert parameter causes the connection to fail. For example, if you connect to localhost : Additional resources The --ssl* parameter descriptions in the mysql(1) man page on your system 7.2.6. Backing up MariaDB data There are two main ways to back up data from a MariaDB database: Logical backup A logical backup consists of the SQL statements necessary to restore the data. This type of backup exports information and records in plain text files. The main advantage of logical backup over physical backup is portability and flexibility. The data can be restored on other hardware configurations, MariaDB versions or Database Management System (DBMS), which is not possible with physical backups. Note that a logical backup can only be performed if the mariadb.service is running. Logical backup does not include log and configuration files. Physical backup A physical backup consists of copies of files and directories that store the content. Physical backup has the following advantages compared to logical backup: Output is more compact. Backup is smaller in size. Backup and restore are faster. Backup includes log and configuration files. Note that physical backup must be performed when the mariadb.service is not running or all tables in the database are locked to prevent changes during the backup. You can use one of the following MariaDB backup approaches to back up data from a MariaDB database: Logical backup with mysqldump Physical online backup using the Mariabackup utility File system backup Replication as a backup solution 7.2.6.1. Performing logical backup with mysqldump The mysqldump client is a backup utility, which can be used to dump a database or a collection of databases for the purpose of a backup or transfer to another database server. The output of mysqldump typically consists of SQL statements to re-create the server table structure, populate it with data, or both. mysqldump can also generate files in other formats, including XML and delimited text formats, such as CSV. 
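The dump and restore commands in the following procedure are not reproduced in this extract. A minimal sketch, assuming a database named mydb and a writable /backup directory (both placeholders):

# Dump a single database to a plain-text SQL file
mysqldump -u root -p mydb > /backup/mydb.sql

# Dump all databases at once
mysqldump -u root -p --all-databases > /backup/all-databases.sql

# Load a dump back into an existing database
mysql -u root -p mydb < /backup/mydb.sql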
To perform the mysqldump backup, you can use one of the following options: Back up one or more selected databases Back up all databases Back up a subset of tables from one database Procedure To dump a single database, run: To dump multiple databases at once, run: To dump all databases, run: To load one or more dumped full databases back into a server, run: To load a database to a remote MariaDB server, run: To dump a subset of tables from one database, add a list of the chosen tables at the end of the mysqldump command: To load a subset of tables dumped from one database, run: Note The db_name database must exist at this point. To see a list of the options that mysqldump supports, run: Additional resources For more information about logical backup with mysqldump , see the MariaDB Documentation . 7.2.6.2. Performing physical online backup using the Mariabackup utility Mariabackup is a utility based on the Percona XtraBackup technology, which enables performing physical online backups of InnoDB, Aria, and MyISAM tables. This utility is provided by the mariadb-backup package from the AppStream repository. Mariabackup supports full backup capability for MariaDB server, which includes encrypted and compressed data. Prerequisites The mariadb-backup package is installed on the system: You must provide Mariabackup with credentials for the user under which the backup will be run. You can provide the credentials either on the command line or by a configuration file. Users of Mariabackup must have the RELOAD , LOCK TABLES , and REPLICATION CLIENT privileges. To create a backup of a database using Mariabackup , use the following procedure. Procedure To create a backup while providing credentials on the command line, run: The target-dir option defines the directory where the backup files are stored. If you want to perform a full backup, the target directory must be empty or not exist. The user and password options allow you to configure the user name and the password. To create a backup with credentials set in a configuration file: Create a configuration file in the /etc/my.cnf.d/ directory, for example, /etc/my.cnf.d/mariabackup.cnf . Add the following lines into the [xtrabackup] or [mysqld] section of the new file: Perform the backup: Additional resources Full Backup and Restore with Mariabackup 7.2.6.3. Restoring data using the Mariabackup utility When the backup is complete, you can restore the data from the backup by using the mariabackup command with one of the following options: --copy-back allows you to keep the original backup files. --move-back moves the backup files to the data directory and removes the original backup files. To restore data using the Mariabackup utility, use the following procedure. Prerequisites Verify that the mariadb service is not running: Verify that the data directory is empty. Users of Mariabackup must have the RELOAD , LOCK TABLES , and REPLICATION CLIENT privileges. Procedure Run the mariabackup command: To restore data and keep the original backup files, use the --copy-back option: To restore data and remove the original backup files, use the --move-back option: Fix the file permissions. When restoring a database, Mariabackup preserves the file and directory privileges of the backup. However, Mariabackup writes the files to disk as the user and group restoring the database. After restoring a backup, you may need to adjust the owner of the data directory to match the user and group for the MariaDB server, typically mysql for both. 
For example, to recursively change ownership of the files to the mysql user and group: Start the mariadb service: Additional resources Full Backup and Restore with Mariabackup 7.2.6.4. Performing file system backup To create a file system backup of MariaDB data files, copy the content of the MariaDB data directory to your backup location. To back up also your current configuration or the log files, use the optional steps of the following procedure. Procedure Stop the mariadb service: Copy the data files to the required location: Optional: Copy the configuration files to the required location: Optional: Copy the log files to the required location: Start the mariadb service: When loading the backed up data from the backup location to the /var/lib/mysql directory, ensure that mysql:mysql is an owner of all data in /var/lib/mysql : 7.2.6.5. Replication as a backup solution Replication is an alternative backup solution for source servers. If a source server replicates to a replica server, backups can be run on the replica without any impact on the source. The source can still run while you shut down the replica and back the data up from the replica. Warning Replication itself is not a sufficient backup solution. Replication protects source servers against hardware failures, but it does not ensure protection against data loss. It is recommended that you use any other backup solution on the replica together with this method. Additional resources Replicating MariaDB with Galera Replication as a backup solution 7.2.7. Migrating to MariaDB 10.3 RHEL 7 contains MariaDB 5.5 as the default implementation of a server from the MySQL databases family. Later versions of the MariaDB database server are available as Software Collections for RHEL 7. RHEL 8 provides MariaDB 10.3 , MariaDB 10.5 , MariaDB 10.11 , and MySQL 8.0 . This part describes migration to MariaDB 10.3 from a RHEL 7 or Red Hat Software Collections version of MariaDB . If you want to migrate from MariaDB 10.3 to MariaDB 10.5 within RHEL 8, see Upgrading from MariaDB 10.3 to MariaDB 10.5 instead. If you want to migrate from MariaDB 10.5 to MariaDB 10.11 within RHEL 8, see Upgrading from MariaDB 10.5 to MariaDB 10.11 . 7.2.7.1. Notable differences between the RHEL 7 and RHEL 8 versions of MariaDB The most important changes between MariaDB 5.5 and MariaDB 10.3 are: MariaDB Galera Cluster , a synchronous multi-source cluster, is a standard part of MariaDB since 10.1. The ARCHIVE storage engine is no longer enabled by default, and the plugin needs to be specifically enabled. The BLACKHOLE storage engine is no longer enabled by default, and the plugin needs to be specifically enabled. InnoDB is used as the default storage engine instead of XtraDB, which was used in MariaDB 10.1 and earlier versions. For more details, see Why does MariaDB 10.2 use InnoDB instead of XtraDB? . The new mariadb-connector-c packages provide a common client library for MySQL and MariaDB. This library is usable with any version of the MySQL and MariaDB database servers. As a result, the user is able to connect one build of an application to any of the MySQL and MariaDB servers distributed with Red Hat Enterprise Linux 8. To migrate from MariaDB 5.5 to MariaDB 10.3 , you need to perform multiple configuration changes . 7.2.7.2. Configuration changes The recommended migration path from MariaDB 5.5 to MariaDB 10.3 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively. 
The main advantage of upgrading one minor version at a time is better adaptation of the database, including both data and configuration, to the changes. The upgrade ends on the same major version as is available in RHEL 8 (MariaDB 10.3), which significantly reduces configuration changes or other issues. For more information about configuration changes when migrating from MariaDB 5.5 to MariaDB 10.0 , see Migrating to MariaDB 10.0 in Red Hat Software Collections documentation. The migration to following successive versions of MariaDB and the required configuration changes is described in these documents: Migrating to MariaDB 10.1 in Red Hat Software Collections documentation. Migrating to MariaDB 10.2 in Red Hat Software Collections documentation. Migrating to MariaDB 10.3 in Red Hat Software Collections documentation. Note Migration directly from MariaDB 5.5 to MariaDB 10.3 is also possible, but you must perform all configuration changes that are required by differences described in the migration documents above. 7.2.7.3. In-place upgrade using the mysql_upgrade utility To migrate the database files to RHEL 8, users of MariaDB on RHEL 7 must perform the in-place upgrade using the mysql_upgrade utility. The mysql_upgrade utility is provided by the mariadb-server-utils subpackage, which is installed as a dependency of the mariadb-server package. To perform an in-place upgrade, you must copy binary data files to the /var/lib/mysql/ data directory on the RHEL 8 system and use the mysql_upgrade utility. You can use this method for migrating data from: The Red Hat Enterprise Linux 7 version of MariaDB 5.5 The Red Hat Software Collections versions of: MariaDB 5.5 (no longer supported) MariaDB 10.0 (no longer supported) MariaDB 10.1 (no longer supported) MariaDB 10.2 (no longer supported) MariaDB 10.3 (no longer supported) Note that it is recommended to upgrade to MariaDB 10.3 by one version successively. See the respective Migration chapters in the Release Notes for Red Hat Software Collections . Note If you are upgrading from the RHEL 7 version of MariaDB , the source data is stored in the /var/lib/mysql/ directory. In case of Red Hat Software Collections versions of MariaDB , the source data directory is /var/opt/rh/<collection_name>/lib/mysql/ (with the exception of the mariadb55 , which uses the /opt/rh/mariadb55/root/var/lib/mysql/ data directory). To perform an upgrade using the mysql_upgrade utility, use the following procedure. Prerequisites Before performing the upgrade, back up all your data stored in the MariaDB databases. Procedure Ensure that the mariadb-server package is installed on the RHEL 8 system: Ensure that the mariadb service is not running on either of the source and target systems at the time of copying data: Copy the data from the source location to the /var/lib/mysql/ directory on the RHEL 8 target system. Set the appropriate permissions and SELinux context for copied files on the target system: Start the MariaDB server on the target system: Run the mysql_upgrade command to check and repair internal tables: When the upgrade is complete, verify that all configuration files within the /etc/my.cnf.d/ directory include only options valid for MariaDB 10.3 . Important There are certain risks and known problems related to an in-place upgrade. For example, some queries might not work or they will be run in a different order than before the upgrade. 
For more information about these risks and problems, and for general information about an in-place upgrade, see MariaDB 10.3 Release Notes . 7.2.8. Upgrading from MariaDB 10.3 to MariaDB 10.5 This part describes migration from MariaDB 10.3 to MariaDB 10.5 within RHEL 8. 7.2.8.1. Notable differences between MariaDB 10.3 and MariaDB 10.5 Significant changes between MariaDB 10.3 and MariaDB 10.5 include: MariaDB now uses the unix_socket authentication plugin by default. The plugin enables users to use operating system credentials when connecting to MariaDB through the local UNIX socket file. MariaDB adds mariadb-* named binaries and mysql* symbolic links pointing to the mariadb-* binaries. For example, the mysqladmin , mysqlaccess , and mysqlshow symlinks point to the mariadb-admin , mariadb-access , and mariadb-show binaries, respectively. The SUPER privilege has been split into several privileges to better align with each user role. As a result, certain statements have changed required privileges. In parallel replication, the slave_parallel_mode now defaults to optimistic . In the InnoDB storage engine, defaults of the following variables have been changed: innodb_adaptive_hash_index to OFF and innodb_checksum_algorithm to full_crc32 . MariaDB now uses the libedit implementation of the underlying software managing the MariaDB command history (the .mysql_history file) instead of the previously used readline library. This change impacts users working directly with the .mysql_history file. Note that .mysql_history is a file managed by the MariaDB or MySQL applications, and users should not work with the file directly. The human-readable appearance is coincidental. Note To increase security, you can consider not maintaining a history file. To disable the command history recording: Remove the .mysql_history file if it exists. Use either of the following approaches: Set the MYSQL_HISTFILE variable to /dev/null and include this setting in any of your shell's startup files. Change the .mysql_history file to a symbolic link to /dev/null : MariaDB Galera Cluster has been upgraded to version 4 with the following notable changes: Galera adds a new streaming replication feature, which supports replicating transactions of unlimited size. During an execution of streaming replication, a cluster replicates a transaction in small fragments. Galera now fully supports Global Transaction ID (GTID). The default value for the wsrep_on option in the /etc/my.cnf.d/galera.cnf file has changed from 1 to 0 to prevent end users from starting wsrep replication without configuring required additional options. Changes to the PAM plugin in MariaDB 10.5 include: MariaDB 10.5 adds a new version of the Pluggable Authentication Modules (PAM) plugin. The PAM plugin version 2.0 performs PAM authentication using a separate setuid root helper binary, which enables MariaDB to use additional PAM modules. The helper binary can be executed only by users in the mysql group. By default, the group contains only the mysql user. Red Hat recommends that administrators do not add more users to the mysql group to prevent password-guessing attacks without throttling or logging through this helper utility. In MariaDB 10.5 , the Pluggable Authentication Modules (PAM) plugin and its related files have been moved to a new package, mariadb-pam . As a result, no new setuid root binary is introduced on systems that do not use PAM authentication for MariaDB . 
The mariadb-pam package contains both PAM plugin versions: version 2.0 is the default, and version 1.0 is available as the auth_pam_v1 shared object library. The mariadb-pam package is not installed by default with the MariaDB server. To make the PAM authentication plugin available in MariaDB 10.5 , install the mariadb-pam package manually. 7.2.8.2. Upgrading from a RHEL 8 version of MariaDB 10.3 to MariaDB 10.5 This procedure describes upgrading from the mariadb:10.3 module stream to the mariadb:10.5 module stream using the yum and mariadb-upgrade utilities. The mariadb-upgrade utility is provided by the mariadb-server-utils subpackage, which is installed as a dependency of the mariadb-server package. Prerequisites Before performing the upgrade, back up all your data stored in the MariaDB databases. Procedure Stop the MariaDB server: Execute the following command to determine if your system is prepared for switching to a later stream: This command must finish with the message Nothing to do. Complete! For more information, see Switching to a later stream . Reset the mariadb module on your system: Enable the new mariadb:10.5 module stream: Synchronize installed packages to perform the change between streams: This will update all installed MariaDB packages. Adjust the configuration so that option files located in /etc/my.cnf.d/ include only options valid for MariaDB 10.5 . For details, see upstream documentation for MariaDB 10.4 and MariaDB 10.5 . Start the MariaDB server. When upgrading a database running standalone: When upgrading a Galera cluster node: The mariadb service will be started automatically. Execute the mariadb-upgrade utility to check and repair internal tables. When upgrading a database running standalone: When upgrading a Galera cluster node: Important There are certain risks and known problems related to an in-place upgrade. For example, some queries might not work or they will be run in a different order than before the upgrade. For more information about these risks and problems, and for general information about an in-place upgrade, see MariaDB 10.5 Release Notes . 7.2.9. Upgrading from MariaDB 10.5 to MariaDB 10.11 This part describes migration from MariaDB 10.5 to MariaDB 10.11 within RHEL 8. 7.2.9.1. Notable differences between MariaDB 10.5 and MariaDB 10.11 Significant changes between MariaDB 10.5 and MariaDB 10.11 include: A new sys_schema feature is a collection of views, functions, and procedures to provide information about database usage. The CREATE TABLE , ALTER TABLE , RENAME TABLE , DROP TABLE , DROP DATABASE , and related Data Definition Language (DDL) statements are now atomic. The statement must be fully completed, otherwise the changes are reverted. Note that when deleting multiple tables with DROP TABLE , only each individual drop is atomic, not the full list of tables. A new GRANT ... TO PUBLIC privilege is available. The SUPER and READ ONLY ADMIN privileges are now separate. You can now store universally unique identifiers in the new UUID database data type. MariaDB now supports the Secure Socket Layer (SSL) protocol version 3. The MariaDB server now requires correctly configured SSL to start. Previously, MariaDB silently disabled SSL and used insecure connections in case of misconfigured SSL. MariaDB now supports the natural sort order through the natural_sort_key() function. A new SFORMAT function is now available for arbitrary text formatting. The utf8 character set (and related collations) is now by default an alias for utf8mb3 . 
MariaDB supports the Unicode Collation Algorithm (UCA) 14 collations. systemd socket activation files for MariaDB are now available in the /usr/share/ directory. Note that they are not a part of the default configuration in RHEL as opposed to upstream. Error messages now contain the MariaDB string instead of MySQL . Error messages are now available in the Chinese language. The default logrotate file has changed significantly. Review your configuration before migrating to MariaDB 10.11 . For MariaDB and MySQL clients, the connection property specified on the command line (for example, --port=3306 ), now forces the protocol type of communication between the client and the server, such as tcp , socket , pipe , or memory . Previously, for example, the specified port was ignored if a MariaDB client connected through a UNIX socket. 7.2.9.2. Upgrading from a RHEL 8 version of MariaDB 10.5 to MariaDB 10.11 This procedure describes upgrading from the mariadb:10.5 module stream to the mariadb:10.11 module stream using the yum and mariadb-upgrade utilities. The mariadb-upgrade utility is provided by the mariadb-server-utils subpackage, which is installed as a dependency of the mariadb-server package. Prerequisites Before performing the upgrade, back up all your data stored in the MariaDB databases. Procedure Stop the MariaDB server: Execute the following command to determine if your system is prepared for switching to a later stream: This command must finish with the message Nothing to do. Complete! For more information, see Switching to a later stream . Reset the mariadb module on your system: Enable the new mariadb:10.11 module stream: Synchronize installed packages to perform the change between streams: This will update all installed MariaDB packages. Adjust the configuration so that option files located in /etc/my.cnf.d/ include only options valid for MariaDB 10.11 . For details, see upstream documentation for MariaDB 10.6 and MariaDB 10.11 . Start the MariaDB server. When upgrading a database running standalone: When upgrading a Galera cluster node: The mariadb service will be started automatically. Execute the mariadb-upgrade utility to check and repair internal tables. When upgrading a database running standalone: When upgrading a Galera cluster node: Important There are certain risks and known problems related to an in-place upgrade. For example, some queries might not work or they will be run in a different order than before the upgrade. For more information about these risks and problems, and for general information about an in-place upgrade, see MariaDB 10.11 Release Notes . 7.2.10. Replicating MariaDB with Galera This section describes how to replicate a MariaDB database using the Galera solution on Red Hat Enterprise Linux 8. 7.2.10.1. Introduction to MariaDB Galera Cluster Galera replication is based on the creation of a synchronous multi-source MariaDB Galera Cluster consisting of multiple MariaDB servers. Unlike the traditional primary/replica setup where replicas are usually read-only, nodes in the MariaDB Galera Cluster can be all writable. The interface between Galera replication and a MariaDB database is defined by the write set replication API ( wsrep API ). 
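The module commands for the stream switch in the procedure above (section 7.2.9.2) are not reproduced in this extract. A hedged sketch for a standalone server, following the yum module workflow described there (back up your databases first):

# Stop the server before switching streams
systemctl stop mariadb.service

# Reset the mariadb module and enable the new stream
yum module reset mariadb
yum module enable mariadb:10.11

# Synchronize installed packages to the new stream
yum distro-sync

# Start the server, then check and repair internal tables
systemctl start mariadb.service
mariadb-upgrade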
The main features of MariaDB Galera Cluster are: Synchronous replication Active-active multi-source topology Read and write to any cluster node Automatic membership control, failed nodes drop from the cluster Automatic node joining Parallel replication on row level Direct client connections: users can log on to the cluster nodes, and work with the nodes directly while the replication runs Synchronous replication means that a server replicates a transaction at commit time by broadcasting the write set associated with the transaction to every node in the cluster. The client (user application) connects directly to the Database Management System (DBMS), and experiences behavior that is similar to native MariaDB . Synchronous replication guarantees that a change that happened on one node in the cluster happens on other nodes in the cluster at the same time. Therefore, synchronous replication has the following advantages over asynchronous replication: No delay in propagation of the changes between particular cluster nodes All cluster nodes are always consistent The latest changes are not lost if one of the cluster nodes crashes Transactions on all cluster nodes are executed in parallel Causality across the whole cluster Additional resources About Galera replication What is MariaDB Galera Cluster Getting started with MariaDB Galera Cluster 7.2.10.2. Components to build MariaDB Galera Cluster To build MariaDB Galera Cluster , you must install the following packages on your system: mariadb-server-galera - contains support files and scripts for MariaDB Galera Cluster . mariadb-server - is patched by MariaDB upstream to include the write set replication API ( wsrep API ). This API provides the interface between Galera replication and MariaDB . galera - is patched by MariaDB upstream to add full support for MariaDB . The galera package contains the following: Galera Replication Library provides the whole replication functionality. The Galera Arbitrator utility can be used as a cluster member that participates in voting in split-brain scenarios. However, Galera Arbitrator cannot participate in the actual replication. Galera Systemd service and Galera wrapper script which are used for deploying the Galera Arbitrator utility. MariaDB 10.3 , MariaDB 10.5 , and MariaDB 10.11 in RHEL 8 include a Red Hat version of the garbd systemd service and a wrapper script for the galera package in the /usr/lib/systemd/system/garbd.service and /usr/sbin/garbd-wrapper files, respectively. Since RHEL 8.6, MariaDB distributed with RHEL also provides an upstream version of these files located at /usr/share/doc/galera/garb-systemd and /usr/share/doc/galera/garbd.service . Additional resources Galera Replication Library Galera Arbitrator 7.2.10.3. Deploying MariaDB Galera Cluster Prerequisites All of the nodes in the cluster have TLS set up . All certificates on all nodes must have the Extended Key Usage field set to: Procedure Install MariaDB Galera Cluster packages by selecting a stream (version) from the mariadb module and specifying the galera profile. For example: As a result, the following packages are installed: mariadb-server-galera mariadb-server galera The mariadb-server-galera package pulls the mariadb-server and galera packages as its dependency. For more information about which packages you need to install to build MariaDB Galera Cluster , see Components to build MariaDB Cluster . Update the MariaDB server replication configuration before the system is added to a cluster for the first time. 
The default configuration is distributed in the /etc/my.cnf.d/galera.cnf file. Before deploying MariaDB Galera Cluster , set the wsrep_cluster_address option in the /etc/my.cnf.d/galera.cnf file on all nodes to start with the following string: For the initial node, it is possible to set wsrep_cluster_address as an empty list: For all other nodes, set wsrep_cluster_address to include an address to any node which is already a part of the running cluster. For example: For more information about how to set Galera Cluster address, see Galera Cluster Address . Enable the wsrep API on every node by setting the wsrep_on=1 option in the /etc/my.cnf.d/galera.cnf configuration file. Add the wsrep_provider_options variable to the Galera configuration file with the TLS keys and certificates. For example: Bootstrap a first node of a new cluster by running the following wrapper on that node: This wrapper ensures that the MariaDB server daemon ( mysqld ) runs with the --wsrep-new-cluster option. This option provides the information that there is no existing cluster to connect to. Therefore, the node creates a new UUID to identify the new cluster. Note The mariadb service supports a systemd method for interacting with multiple MariaDB server processes. Therefore, in cases with multiple running MariaDB servers, you can bootstrap a specific instance by specifying the instance name as a suffix: Connect other nodes to the cluster by running the following command on each of the nodes: As a result, the node connects to the cluster, and synchronizes itself with the state of the cluster. Additional resources Getting started with MariaDB Galera Cluster 7.2.10.4. Adding a new node to MariaDB Galera Cluster To add a new node to MariaDB Galera Cluster , use the following procedure. Note that you can also use this procedure to reconnect an already existing node. Procedure On the particular node, provide an address to one or more existing cluster members in the wsrep_cluster_address option within the [mariadb] section of the /etc/my.cnf.d/galera.cnf configuration file : When a new node connects to one of the existing cluster nodes, it is able to see all nodes in the cluster. However, preferably list all nodes of the cluster in wsrep_cluster_address . As a result, any node can join a cluster by connecting to any other cluster node, even if one or more cluster nodes are down. When all members agree on the membership, the cluster's state is changed. If the new node's state is different from the state of the cluster, the new node requests either an Incremental State Transfer (IST) or a State Snapshot Transfer (SST) to ensure consistency with the other nodes. Additional resources Getting started with MariaDB Galera Cluster Introduction to State Snapshot Transfers Securing Communications in Galera Cluster 7.2.10.5. Restarting MariaDB Galera Cluster If you shut down all nodes at the same time, you stop the cluster, and the running cluster no longer exists. However, the cluster's data still exist. To restart the cluster, bootstrap a first node as described in Configuring MariaDB Galera Cluster . Warning If the cluster is not bootstrapped, and mariadb on the first node is started with only the systemctl start mariadb.service command, the node tries to connect to at least one of the nodes listed in the wsrep_cluster_address option in the /etc/my.cnf.d/galera.cnf file. If no nodes are currently running, the restart fails. Additional resources Getting started with MariaDB Galera Cluster 7.2.11. 
Developing MariaDB client applications Red Hat recommends developing your MariaDB client applications against the MariaDB client library. The development files and programs necessary to build applications against the MariaDB client library are provided by the mariadb-connector-c-devel package. Instead of using a direct library name, use the mariadb_config program, which is distributed in the mariadb-connector-c-devel package. This program ensures that the correct build flags are returned. 7.3. Using MySQL The MySQL server is an open source fast and robust database server. MySQL is a relational database that converts data into structured information and provides an SQL interface for accessing data. It includes multiple storage engines and plugins, as well as geographic information system (GIS) and JavaScript Object Notation (JSON) features. Learn how to install and configure MySQL on a RHEL system, how to back up MySQL data, how to migrate from an earlier MySQL version, and how to replicate a MySQL . 7.3.1. Installing MySQL In RHEL 8, the MySQL 8.0 server is available as the mysql:8.0 module stream. Note The MySQL and MariaDB database servers cannot be installed in parallel in RHEL 8 due to conflicting RPM packages. You can use the MySQL and MariaDB database servers in parallel in containers, see Running multiple MySQL and MariaDB versions in containers . To install MySQL , use the following procedure. Procedure Install MySQL server packages by selecting the 8.0 stream (version) from the mysql module and specifying the server profile: Start the mysqld service: Enable the mysqld service to start at boot: Recommended: To improve security when installing MySQL , run the following command: The command launches a fully interactive script, which prompts for each step in the process. The script enables you to improve security in the following ways: Setting a password for root accounts Removing anonymous users Disallowing remote root logins (outside the local host) 7.3.2. Running multiple MySQL and MariaDB versions in containers To run both MySQL and MariaDB on the same host, run them in containers because you cannot install these database servers in parallel due to conflicting RPM packages. This procedure includes MySQL 8.0 and MariaDB 10.5 as examples but you can use any MySQL or MariaDB container version available in the Red Hat Ecosystem Catalog. Prerequisites The container-tools module is installed. Procedure Use your Red Hat Customer Portal account to authenticate to the registry.redhat.io registry: Skip this step if you are already logged in to the container registry. Run MySQL 8.0 in a container: For more information about the usage of this container image, see the Red Hat Ecosystem Catalog . Run MariaDB 10.5 in a container: For more information about the usage of this container image, see the Red Hat Ecosystem Catalog . Run MariaDB 10.11 in a container: For more information about the usage of this container image, see the Red Hat Ecosystem Catalog . Note The container names and host ports of the two database servers must differ. To ensure that clients can access the database server on the network, open the host ports in the firewall: Verification Display information about running containers: Connect to the database server and log in as root: Additional resources Building, running, and managing containers Browse containers in the Red Hat Ecosystem Catalog 7.3.3. Configuring MySQL To configure the MySQL server for networking, use the following procedure. 
Procedure Edit the [mysqld] section of the /etc/my.cnf.d/mysql-server.cnf file. You can set the following configuration directives: bind-address - is the address on which the server listens. Possible options are: a host name an IPv4 address an IPv6 address skip-networking - controls whether the server listens for TCP/IP connections. Possible values are: 0 - to listen for all clients 1 - to listen for local clients only port - the port on which MySQL listens for TCP/IP connections. Restart the mysqld service: 7.3.4. Setting up TLS encryption on a MySQL server By default, MySQL uses unencrypted connections. For secure connections, enable TLS support on the MySQL server and configure your clients to establish encrypted connections. 7.3.4.1. Placing the CA certificate, server certificate, and private key on the MySQL server Before you can enable TLS encryption on the MySQL server, store the certificate authority (CA) certificate, the server certificate, and the private key on the MySQL server. Prerequisites The following files in Privacy Enhanced Mail (PEM) format have been copied to the server: The private key of the server: server.example.com.key.pem The server certificate: server.example.com.crt.pem The Certificate Authority (CA) certificate: ca.crt.pem For details about creating a private key and certificate signing request (CSR), as well as about requesting a certificate from a CA, see your CA's documentation. Procedure Store the CA and server certificates in the /etc/pki/tls/certs/ directory: Set permissions on the CA and server certificate that enable the MySQL server to read the files: Because certificates are part of the communication before a secure connection is established, any client can retrieve them without authentication. Therefore, you do not need to set strict permissions on the CA and server certificate files. Store the server's private key in the /etc/pki/tls/private/ directory: Set secure permissions on the server's private key: If unauthorized users have access to the private key, connections to the MySQL server are no longer secure. Restore the SELinux context: 7.3.4.2. Configuring TLS on a MySQL server To improve security, enable TLS support on the MySQL server. As a result, clients can transmit data with the server using TLS encryption. Prerequisites You installed the MySQL server. The mysqld service is running. The following files in Privacy Enhanced Mail (PEM) format exist on the server and are readable by the mysql user: The private key of the server: /etc/pki/tls/private/server.example.com.key.pem The server certificate: /etc/pki/tls/certs/server.example.com.crt.pem The Certificate Authority (CA) certificate /etc/pki/tls/certs/ca.crt.pem The subject distinguished name (DN) or the subject alternative name (SAN) field in the server certificate matches the server's host name. Procedure Create the /etc/my.cnf.d/mysql-server-tls.cnf file: Add the following content to configure the paths to the private key, server and CA certificate: If you have a Certificate Revocation List (CRL), configure the MySQL server to use it: Optional: Reject connection attempts without encryption. To enable this feature, append: Optional: Set the TLS versions the server should support. For example, to support TLS 1.2 and TLS 1.3, append: By default, the server supports TLS 1.1, TLS 1.2, and TLS 1.3. 
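Taken together, the resulting /etc/my.cnf.d/mysql-server-tls.cnf file might look similar to the following sketch. The certificate and key paths follow the earlier examples, the option names are the standard MySQL 8.0 system variables, and the ssl_crl , require_secure_transport , and tls_version lines are needed only if you use those optional features; the CRL file name is an assumption:

[mysqld]
ssl_key = /etc/pki/tls/private/server.example.com.key.pem
ssl_cert = /etc/pki/tls/certs/server.example.com.crt.pem
ssl_ca = /etc/pki/tls/certs/ca.crt.pem
ssl_crl = /etc/pki/tls/certs/example.crl.pem
require_secure_transport = on
tls_version = TLSv1.2,TLSv1.3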
Restart the mysqld service: Verification To simplify troubleshooting, perform the following steps on the MySQL server before you configure the local client to use TLS encryption: Verify that MySQL now has TLS encryption enabled: If you configured the MySQL server to only support specific TLS versions, display the tls_version variable: Verify that the server uses the correct CA certificate, server certificate, and private key files: Additional resources Placing the CA certificate, server certificate, and private key on the MySQL server 7.3.4.3. Requiring TLS encrypted connections for specific user accounts Users that have access to sensitive data should always use a TLS-encrypted connection to avoid sending data unencrypted over the network. If you cannot configure on the server that a secure transport is required for all connections ( require_secure_transport = on ), configure individual user accounts to require TLS encryption. Prerequisites The MySQL server has TLS support enabled. The user you configure to require secure transport exists. The CA certificate is stored on the client. Procedure Connect as an administrative user to the MySQL server: If your administrative user has no permissions to access the server remotely, perform the command on the MySQL server and connect to localhost . Use the REQUIRE SSL clause to enforce that a user must connect using a TLS-encrypted connection: Verification Connect to the server as the example user using TLS encryption: If no error is shown and you have access to the interactive MySQL console, the connection with TLS succeeds. By default, the client automatically uses TLS encryption if the server provides it. Therefore, the --ssl-ca=ca.crt.pem and --ssl-mode=VERIFY_IDENTITY options are not required, but improve the security because, with these options, the client verifies the identity of the server. Attempt to connect as the example user with TLS disabled: The server rejected the login attempt because TLS is required for this user but disabled ( --ssl-mode=DISABLED ). Additional resources Configuring TLS on a MySQL server 7.3.5. Globally enabling TLS encryption with CA certificate validation in MySQL clients If your MySQL server supports TLS encryption, configure your clients to establish only secure connections and to verify the server certificate. This procedure describes how to enable TLS support for all users on the server. 7.3.5.1. Configuring the MySQL client to use TLS encryption by default On RHEL, you can globally configure that the MySQL client uses TLS encryption and verifies that the Common Name (CN) in the server certificate matches the hostname the user connects to. This prevents man-in-the-middle attacks. Prerequisites The MySQL server has TLS support enabled. The CA certificate is stored in the /etc/pki/tls/certs/ca.crt.pem file on the client. Procedure Create the /etc/my.cnf.d/mysql-client-tls.cnf file with the following content: These settings define that the MySQL client uses TLS encryption and that the client compares the hostname with the CN in the server certificate ( ssl-mode=VERIFY_IDENTITY ). Additionally, it specifies the path to the CA certificate ( ssl-ca ). Verification Connect to the server using the hostname, and display the server status: If the SSL entry contains Cipher in use is... , the connection is encrypted. Note that the user you use in this command has permissions to authenticate remotely. 
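As a concrete sketch, the client-side configuration file and a connection check might look as follows; the hostname and user are placeholders, and the cipher shown in the output is only illustrative. The contents of /etc/my.cnf.d/mysql-client-tls.cnf :

[client]
ssl-mode=VERIFY_IDENTITY
ssl-ca=/etc/pki/tls/certs/ca.crt.pem

A status check against the server hostname, with the output truncated to the relevant line:

# mysql -u root -p -h server.example.com -e status
...
SSL: Cipher in use is TLS_AES_256_GCM_SHA384
...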
If the hostname you connect to does not match the hostname in the TLS certificate of the server, the ssl-mode=VERIFY_IDENTITY parameter causes the connection to fail. For example, if you connect to localhost : Additional resources The --ssl* parameter descriptions in the mysql(1) man page on your system 7.3.6. Backing up MySQL data There are two main ways to back up data from a MySQL database: Logical backup Logical backup consists of the SQL statements necessary to restore the data. This type of backup exports information and records in plain text files. The main advantage of logical backup over physical backup is portability and flexibility. The data can be restored on other hardware configurations, MySQL versions, or Database Management Systems (DBMS), which is not possible with physical backups. Note that logical backup can only be performed if the mysqld.service is running. Logical backup does not include log and configuration files. Physical backup Physical backup consists of copies of files and directories that store the content. Physical backup has the following advantages compared to logical backup: Output is more compact. Backup is smaller in size. Backup and restore are faster. Backup includes log and configuration files. Note that physical backup must be performed when the mysqld.service is not running or all tables in the database are locked to prevent changes during the backup. You can use one of the following MySQL backup approaches to back up data from a MySQL database: Logical backup with mysqldump File system backup Replication as a backup solution 7.3.6.1. Performing logical backup with mysqldump The mysqldump client is a backup utility, which can be used to dump a database or a collection of databases for the purpose of a backup or transfer to another database server. The output of mysqldump typically consists of SQL statements to re-create the server table structure, populate it with data, or both. mysqldump can also generate files in other formats, including XML and delimited text formats, such as CSV. To perform the mysqldump backup, you can use one of the following options: Back up one or more selected databases Back up all databases Back up a subset of tables from one database Procedure To dump a single database, run: To dump multiple databases at once, run: To dump all databases, run: To load one or more dumped full databases back into a server, run: To load a database to a remote MySQL server, run: To dump a subset of tables from one database, add a list of the chosen tables at the end of the mysqldump command: To load a subset of tables dumped from one database, run: Note The db_name database must exist at this point. To see a list of the options that mysqldump supports, run: Additional resources Logical backup with mysqldump 7.3.6.2. Performing file system backup To create a file system backup of MySQL data files, copy the content of the MySQL data directory to your backup location. To also back up your current configuration or the log files, use the optional steps of the following procedure. Procedure Stop the mysqld service: Copy the data files to the required location: Optional: Copy the configuration files to the required location: Optional: Copy the log files to the required location: Start the mysqld service: When loading the backed up data from the backup location to the /var/lib/mysql directory, ensure that mysql:mysql is an owner of all data in /var/lib/mysql : 7.3.6.3.
Replication as a backup solution Replication is an alternative backup solution for source servers. If a source server replicates to a replica server, backups can be run on the replica without any impact on the source. The source can still run while you shut down the replica and back the data up from the replica. For instructions on how to replicate a MySQL database, see Replicating MySQL . Warning Replication itself is not a sufficient backup solution. Replication protects source servers against hardware failures, but it does not ensure protection against data loss. It is recommended that you use any other backup solution on the replica together with this method. Additional resources MySQL replication documentation 7.3.7. Migrating to a RHEL 8 version of MySQL 8.0 RHEL 7 contains MariaDB 5.5 as the default implementation of a server from the MySQL databases family. The Red Hat Software Collections offering for RHEL 7 provides MySQL 8.0 and several versions of MariaDB . RHEL 8 provides MySQL 8.0 , MariaDB 10.3 , and MariaDB 10.5 . This procedure describes migration from a Red Hat Software Collections version of MySQL 8.0 to a RHEL 8 version of MySQL 8.0 using the mysql_upgrade utility. The mysql_upgrade utility is provided by the mysql-server package. Note In the Red Hat Software Collections version of MySQL , the source data directory is /var/opt/rh/rh-mysql80/lib/mysql/ . In RHEL 8, MySQL data is stored in the /var/lib/mysql/ directory. Prerequisites Before performing the upgrade, back up all your data stored in the MySQL databases. Procedure Ensure that the mysql-server package is installed on the RHEL 8 system: Ensure that the mysqld service is not running on either of the source and target systems at the time of copying data: Copy the data from the /var/opt/rh/rh-mysql80/lib/mysql/ directory on the RHEL 7 source system to the /var/lib/mysql/ directory on the RHEL 8 target system. Set the appropriate permissions and SELinux context for copied files on the target system: Ensure that mysql:mysql is an owner of all data in the /var/lib/mysql directory: Start the MySQL server on the target system: Note: In earlier versions of MySQL , the mysql_upgrade command was needed to check and repair internal tables. This is now done automatically when you start the server. 7.3.8. Replicating MySQL with TLS encryption MySQL provides various configuration options for replication, ranging from basic to advanced. This section describes a transaction-based way to replicate in MySQL on freshly installed MySQL servers using global transaction identifiers (GTIDs). Using GTIDs simplifies transaction identification and consistency verification. To set up replication in MySQL , you must: Configure a source server Configure a replica server Create a replication user on the source server Connect the replica server to the source server Important If you want to use existing MySQL servers for replication, you must first synchronize data. See the upstream documentation for more information. 7.3.8.1. Configuring a MySQL source server You can set configuration options required for a MySQL source server to properly run and replicate all changes made on the database server through the TLS protocol. Prerequisites The source server is installed. The source server has TLS set up . Important The source and replica certificates must be signed by the same certificate authority. 
Procedure Include the following options in the /etc/my.cnf.d/mysql-server.cnf file under the [mysqld] section: bind-address= source_ip_adress This option is required for connections made from replicas to the source. server-id= id The id must be unique. log_bin= path_to_source_server_log This option defines a path to the binary log file of the MySQL source server. For example: log_bin=/var/log/mysql/mysql-bin.log . gtid_mode=ON This option enables global transaction identifiers (GTIDs) on the server. enforce-gtid-consistency=ON The server enforces GTID consistency by allowing execution of only statements that can be safely logged using a GTID. Optional: binlog_do_db= db_name Use this option if you want to replicate only selected databases. To replicate more than one selected database, specify each of the databases separately: Optional: binlog_ignore_db= db_name Use this option to exclude a specific database from replication. Restart the mysqld service: 7.3.8.2. Configuring a MySQL replica server You can set configuration options required for a MySQL replica server to ensure a successful replication. Prerequisites The replica server is installed. The replica server has TLS set up . Important The source and replica certificates must be signed by the same certificate authority. Procedure Include the following options in the /etc/my.cnf.d/mysql-server.cnf file under the [mysqld] section: server-id= id The id must be unique. relay-log= path_to_replica_server_log The relay log is a set of log files created by the MySQL replica server during replication. log_bin= path_to_replica_sever_log This option defines a path to the binary log file of the MySQL replica server. For example: log_bin=/var/log/mysql/mysql-bin.log . This option is not required in a replica but strongly recommended. gtid_mode=ON This option enables global transaction identifiers (GTIDs) on the server. enforce-gtid-consistency=ON The server enforces GTID consistency by allowing execution of only statements that can be safely logged using a GTID. log-replica-updates=ON This option ensures that updates received from the source server are logged in the replica's binary log. skip-replica-start=ON This option ensures that the replica server does not start the replication threads when the replica server starts. Optional: binlog_do_db= db_name Use this option if you want to replicate only certain databases. To replicate more than one database, specify each of the databases separately: Optional: binlog_ignore_db= db_name Use this option to exclude a specific database from replication. Restart the mysqld service: 7.3.8.3. Creating a replication user on the MySQL source server You must create a replication user and grant this user permissions required for replication traffic. This procedure shows how to create a replication user with appropriate permissions. Execute these steps only on the source server. Prerequisites The source server is installed and configured as described in Configuring a MySQL source server . Procedure Create a replication user: Grant the user replication permissions: Reload the grant tables in the MySQL database: Set the source server to read-only state: 7.3.8.4. Connecting the replica server to the source server On the MySQL replica server, you must configure credentials and the address of the source server. Use the following procedure to implement the replica server. Prerequisites The source server is installed and configured as described in Configuring a MySQL source server . 
The replica server is installed and configured as described in Configuring a MySQL replica server . You have created a replication user. See Creating a replication user on the MySQL source server . Procedure Set the replica server to read-only state: Configure the replication source: Start the replica thread in the MySQL replica server: Unset the read-only state on both the source and replica servers: Optional: Inspect the status of the replica server for debugging purposes: Note If the replica server fails to start or connect, you can skip a certain number of events following the binary log file position displayed in the output of the SHOW MASTER STATUS command. For example, skip the first event from the defined position: and try to start the replica server again. Optional: Stop the replica thread in the replica server: 7.3.8.5. Verifying replication on a MySQL server After you have configured replication among multiple MySQL servers, you should verify that it works. Procedure Create an example database on the source server: Verify that the test_db_name database replicates on the replica server. Display status information about the binary log files of the MySQL server by executing the following command on either of the source or replica servers: The Executed_Gtid_Set column, which shows a set of GTIDs for transactions executed on the source, must not be empty. Note The same set of GTIDs is displayed in the Executed_Gtid_Set row when you use the SHOW REPLICA STATUS statement on the replica server. 7.3.8.5.1. Additional resources MySQL Replication documentation Replication with Global Transaction Identifiers 7.3.9. Developing MySQL client applications Red Hat recommends developing your MySQL client applications against the MariaDB client library. The communication protocol between client and server is compatible between MariaDB and MySQL . The MariaDB client library works for most common MySQL scenarios, with the exception of a limited number of features specific to the MySQL implementation. The development files and programs necessary to build applications against the MariaDB client library are provided by the mariadb-connector-c-devel package. Instead of using a direct library name, use the mariadb_config program, which is distributed in the mariadb-connector-c-devel package. This program ensures that the correct build flags are returned. 7.4. Using PostgreSQL The PostgreSQL server is an open source, robust, and highly extensible database server based on the SQL language. The PostgreSQL server provides an object-relational database system that can manage extensive datasets and a high number of concurrent users. For these reasons, PostgreSQL servers can be used in clusters to manage high amounts of data. The PostgreSQL server includes features for ensuring data integrity and for building fault-tolerant environments and applications. With the PostgreSQL server, you can extend a database with your own data types, custom functions, or code from different programming languages without the need to recompile the database. Learn how to install and configure PostgreSQL on a RHEL system, how to back up PostgreSQL data, and how to migrate from an earlier PostgreSQL version. 7.4.1.
Installing PostgreSQL In RHEL 8, the PostgreSQL server is available in several versions, each provided by a separate stream: PostgreSQL 10 - the default stream PostgreSQL 9.6 PostgreSQL 12 - available since RHEL 8.1.1 PostgreSQL 13 - available since RHEL 8.4 PostgreSQL 15 - available since RHEL 8.8 PostgreSQL 16 - available since RHEL 8.10 Note By design, it is impossible to install more than one version (stream) of the same module in parallel. Therefore, you must choose only one of the available streams from the postgresql module. You can use different versions of the PostgreSQL database server in containers, see Running multiple PostgreSQL versions in containers . To install PostgreSQL , use the following procedure. Procedure Install the PostgreSQL server packages by selecting a stream (version) from the postgresql module and specifying the server profile. For example: The postgres superuser is created automatically. Initialize the database cluster: Red Hat recommends storing the data in the default /var/lib/pgsql/data directory. Start the postgresql service: Enable the postgresql service to start at boot: For information about using module streams, see Installing, managing, and removing user-space components . Important If you want to upgrade from an earlier postgresql stream within RHEL 8, follow both procedures described in Switching to a later stream and in Migrating to a RHEL 8 version of PostgreSQL . 7.4.2. Running multiple PostgreSQL versions in containers To run different versions of PostgreSQL on the same host, run them in containers because you cannot install multiple versions (streams) of the same module in parallel. This procedure includes PostgreSQL 13 and PostgreSQL 15 as examples but you can use any PostgreSQL container version available in the Red Hat Ecosystem Catalog. Prerequisites The container-tools module is installed. Procedure Use your Red Hat Customer Portal account to authenticate to the registry.redhat.io registry: Skip this step if you are already logged in to the container registry. Run PostgreSQL 13 in a container: For more information about the usage of this container image, see the Red Hat Ecosystem Catalog . Run PostgreSQL 15 in a container: For more information about the usage of this container image, see the Red Hat Ecosystem Catalog . Run PostgreSQL 16 in a container: For more information about the usage of this container image, see the Red Hat Ecosystem Catalog . Note The container names and host ports of the two database servers must differ. To ensure that clients can access the database server on the network, open the host ports in the firewall: Verification Display information about running containers: Connect to the database server and log in as root: Additional resources Building, running, and managing containers Browse containers in the Red Hat Ecosystem Catalog 7.4.3. Creating PostgreSQL users PostgreSQL users are of the following types: The postgres UNIX system user - should be used only to run the PostgreSQL server and client applications, such as pg_dump . Do not use the postgres system user for any interactive work on PostgreSQL administration, such as database creation and user management. A database superuser - the default postgres PostgreSQL superuser is not related to the postgres system user. You can limit access of the postgres superuser in the pg_hba.conf file, otherwise no other permission limitations exist. You can also create other database superusers. 
A role with specific database access permissions: A database user - has a permission to log in by default A group of users - enables managing permissions for the group as a whole Roles can own database objects (for example, tables and functions) and can assign object privileges to other roles using SQL commands. Standard database management privileges include SELECT , INSERT , UPDATE , DELETE , TRUNCATE , REFERENCES , TRIGGER , CREATE , CONNECT , TEMPORARY , EXECUTE , and USAGE . Role attributes are special privileges, such as LOGIN , SUPERUSER , CREATEDB , and CREATEROLE . Important Red Hat recommends performing most tasks as a role that is not a superuser. A common practice is to create a role that has the CREATEDB and CREATEROLE privileges and use this role for all routine management of databases and roles. Prerequisites The PostgreSQL server is installed. The database cluster is initialized. Procedure To create a user, set a password for the user, and assign the user the CREATEROLE and CREATEDB permissions: Replace mydbuser with the username and mypasswd with the user's password. Additional resources PostgreSQL database roles PostgreSQL privileges Configuring PostgreSQL Example 7.1. Initializing, creating, and connecting to a PostgreSQL database This example demonstrates how to initialize a PostgreSQL database, create a database user with routine database management privileges, and how to create a database that is accessible from any system account through the database user with management privileges. Install the PosgreSQL server: Initialize the database cluster: Set the password hashing algorithm to scram-sha-256 . In the /var/lib/pgsql/data/postgresql.conf file, change the following line: to: In the /var/lib/pgsql/data/pg_hba.conf file, change the following line for the IPv4 local connections: to: Start the postgresql service: Log in as the system user named postgres : Start the PostgreSQL interactive terminal: Optional: Obtain information about the current database connection: Create a user named mydbuser , set a password for mydbuser , and assign mydbuser the CREATEROLE and CREATEDB permissions: The mydbuser user now can perform routine database management operations: create databases and manage user indexes. Log out of the interactive terminal by using the \q meta command: Log out of the postgres user session: Log in to the PostgreSQL terminal as mydbuser , specify the hostname, and connect to the default postgres database, which was created during initialization: Create a database named mydatabase : Log out of the session: Connect to mydatabase as mydbuser : Optional: Obtain information about the current database connection: 7.4.4. Configuring PostgreSQL In a PostgreSQL database, all data and configuration files are stored in a single directory called a database cluster. Red Hat recommends storing all data, including configuration files, in the default /var/lib/pgsql/data/ directory. PostgreSQL configuration consists of the following files: postgresql.conf - is used for setting the database cluster parameters. postgresql.auto.conf - holds basic PostgreSQL settings similarly to postgresql.conf . However, this file is under the server control. It is edited by the ALTER SYSTEM queries, and cannot be edited manually. pg_ident.conf - is used for mapping user identities from external authentication mechanisms into the PostgreSQL user identities. pg_hba.conf - is used for configuring client authentication for PostgreSQL databases. 
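To give a sense of what these files contain, the following are short illustrative excerpts; the values are examples only, not recommended settings. A few lines from postgresql.conf :

listen_addresses = 'localhost'
port = 5432
max_connections = 100
shared_buffers = 128MB
password_encryption = scram-sha-256

A few lines from pg_hba.conf :

# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 peer
host    all       all   127.0.0.1/32  scram-sha-256
host    all       all   ::1/128       scram-sha-256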
To change the PostgreSQL configuration, use the following procedure. Procedure Edit the respective configuration file, for example, /var/lib/pgsql/data/postgresql.conf . Restart the postgresql service so that the changes become effective: Example 7.2. Configuring PostgreSQL database cluster parameters This example shows basic settings of the database cluster parameters in the /var/lib/pgsql/data/postgresql.conf file. Example 7.3. Setting client authentication in PostgreSQL This example demonstrates how to set client authentication in the /var/lib/pgsql/data/pg_hba.conf file. 7.4.5. Configuring TLS encryption on a PostgreSQL server By default, PostgreSQL uses unencrypted connections. For more secure connections, you can enable Transport Layer Security (TLS) support on the PostgreSQL server and configure your clients to establish encrypted connections. Prerequisites The PostgreSQL server is installed. The database cluster is initialized. Procedure Install the OpenSSL library: Generate a TLS certificate and a key: Replace dbhost.yourdomain.com with your database host and domain name. Copy your signed certificate and your private key to the required locations on the database server: Change the owner and group ownership of the signed certificate and your private key to the postgres user: Restrict the permissions for your private key so that it is readable only by the owner: Set the password hashing algorithm to scram-sha-256 by changing the following line in the /var/lib/pgsql/data/postgresql.conf file: to: Configure PostgreSQL to use SSL/TLS by changing the following line in the /var/lib/pgsql/data/postgresql.conf file: to: Restrict access to all databases to accept only connections from clients using TLS by changing the following line for the IPv4 local connections in the /var/lib/pgsql/data/pg_hba.conf file: to: Alternatively, you can restrict access for a single database and a user by adding the following new line: Replace mydatabase with the database name and mydbuser with the username. Make the changes effective by restarting the postgresql service: Verification To manually verify that the connection is encrypted: Connect to the PostgreSQL database as the mydbuser user, specify the hostname and the database name: Replace mydatabase with the database name and mydbuser with the username. Obtain information about the current database connection: You can write a simple application that verifies whether a connection to PostgreSQL is encrypted. This example demonstrates such an application written in C that uses the libpq client library, which is provided by the libpq-devel package: Replace mypassword with the password, mydatabase with the database name, and mydbuser with the username. Note You must load the pq libraries for compilation by using the -lpq option. For example, to compile the application by using the GCC compiler: where the source_file.c contains the example code above, and myapplication is the name of your application for verifying secured PostgreSQL connection. Example 7.4. Initializing, creating, and connecting to a PostgreSQL database using TLS encryption This example demonstrates how to initialize a PostgreSQL database, create a database user and a database, and how to connect to the database using a secured connection. Install the PosgreSQL server: Initialize the database cluster: Install the OpenSSL library: Generate a TLS certificate and a key: Replace dbhost.yourdomain.com with your database host and domain name. 
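For the certificate generation step, a self-signed certificate and key suitable for testing can be created with a command along the following lines; the subject and validity period are illustrative, and for production you would typically use a certificate issued by your CA instead:

# openssl req -new -x509 -days 365 -nodes -text -out server.crt -keyout server.key -subj "/CN=dbhost.yourdomain.com"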
Copy your signed certificate and your private key to the required locations on the database server: Change the owner and group ownership of the signed certificate and your private key to the postgres user: Restrict the permissions for your private key so that it is readable only by the owner: Set the password hashing algorithm to scram-sha-256 . In the /var/lib/pgsql/data/postgresql.conf file, change the following line: to: Configure PostgreSQL to use SSL/TLS. In the /var/lib/pgsql/data/postgresql.conf file, change the following line: to: Start the postgresql service: Log in as the system user named postgres : Start the PostgreSQL interactive terminal as the postgres user: Create a user named mydbuser and set a password for mydbuser : Create a database named mydatabase : Grant all permissions to the mydbuser user: Log out of the interactive terminal: Log out of the postgres user session: Restrict access to all databases to accept only connections from clients using TLS by changing the following line for the IPv4 local connections in the /var/lib/pgsql/data/pg_hba.conf file: to: Make the changes effective by restarting the postgresql service: Connect to the PostgreSQL database as the mydbuser user, specify the hostname and the database name: 7.4.6. Backing up PostgreSQL data To back up PostgreSQL data, use one of the following approaches: SQL dump File system level backup Continuous archiving 7.4.6.1. Backing up PostgreSQL data with an SQL dump The SQL dump method is based on generating a dump file with SQL commands. When a dump is uploaded back to the database server, it recreates the database in the same state as it was at the time of the dump. The SQL dump is ensured by the following PostgreSQL client applications: pg_dump dumps a single database without cluster-wide information about roles or tablespaces pg_dumpall dumps each database in a given cluster and preserves cluster-wide data, such as role and tablespace definitions. By default, the pg_dump and pg_dumpall commands write their results into the standard output. To store the dump in a file, redirect the output to an SQL file. The resulting SQL file can be either in a text format or in other formats that allow for parallelism and for more detailed control of object restoration. You can perform the SQL dump from any remote host that has access to the database. 7.4.6.1.1. Advantages and disadvantages of an SQL dump An SQL dump has the following advantages compared to other PostgreSQL backup methods: An SQL dump is the only PostgreSQL backup method that is not server version-specific. The output of the pg_dump utility can be reloaded into later versions of PostgreSQL , which is not possible for file system level backups or continuous archiving. An SQL dump is the only method that works when transferring a database to a different machine architecture, such as going from a 32-bit to a 64-bit server. An SQL dump provides internally consistent dumps. A dump represents a snapshot of the database at the time pg_dump began running. The pg_dump utility does not block other operations on the database when it is running. A disadvantage of an SQL dump is that it takes more time compared to file system level backup. 7.4.6.1.2. Performing an SQL dump using pg_dump To dump a single database without cluster-wide information, use the pg_dump utility. Prerequisites You must have read access to all tables that you want to dump. 
To dump the entire database, you must run the commands as the postgres superuser or a user with database administrator privileges. Procedure Dump a database without cluster-wide information: To specify which database server pg_dump will contact, use the following command-line options: The -h option to define the host. The default host is either the local host or what is specified by the PGHOST environment variable. The -p option to define the port. The default port is indicated by the PGPORT environment variable or the compiled-in default. 7.4.6.1.3. Performing an SQL dump using pg_dumpall To dump each database in a given database cluster and to preserve cluster-wide data, use the pg_dumpall utility. Prerequisites You must run the commands as the postgres superuser or a user with database administrator privileges. Procedure Dump all databases in the database cluster and preserve cluster-wide data: To specify which database server pg_dumpall will contact, use the following command-line options: The -h option to define the host. The default host is either the local host or what is specified by the PGHOST environment variable. The -p option to define the port. The default port is indicated by the PGPORT environment variable or the compiled-in default. The -l option to define the default database. This option enables you to choose a default database different from the postgres database created automatically during initialization. 7.4.6.1.4. Restoring a database dumped using pg_dump To restore a database from an SQL dump that you dumped using the pg_dump utility, follow the steps below. Prerequisites You must run the commands as the postgres superuser or a user with database administrator privileges. Procedure Create a new database: Verify that all users who own objects or were granted permissions on objects in the dumped database already exist. If such users do not exist, the restore fails to recreate the objects with the original ownership and permissions. Run the psql utility to restore a text file dump created by the pg_dump utility: where dumpfile is the output of the pg_dump command. To restore a non-text file dump, use the pg_restore utility instead: 7.4.6.1.5. Restoring databases dumped using pg_dumpall To restore data from a database cluster that you dumped using the pg_dumpall utility, follow the steps below. Prerequisites You must run the commands as the postgres superuser or a user with database administrator privileges. Procedure Ensure that all users who own objects or were granted permissions on objects in the dumped databases already exist. If such users do not exist, the restore fails to recreate the objects with the original ownership and permissions. Run the psql utility to restore a text file dump created by the pg_dumpall utility: where dumpfile is the output of the pg_dumpall command. 7.4.6.1.6. Performing an SQL dump of a database on another server Dumping a database directly from one server to another is possible because pg_dump and psql can write to and read from pipes. Procedure To dump a database from one server to another, run: 7.4.6.1.7. Handling SQL errors during restore By default, psql continues to execute if an SQL error occurs, causing the database to restore only partially. To change the default behavior, use one of the following approaches when restoring a dump. Prerequisites You must run the commands as the postgres superuser or a user with database administrator privileges. 
Procedure Make psql exit with an exit status of 3 if an SQL error occurs by setting the ON_ERROR_STOP variable: Specify that the whole dump is restored as a single transaction so that the restore is either fully completed or canceled. When restoring a text file dump using the psql utility: When restoring a non-text file dump using the pg_restore utility: Note that when using this approach, even a minor error can cancel a restore operation that has already run for many hours. Additional resources PostgreSQL Documentation - SQL dump 7.4.6.2. Backing up PostgreSQL data with a file system level backup To create a file system level backup, copy PostgreSQL database files to another location. For example, you can use any of the following approaches: Create an archive file using the tar utility. Copy the files to a different location using the rsync utility. Create a consistent snapshot of the data directory. 7.4.6.2.1. Advantages and limitations of file system backing up File system level backing up has the following advantage compared to other PostgreSQL backup methods: File system level backing up is usually faster than an SQL dump. File system level backing up has the following limitations compared to other PostgreSQL backup methods: This backing up method is not suitable when you want to upgrade from RHEL 7 to RHEL 8 and migrate your data to the upgraded system. File system level backup is specific to an architecture and a RHEL major version. You can restore your data on your RHEL 7 system if the upgrade is not successful but you cannot restore the data on a RHEL 8 system. The database server must be shut down before backing up and restoring data. Backing up and restoring certain individual files or tables is impossible. Backing up a file system works only for complete backing up and restoring of an entire database cluster. 7.4.6.2.2. Performing file system level backing up To perform file system level backing up, use the following procedure. Procedure Choose the location of a database cluster and initialize this cluster: Stop the postgresql service: Use any method to create a file system backup, for example a tar archive: Start the postgresql service: Additional resources PostgreSQL Documentation - file system level backup 7.4.6.3. Backing up PostgreSQL data by continuous archiving PostgreSQL records every change made to the database's data files into a write ahead log (WAL) file that is available in the pg_wal/ subdirectory of the cluster's data directory. This log is intended primarily for a crash recovery. After a crash, the log entries made since the last checkpoint can be used for restoring the database to a consistency. The continuous archiving method, also known as an online backup, combines the WAL files with a copy of the database cluster in the form of a base backup performed on a running server or a file system level backup. If a database recovery is needed, you can restore the database from the copy of the database cluster and then replay log from the backed up WAL files to bring the system to the current state. With the continuous archiving method, you must keep a continuous sequence of all archived WAL files that extends at minimum back to the start time of your last base backup. Therefore the ideal frequency of base backups depends on: The storage volume available for archived WAL files. The maximum possible duration of data recovery in situations when recovery is necessary. 
In cases with a long period since the last backup, the system replays more WAL segments, and the recovery therefore takes more time. Note You cannot use pg_dump and pg_dumpall SQL dumps as a part of a continuous archiving backup solution. SQL dumps produce logical backups and do not contain enough information to be used by a WAL replay. 7.4.6.3.1. Advantages and disadvantages of continuous archiving Continuous archiving has the following advantages compared to other PostgreSQL backup methods: With the continuous backup method, it is possible to use a base backup that is not entirely consistent because any internal inconsistency in the backup is corrected by the log replay. Therefore you can perform a base backup on a running PostgreSQL server. A file system snapshot is not needed; tar or a similar archiving utility is sufficient. Continuous backup can be achieved by continuing to archive the WAL files because the sequence of WAL files for the log replay can be indefinitely long. This is particularly valuable for large databases. Continuous backup supports point-in-time recovery. It is not necessary to replay the WAL entries to the end. The replay can be stopped at any point and the database can be restored to its state at any time since the base backup was taken. If the series of WAL files are continuously available to another machine that has been loaded with the same base backup file, it is possible to restore the other machine with a nearly-current copy of the database at any point. Continuous archiving has the following disadvantages compared to other PostgreSQL backup methods: Continuous backup method supports only restoration of an entire database cluster, not a subset. Continuous backup requires extensive archival storage. 7.4.6.3.2. Setting up WAL archiving A running PostgreSQL server produces a sequence of write ahead log (WAL) records. The server physically divides this sequence into WAL segment files, which are given numeric names that reflect their position in the WAL sequence. Without WAL archiving, the segment files are reused and renamed to higher segment numbers. When archiving WAL data, the contents of each segment file are captured and saved at a new location before the segment file is reused. You have multiple options where to save the content, such as an NFS-mounted directory on another machine, a tape drive, or a CD. Note that WAL records do not include changes to configuration files. To enable WAL archiving, use the following procedure. Procedure In the /var/lib/pgsql/data/postgresql.conf file: Set the wal_level configuration parameter to replica or higher. Set the archive_mode parameter to on . Specify the shell command in the archive_command configuration parameter. You can use the cp command, another command, or a shell script. Note The archive command is executed only on completed WAL segments. A server that generates little WAL traffic can have a substantial delay between the completion of a transaction and its safe recording in archive storage. To limit how old unarchived data can be, you can: Set the archive_timeout parameter to force the server to switch to a new WAL segment file with a given frequency. Use the pg_switch_wal parameter to force a segment switch to ensure that a transaction is archived immediately after it finishes. Example 7.5. Shell command for archiving WAL segments This example shows a simple shell command you can set in the archive_command configuration parameter. 
The following command copies a completed segment file to the required location: where the %p parameter is replaced by the relative path to the file to archive and the %f parameter is replaced by the file name. This command copies archivable WAL segments to the /mnt/server/archivedir/ directory. After replacing the %p and %f parameters, the executed command looks as follows: A similar command is generated for each new file that is archived. Restart the postgresql service to enable the changes: Test your archive command and ensure it does not overwrite an existing file and that it returns a nonzero exit status if it fails. To protect your data, ensure that the segment files are archived into a directory that does not have group or world read access. Additional resources PostgreSQL 16 Documentation 7.4.6.3.3. Making a base backup You can create a base backup in several ways. The simplest way of performing a base backup is using the pg_basebackup utility on a running PostgreSQL server. The base backup process creates a backup history file that is stored into the WAL archive area and is named after the first WAL segment file that you need for the base backup. The backup history file is a small text file containing the starting and ending times, and WAL segments of the backup. If you used the label string to identify the associated dump file, you can use the backup history file to determine which dump file to restore. Note Consider keeping several backup sets to be certain that you can recover your data. Prerequisites You must run the commands as the postgres superuser, a user with database administrator privileges, or another user with at least REPLICATION permissions. You must keep all the WAL segment files generated during and after the base backup. Procedure Use the pg_basebackup utility to perform the base backup. To create a base backup as individual files (plain format): Replace backup_directory with your chosen backup location. If you use tablespaces and perform the base backup on the same host as the server, you must also use the --tablespace-mapping option, otherwise the backup will fail upon an attempt to write the backup to the same location. To create a base backup as a tar archive ( tar and compressed format): Replace backup_directory with your chosen backup location. To restore such data, you must manually extract the files in the correct locations. To specify which database server pg_basebackup will contact, use the following command-line options: The -h option to define the host. The default host is either the local host or a host specified by the PGHOST environment variable. The -p option to define the port. The default port is indicated by the PGPORT environment variable or the compiled-in default. After the base backup process is complete, safely archive the copy of the database cluster and the WAL segment files used during the backup, which are specified in the backup history file. Delete WAL segments numerically lower than the WAL segment files used in the base backup because these are older than the base backup and no longer needed for a restore. Additional resources PostgreSQL Documentation - base backup PostgreSQL Documentation - pg_basebackup utility 7.4.6.3.4. Restoring the database using a continuous archive backup To restore a database using a continuous backup, use the following procedure. Procedure Stop the server: Copy the necessary data to a temporary location. Preferably, copy the whole cluster data directory and any tablespaces. 
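For example, the cluster data directory could be preserved with a command similar to the following; the destination path is an arbitrary choice:

# cp -a /var/lib/pgsql/data /var/tmp/pgsql-data-backup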
Note that this requires enough free space on your system to hold two copies of your existing database. If you do not have enough space, save the contents of the cluster's pg_wal directory, which can contain logs that were not archived before the system went down. Remove all existing files and subdirectories under the cluster data directory and under the root directories of any tablespaces you are using. Restore the database files from your base backup. Ensure that: The files are restored with the correct ownership (the database system user, not root ). The files are restored with the correct permissions. The symbolic links in the pg_tblspc/ subdirectory are restored correctly. Remove any files present in the pg_wal/ subdirectory. These files resulted from the base backup and are therefore obsolete. If you did not archive pg_wal/ , recreate it with proper permissions. Copy any unarchived WAL segment files that you saved in step 2 into pg_wal/ . Create the recovery.conf recovery command file in the cluster data directory and specify the shell command in the restore_command configuration parameter. You can use the cp command, another command, or a shell script. For example: Start the server: The server will enter the recovery mode and proceed to read through the archived WAL files that it needs. If the recovery is terminated due to an external error, the server can be restarted and it will continue the recovery. When the recovery process is completed, the server renames recovery.conf to recovery.done . This prevents the server from accidental re-entering the recovery mode after it starts normal database operations. Check the contents of the database to verify that the database has recovered into the required state. If the database has not recovered into the required state, return to step 1. If the database has recovered into the required state, allow the users to connect by restoring the client authentication configuration in the pg_hba.conf file. 7.4.6.3.4.1. Additional resources Continuous archiving method 7.4.7. Migrating to a RHEL 8 version of PostgreSQL Red Hat Enterprise Linux 7 contains PostgreSQL 9.2 as the default version of the PostgreSQL server. In addition, several versions of PostgreSQL are provided as Software Collections for RHEL 7. Red Hat Enterprise Linux 8 provides PostgreSQL 10 as the default postgresql stream, PostgreSQL 9.6 , PostgreSQL 12 , PostgreSQL 13 , PostgreSQL 15 , and PostgreSQL 16 . Users of PostgreSQL on Red Hat Enterprise Linux can use two migration paths for the database files: Fast upgrade using the pg_upgrade utility Dump and restore upgrade The fast upgrade method is quicker than the dump and restore process. However, in certain cases, the fast upgrade does not work, and you can only use the dump and restore process. Such cases include: Cross-architecture upgrades Systems using the plpython or plpython2 extensions. Note that RHEL 8 AppStream repository includes only the postgresql-plpython3 package, not the postgresql-plpython2 package. Fast upgrade is not supported for migration from Red Hat Software Collections versions of PostgreSQL . As a prerequisite for migration to a later version of PostgreSQL , back up all your PostgreSQL databases. Dumping the databases and performing backup of the SQL files is required for the dump and restore process and recommended for the fast upgrade method. 
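For example, a complete logical backup of a running server can be taken with the pg_dumpall utility; the output file name is an arbitrary choice:

# su - postgres -c "pg_dumpall > /var/lib/pgsql/backup_all_databases.sql"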
Before migrating to a later version of PostgreSQL , see the upstream compatibility notes for the version of PostgreSQL to which you want to migrate, and for all skipped PostgreSQL versions between the one you are migrating from and the target version. 7.4.7.1. Notable differences between PostgreSQL 15 and PostgreSQL 16 PostgreSQL 16 introduced the following notable changes. The postmasters binary is no longer available PostgreSQL is no longer distributed with the postmaster binary. Users who start the postgresql server by using the provided systemd unit file (the systemctl start postgres.service command) are not affected by this change. If you previously started the postgresql server directly through the postmaster binary, you must now use the postgres binary instead. Documentation is no longer packaged PostgreSQL no longer provides documentation in PDF format within the package. Use the online documentation instead. 7.4.7.2. Notable differences between PostgreSQL 13 and PostgreSQL 15 PostgreSQL 15 introduced the following backwards incompatible changes. Default permissions of the public schema The default permissions of the public schema have been modified in PostgreSQL 15 . Newly created users need to grant permission explicitly by using the GRANT ALL ON SCHEMA public TO myuser; command. The following example works in PostgreSQL 13 and earlier: The following example works in PostgreSQL 15 and later: Note Ensure that the mydbuser access is configured appropriately in the pg_hba.conf file. See Creating PostgreSQL users for more information. PQsendQuery() no longer supported in pipeline mode Since PostgreSQL 15 , the libpq PQsendQuery() function is no longer supported in pipeline mode. Modify affected applications to use the PQsendQueryParams() function instead. 7.4.7.3. Fast upgrade using the pg_upgrade utility During a fast upgrade, you must copy binary data files to the /var/lib/pgsql/data/ directory and use the pg_upgrade utility. You can use this method for migrating data: From the RHEL 7 system version of PostgreSQL 9.2 to the RHEL 8 version of PostgreSQL 10 From the RHEL 8 version of PostgreSQL 10 to a RHEL version of PostgreSQL 12 From the RHEL 8 version of PostgreSQL 12 to a RHEL version of PostgreSQL 13 From a RHEL version of PostgreSQL 13 to a RHEL version of PostgreSQL 15 From a RHEL version of PostgreSQL 15 to a RHEL version of PostgreSQL 16 If you want to upgrade from an earlier postgresql stream within RHEL 8, follow the procedure described in Switching to a later stream and then migrate your PostgreSQL data. For migrating between other combinations of PostgreSQL versions within RHEL, and for migration from the Red Hat Software Collections versions of PostgreSQL to RHEL, use Dump and restore upgrade . The following procedure describes migration from the RHEL 7 system version of PostgreSQL 9.2 to a RHEL 8 version of PostgreSQL using the fast upgrade method. Prerequisites Before performing the upgrade, back up all your data stored in the PostgreSQL databases. By default, all data is stored in the /var/lib/pgsql/data/ directory on both the RHEL 7 and RHEL 8 systems. Procedure On the RHEL 8 system, enable the stream (version) to which you want to migrate: Replace stream with the selected version of the PostgreSQL server. You can omit this step if you want to use the default stream, which provides PostgreSQL 10 . 
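For illustration, enabling a non-default stream, such as PostgreSQL 12 , would typically look like the following; with the default PostgreSQL 10 stream you can skip this command entirely:

# yum module enable postgresql:12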
On the RHEL 8 system, install the postgresql-server and postgresql-upgrade packages: Optionally, if you used any PostgreSQL server modules on RHEL 7, install them also on the RHEL 8 system in two versions, compiled both against PostgreSQL 9.2 (installed as the postgresql-upgrade package) and the target version of PostgreSQL (installed as the postgresql-server package). If you need to compile a third-party PostgreSQL server module, build it both against the postgresql-devel and postgresql-upgrade-devel packages. Check the following items: Basic configuration: On the RHEL 8 system, check whether your server uses the default /var/lib/pgsql/data directory and the database is correctly initialized and enabled. In addition, the data files must be stored in the same path as mentioned in the /usr/lib/systemd/system/postgresql.service file. PostgreSQL servers: Your system can run multiple PostgreSQL servers. Ensure that the data directories for all these servers are handled independently. PostgreSQL server modules: Ensure that the PostgreSQL server modules that you used on RHEL 7 are installed on your RHEL 8 system as well. Note that plugins are installed in the /usr/lib64/pgsql/ directory (or in the /usr/lib/pgsql/ directory on 32-bit systems). Ensure that the postgresql service is not running on either of the source and target systems at the time of copying data. Copy the database files from the source location to the /var/lib/pgsql/data/ directory on the RHEL 8 system. Perform the upgrade process by running the following command as the PostgreSQL user: This launches the pg_upgrade process in the background. In case of failure, postgresql-setup provides an informative error message. Copy the prior configuration from /var/lib/pgsql/data-old to the new cluster. Note that the fast upgrade does not reuse the prior configuration in the newer data stack and the configuration is generated from scratch. If you want to combine the old and new configurations manually, use the *.conf files in the data directories. Start the new PostgreSQL server: Analyze the new database cluster. For PostgreSQL 13 or earlier: For PostgreSQL 15 or later: Note You may need to use ALTER COLLATION name REFRESH VERSION , see the upstream documentation for details. If you want the new PostgreSQL server to be automatically started on boot, run: 7.4.7.4. Dump and restore upgrade When using the dump and restore upgrade, you must dump all databases contents into an SQL file dump file. Note that the dump and restore upgrade is slower than the fast upgrade method and it might require some manual fixing in the generated SQL file. You can use this method for migrating data from: The Red Hat Enterprise Linux 7 system version of PostgreSQL 9.2 Any earlier Red Hat Enterprise Linux 8 version of PostgreSQL An earlier or equal version of PostgreSQL from Red Hat Software Collections: PostgreSQL 9.2 (no longer supported) PostgreSQL 9.4 (no longer supported) PostgreSQL 9.6 (no longer supported) PostgreSQL 10 PostgreSQL 12 PostgreSQL 13 On RHEL 7 and RHEL 8 systems, PostgreSQL data is stored in the /var/lib/pgsql/data/ directory by default. In case of Red Hat Software Collections versions of PostgreSQL , the default data directory is /var/opt/rh/collection_name/lib/pgsql/data/ (with the exception of postgresql92 , which uses the /opt/rh/postgresql92/root/var/lib/pgsql/data/ directory). 
If you want to upgrade from an earlier postgresql stream within RHEL 8, follow the procedure described in Switching to a later stream and then migrate your PostgreSQL data. To perform the dump and restore upgrade, change the user to root . The following procedure describes migration from the RHEL 7 system version of PostgreSQL 9.2 to a RHEL 8 version of PostgreSQL . Procedure On your RHEL 7 system, start the PostgreSQL 9.2 server: On the RHEL 7 system, dump all databases contents into the pgdump_file.sql file: Verify that the databases were dumped correctly: As a result, the path to the dumped sql file is displayed: /var/lib/pgsql/pgdump_file.sql . On the RHEL 8 system, enable the stream (version) to which you wish to migrate: Replace stream with the selected version of the PostgreSQL server. You can omit this step if you want to use the default stream, which provides PostgreSQL 10 . On the RHEL 8 system, install the postgresql-server package: Optionally, if you used any PostgreSQL server modules on RHEL 7, install them also on the RHEL 8 system. If you need to compile a third-party PostgreSQL server module, build it against the postgresql-devel package. On the RHEL 8 system, initialize the data directory for the new PostgreSQL server: On the RHEL 8 system, copy the pgdump_file.sql into the PostgreSQL home directory, and check that the file was copied correctly: Copy the configuration files from the RHEL 7 system: The configuration files to be copied are: /var/lib/pgsql/data/pg_hba.conf /var/lib/pgsql/data/pg_ident.conf /var/lib/pgsql/data/postgresql.conf On the RHEL 8 system, start the new PostgreSQL server: On the RHEL 8 system, import data from the dumped sql file: Note When upgrading from a Red Hat Software Collections version of PostgreSQL , adjust the commands to include scl enable collection_name. For example, to dump data from the rh-postgresql96 Software Collection, use the following command: 7.4.8. Installing and configuring a PostgreSQL database server by using RHEL system roles You can use the postgresql RHEL system role to automate the installation and management of the PostgreSQL database server. By default, this role also optimizes PostgreSQL by automatically configuring performance-related settings in the PostgreSQL service configuration files. 7.4.8.1. Configuring PostgreSQL with an existing TLS certificate by using the postgresql RHEL system role If your application requires a PostgreSQL database server, you can configure this service with TLS encryption to enable secure communication between the application and the database. By using the postgresql RHEL system role, you can automate this process and remotely install and configure PostgreSQL with TLS encryption. In the playbook, you can use an existing private key and a TLS certificate that was issued by a certificate authority (CA). Note The postgresql role cannot open ports in the firewalld service. To allow remote access to the PostgreSQL server, add a task that uses the firewall RHEL system role to your playbook. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
Both the private key of the managed node and the certificate are stored on the control node in the following files: Private key: ~/ <FQDN_of_the_managed_node> .key Certificate: ~/ <FQDN_of_the_managed_node> .crt Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Installing and configuring PostgreSQL hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create directory for TLS certificate and key ansible.builtin.file: path: /etc/postgresql/ state: directory mode: 755 - name: Copy CA certificate ansible.builtin.copy: src: "~/{{ inventory_hostname }}.crt" dest: "/etc/postgresql/server.crt" - name: Copy private key ansible.builtin.copy: src: "~/{{ inventory_hostname }}.key" dest: "/etc/postgresql/server.key" mode: 0600 - name: PostgreSQL with an existing private key and certificate ansible.builtin.include_role: name: rhel-system-roles.postgresql vars: postgresql_version: "16" postgresql_password: "{{ pwd }}" postgresql_ssl_enable: true postgresql_cert_name: "/etc/postgresql/server" postgresql_server_conf: listen_addresses: "'*'" password_encryption: scram-sha-256 postgresql_pg_hba_conf: - type: local database: all user: all auth_method: scram-sha-256 - type: hostssl database: all user: all address: '127.0.0.1/32' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '::1/128' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '192.0.2.0/24' auth_method: scram-sha-256 - name: Open the PostgresQL port in firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - service: postgresql state: enabled The settings specified in the example playbook include the following: postgresql_version: <version> Sets the version of PostgreSQL to install. The version you can set depends on the PostgreSQL versions that are available in Red Hat Enterprise Linux running on the managed node. You cannot upgrade or downgrade PostgreSQL by changing the postgresql_version variable and running the playbook again. postgresql_password: <password> Sets the password of the postgres database superuser. You cannot change the password by changing the postgresql_password variable and running the playbook again. postgresql_cert_name: <private_key_and_certificate_file> Defines the path and base name of both the certificate and private key on the managed node without .crt and key suffixes. During the PostgreSQL configuration, the role creates symbolic links in the /var/lib/pgsql/data/ directory that refer to these files. The certificate and private key must exist locally on the managed node. You can use tasks with the ansible.builtin.copy module to transfer the files from the control node to the managed node, as shown in the playbook. postgresql_server_conf: <list_of_settings> Defines postgresql.conf settings the role should set. The role adds these settings to the /etc/postgresql/system-roles.conf file and includes this file at the end of /var/lib/pgsql/data/postgresql.conf . Consequently, settings from the postgresql_server_conf variable override settings in /var/lib/pgsql/data/postgresql.conf . 
Re-running the playbook with different settings in postgresql_server_conf overwrites the /etc/postgresql/system-roles.conf file with the new settings. postgresql_pg_hba_conf: <list_of_authentication_entries> Configures client authentication entries in the /var/lib/pgsql/data/pg_hba.conf file. For details, see see the PostgreSQL documentation. The example allows the following connections to PostgreSQL: Unencrypted connections by using local UNIX domain sockets. TLS-encrypted connections to the IPv4 and IPv6 localhost addresses. TLS-encrypted connections from the 192.0.2.0/24 subnet. Note that access from remote addresses is only possible if you also configure the listen_addresses setting in the postgresql_server_conf variable appropriately. Re-running the playbook with different settings in postgresql_pg_hba_conf overwrites the /var/lib/pgsql/data/pg_hba.conf file with the new settings. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Use the postgres super user to connect to a PostgreSQL server and execute the \conninfo meta command: If the output displays a TLS protocol version and cipher details, the connection works and TLS encryption is enabled. Additional resources /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file /usr/share/doc/rhel-system-roles/postgresql/ directory Ansible vault 7.4.8.2. Configuring PostgreSQL with a TLS certificate issued from IdM by using the postgresql RHEL system role If your application requires a PostgreSQL database server, you can configure the PostgreSQL service with TLS encryption to enable secure communication between the application and the database. If the PostgreSQL host is a member of a Red Hat Enterprise Linux Identity Management (IdM) domain, the certmonger service can manage the certificate request and future renewals. By using the postgresql RHEL system role, you can automate this process. You can remotely install and configure PostgreSQL with TLS encryption, and the postgresql role uses the certificate RHEL system role to configure certmonger and request a certificate from IdM. Note The postgresql role cannot open ports in the firewalld service. To allow remote access to the PostgreSQL server, add a task to your playbook that uses the firewall RHEL system role. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You enrolled the managed node in an IdM domain. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. 
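The vault file itself is created with the ansible-vault utility, as in the previous procedure; the invocation (also shown in the command listing at the end of this document) is:

ansible-vault create vault.yml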
Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Installing and configuring PostgreSQL hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: PostgreSQL with certificates issued by IdM ansible.builtin.include_role: name: rhel-system-roles.postgresql vars: postgresql_version: "16" postgresql_password: "{{ pwd }}" postgresql_ssl_enable: true postgresql_certificates: - name: postgresql_cert dns: "{{ inventory_hostname }}" ca: ipa principal: "postgresql/{{ inventory_hostname }}@EXAMPLE.COM" postgresql_server_conf: listen_addresses: "'*'" password_encryption: scram-sha-256 postgresql_pg_hba_conf: - type: local database: all user: all auth_method: scram-sha-256 - type: hostssl database: all user: all address: '127.0.0.1/32' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '::1/128' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '192.0.2.0/24' auth_method: scram-sha-256 - name: Open the PostgresQL port in firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - service: postgresql state: enabled The settings specified in the example playbook include the following: postgresql_version: <version> Sets the version of PostgreSQL to install. The version you can set depends on the PostgreSQL versions that are available in Red Hat Enterprise Linux running on the managed node. You cannot upgrade or downgrade PostgreSQL by changing the postgresql_version variable and running the playbook again. postgresql_password: <password> Sets the password of the postgres database superuser. You cannot change the password by changing the postgresql_password variable and running the playbook again. postgresql_certificates: <certificate_role_settings> A list of YAML dictionaries with settings for the certificate role. postgresql_server_conf: <list_of_settings> Defines postgresql.conf settings you want the role to set. The role adds these settings to the /etc/postgresql/system-roles.conf file and includes this file at the end of /var/lib/pgsql/data/postgresql.conf . Consequently, settings from the postgresql_server_conf variable override settings in /var/lib/pgsql/data/postgresql.conf . Re-running the playbook with different settings in postgresql_server_conf overwrites the /etc/postgresql/system-roles.conf file with the new settings. postgresql_pg_hba_conf: <list_of_authentication_entries> Configures client authentication entries in the /var/lib/pgsql/data/pg_hba.conf file. For details, see see the PostgreSQL documentation. The example allows the following connections to PostgreSQL: Unencrypted connections by using local UNIX domain sockets. TLS-encrypted connections to the IPv4 and IPv6 localhost addresses. TLS-encrypted connections from the 192.0.2.0/24 subnet. Note that access from remote addresses is only possible if you also configure the listen_addresses setting in the postgresql_server_conf variable appropriately. Re-running the playbook with different settings in postgresql_pg_hba_conf overwrites the /var/lib/pgsql/data/pg_hba.conf file with the new settings. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. 
Run the playbook: Verification Use the postgres super user to connect to a PostgreSQL server and execute the \conninfo meta command: If the output displays a TLS protocol version and cipher details, the connection works and TLS encryption is enabled. Additional resources /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file /usr/share/doc/rhel-system-roles/postgresql/ directory Ansible vault
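As a recap of the validation, run, and verification steps that both procedures share (the full invocations appear in the command listing below; the IP address is an example host):

ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
ansible-playbook --ask-vault-pass ~/playbook.yml
psql "postgresql://[email protected]:5432" -c '\conninfo'   # should report an SSL connection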
[ "yum module install mariadb:10.3/server", "systemctl start mariadb.service", "systemctl enable mariadb.service", "mysql_secure_installation", "podman login registry.redhat.io", "podman run -d --name <container_name> -e MYSQL_ROOT_PASSWORD= <mariadb_root_password> -p <host_port_1> :3306 rhel8/mariadb-103", "podman run -d --name <container_name> -e MYSQL_ROOT_PASSWORD= <mariadb_root_password> -p <host_port_2> :3306 rhel8/mariadb-105", "podman run -d --name <container_name> -e MYSQL_ROOT_PASSWORD= <mariadb_root_password> -p <host_port_3> :3306 rhel8/mariadb-1011", "firewall-cmd --permanent --add-port={ <host_port_1> /tcp, <host_port_2> /tcp, <host_port_3> /tcp...} firewall-cmd --reload", "podman ps", "mysql -u root -p -h localhost -P <host_port> --protocol tcp", "systemctl restart mariadb.service", "mv <path> /server.example.com.crt.pem /etc/pki/tls/certs/ mv <path> /ca.crt.pem /etc/pki/tls/certs/", "chmod 644 /etc/pki/tls/certs/server.example.com.crt.pem /etc/pki/tls/certs/ca.crt.pem", "mv <path> /server.example.com.key.pem /etc/pki/tls/private/", "chmod 640 /etc/pki/tls/private/server.example.com.key.pem chgrp mysql /etc/pki/tls/private/server.example.com.key.pem", "restorecon -Rv /etc/pki/tls/", "[mariadb] ssl_key = /etc/pki/tls/private/server.example.com.key.pem ssl_cert = /etc/pki/tls/certs/server.example.com.crt.pem ssl_ca = /etc/pki/tls/certs/ca.crt.pem", "ssl_crl = /etc/pki/tls/certs/example.crl.pem", "require_secure_transport = on", "tls_version = TLSv1.2 , TLSv1.3", "systemctl restart mariadb.service", "mysql -u root -p -e \"SHOW GLOBAL VARIABLES LIKE 'have_ssl';\" +---------------+-----------------+ | Variable_name | Value | +---------------+-----------------+ | have_ssl | YES | +---------------+-----------------+", "mysql -u root -p -e \"SHOW GLOBAL VARIABLES LIKE 'tls_version';\" +---------------+-----------------+ | Variable_name | Value | +---------------+-----------------+ | tls_version | TLSv1.2,TLSv1.3 | +---------------+-----------------+", "mysql -u root -p -h server.example.com", "MariaDB [(none)]> ALTER USER 'example '@ '%' REQUIRE SSL;", "mysql -u example -p -h server.example.com --ssl MariaDB [(none)]>", "mysql -u example -p -h server.example.com --skip-ssl ERROR 1045 (28000): Access denied for user 'example'@'server.example.com' (using password: YES)", "cp <path> /ca.crt.pem /etc/pki/ca-trust/source/anchors/", "chmod 644 /etc/pki/ca-trust/source/anchors/ca.crt.pem", "update-ca-trust", "[client-mariadb] ssl ssl-verify-server-cert", "mysql -u root -p -h server.example.com -e status SSL: Cipher in use is TLS_AES_256_GCM_SHA384", "mysql -u root -p -h localhost -e status ERROR 2026 (HY000): SSL connection error: Validation of SSL server certificate failed", "mysqldump [ options ] --databases db_name > backup-file.sql", "mysqldump [ options ] --databases db_name1 [ db_name2 ... ] > backup-file.sql", "mysqldump [ options ] --all-databases > backup-file.sql", "mysql < backup-file.sql", "mysql --host= remote_host < backup-file.sql", "mysqldump [ options ] db_name [ tbl_name ... 
] > backup-file.sql", "mysql db_name < backup-file.sql", "mysqldump --help", "yum install mariadb-backup", "mariabackup --backup --target-dir <backup_directory> --user <backup_user> --password <backup_passwd>", "[xtrabackup] user=myuser password=mypassword", "mariabackup --backup --target-dir <backup_directory>", "systemctl stop mariadb.service", "mariabackup --copy-back --target-dir=/var/mariadb/backup/", "mariabackup --move-back --target-dir=/var/mariadb/backup/", "chown -R mysql:mysql /var/lib/mysql/", "systemctl start mariadb.service", "systemctl stop mariadb.service", "cp -r /var/lib/mysql / backup-location", "cp -r /etc/my.cnf /etc/my.cnf.d / backup-location /configuration", "cp /var/log/mariadb/* /backup-location /logs", "systemctl start mariadb.service", "chown -R mysql:mysql /var/lib/mysql", "yum install mariadb-server", "systemctl stop mariadb.service", "restorecon -vr /var/lib/mysql", "systemctl start mariadb.service", "mysql_upgrade", "ln -s /dev/null USDHOME/.mysql_history", "systemctl stop mariadb.service", "yum distro-sync", "yum module reset mariadb", "yum module enable mariadb:10.5", "yum distro-sync", "systemctl start mariadb.service", "galera_new_cluster", "mariadb-upgrade", "mariadb-upgrade --skip-write-binlog", "systemctl stop mariadb.service", "yum distro-sync", "yum module reset mariadb", "yum module enable mariadb:10.11", "yum distro-sync", "systemctl start mariadb.service", "galera_new_cluster", "mariadb-upgrade", "mariadb-upgrade --skip-write-binlog", "TLS Web Server Authentication, TLS Web Client Authentication", "yum module install mariadb:10.3/galera", "gcomm://", "wsrep_cluster_address=\"gcomm://\"", "wsrep_cluster_address=\"gcomm://10.0.0.10\"", "wsrep_provider_options=\"socket.ssl_cert=/etc/pki/tls/certs/source.crt;socket.ssl_key=/etc/pki/tls/private/source.key;socket.ssl_ca=/etc/pki/tls/certs/ca.crt\"", "galera_new_cluster", "galera_new_cluster mariadb@node1", "systemctl start mariadb.service", "[mariadb] wsrep_cluster_address=\" gcomm://192.168.0.1 \"", "yum module install mysql:8.0/server", "systemctl start mysqld.service", "systemctl enable mysqld.service", "mysql_secure_installation", "podman login registry.redhat.io", "podman run -d --name <container_name> -e MYSQL_ROOT_PASSWORD= <mysql_root_password> -p <host_port_1> :3306 rhel8/mysql-80", "podman run -d --name <container_name> -e MYSQL_ROOT_PASSWORD= <mariadb_root_password> -p <host_port_2> :3306 rhel8/mariadb-105", "podman run -d --name <container_name> -e MYSQL_ROOT_PASSWORD= <mariadb_root_password> -p <host_port_3> :3306 rhel8/mariadb-1011", "firewall-cmd --permanent --add-port={ <host_port> /tcp, <host_port> /tcp,...} firewall-cmd --reload", "podman ps", "mysql -u root -p -h localhost -P <host_port> --protocol tcp", "systemctl restart mysqld.service", "mv <path> /server.example.com.crt.pem /etc/pki/tls/certs/ mv <path> /ca.crt.pem /etc/pki/tls/certs/", "chmod 644 /etc/pki/tls/certs/server.example.com.crt.pem /etc/pki/tls/certs/ca.crt.pem", "mv <path> /server.example.com.key.pem /etc/pki/tls/private/", "chmod 640 /etc/pki/tls/private/server.example.com.key.pem chgrp mysql /etc/pki/tls/private/server.example.com.key.pem", "restorecon -Rv /etc/pki/tls/", "[mysqld] ssl_key = /etc/pki/tls/private/server.example.com.key.pem ssl_cert = /etc/pki/tls/certs/server.example.com.crt.pem ssl_ca = /etc/pki/tls/certs/ca.crt.pem", "ssl_crl = /etc/pki/tls/certs/example.crl.pem", "require_secure_transport = on", "tls_version = TLSv1.2 , TLSv1.3", "systemctl restart mysqld.service", "mysql -u root -p -h 
<MySQL_server_hostname> -e \"SHOW session status LIKE 'Ssl_cipher';\" +---------------+------------------------+ | Variable_name | Value | +---------------+------------------------+ | Ssl_cipher | TLS_AES_256_GCM_SHA384 | +---------------+------------------------+", "mysql -u root -p -e \"SHOW GLOBAL VARIABLES LIKE 'tls_version';\" +---------------+-----------------+ | Variable_name | Value | +---------------+-----------------+ | tls_version | TLSv1.2,TLSv1.3 | +---------------+-----------------+", "mysql -u root -e \"SHOW GLOBAL VARIABLES WHERE Variable_name REGEXP '^ssl_ca|^ssl_cert|^ssl_key';\" +-----------------+-------------------------------------------------+ | Variable_name | Value | +-----------------+-------------------------------------------------+ | ssl_ca | /etc/pki/tls/certs/ca.crt.pem | | ssl_capath | | | ssl_cert | /etc/pki/tls/certs/server.example.com.crt.pem | | ssl_key | /etc/pki/tls/private/server.example.com.key.pem | +-----------------+-------------------------------------------------+", "mysql -u root -p -h server.example.com", "MySQL [(none)]> ALTER USER 'example '@ '%' REQUIRE SSL;", "mysql -u example -p -h server.example.com MySQL [(none)]>", "mysql -u example -p -h server.example.com --ssl-mode=DISABLED ERROR 1045 (28000): Access denied for user 'example'@'server.example.com' (using password: YES)", "[client] ssl-mode=VERIFY_IDENTITY ssl-ca=/etc/pki/tls/certs/ca.crt.pem", "mysql -u root -p -h server.example.com -e status SSL: Cipher in use is TLS_AES_256_GCM_SHA384", "mysql -u root -p -h localhost -e status ERROR 2026 (HY000): SSL connection error: error:0A000086:SSL routines::certificate verify failed", "mysqldump [ options ] --databases db_name > backup-file.sql", "mysqldump [ options ] --databases db_name1 [ db_name2 ...] > backup-file.sql", "mysqldump [ options ] --all-databases > backup-file.sql", "mysql < backup-file.sql", "mysql --host= remote_host < backup-file.sql", "mysqldump [ options ] db_name [ tbl_name ... 
] > backup-file.sql", "mysql db_name < backup-file.sql", "mysqldump --help", "systemctl stop mysqld.service", "cp -r /var/lib/mysql / backup-location", "cp -r /etc/my.cnf /etc/my.cnf.d / backup-location /configuration", "cp /var/log/mysql/* /backup-location /logs", "systemctl start mysqld.service", "chown -R mysql:mysql /var/lib/mysql", "yum install mysql-server", "systemctl stop mysqld.service", "restorecon -vr /var/lib/mysql", "chown -R mysql:mysql /var/lib/mysql", "systemctl start mysqld.service", "binlog_do_db= db_name1 binlog_do_db= db_name2 binlog_do_db= db_name3", "systemctl restart mysqld.service", "binlog_do_db= db_name1 binlog_do_db= db_name2 binlog_do_db= db_name3", "systemctl restart mysqld.service", "mysql> CREATE USER ' replication_user'@'replica_server_hostname ' IDENTIFIED WITH mysql_native_password BY ' password ';", "mysql> GRANT REPLICATION SLAVE ON *.* TO ' replication_user '@' replica_server_hostname ';", "mysql> FLUSH PRIVILEGES;", "mysql> SET @@GLOBAL.read_only = ON;", "mysql> SET @@GLOBAL.read_only = ON;", "mysql> CHANGE REPLICATION SOURCE TO SOURCE_HOST=' source_hostname ', SOURCE_USER=' replication_user ', SOURCE_PASSWORD=' password ', SOURCE_AUTO_POSITION=1, SOURCE_SSL=1, SOURCE_SSL_CA=' path_to_ca_on_source ', SOURCE_SSL_CAPATH=' path_to_directory_with_certificates ', SOURCE_SSL_CERT=' path_to_source_certificate ', SOURCE_SSL_KEY=' path_to_source_key ';", "mysql> START REPLICA;", "mysql> SET @@GLOBAL.read_only = OFF;", "mysql> SHOW REPLICA STATUS\\G;", "mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;", "mysql> STOP REPLICA;", "mysql> CREATE DATABASE test_db_name ;", "mysql> SHOW MASTER STATUS;", "yum module install postgresql:16/server", "postgresql-setup --initdb", "systemctl start postgresql.service", "systemctl enable postgresql.service", "podman login registry.redhat.io", "podman run -d --name <container_name> -e POSTGRESQL_USER= <user_name> -e POSTGRESQL_PASSWORD= <password> -e POSTGRESQL_DATABASE= <database_name> -p <host_port_1> :5432 rhel8/postgresql-13", "podman run -d --name <container_name> -e POSTGRESQL_USER= <user_name> -e POSTGRESQL_PASSWORD= <password> -e POSTGRESQL_DATABASE= <database_name> -p <host_port_2> :5432 rhel8/postgresql-15", "podman run -d --name <container_name> -e POSTGRESQL_USER= <user_name> -e POSTGRESQL_PASSWORD= <password> -e POSTGRESQL_DATABASE= <database_name> -p <host_port_3> :5432 rhel8/postgresql-16", "firewall-cmd --permanent --add-port={ <host_port_1> /tcp, <host_port_2> /tcp,...} firewall-cmd --reload", "podman ps", "psql -u postgres -p -h localhost -P <host_port> --protocol tcp", "postgres=# CREATE USER mydbuser WITH PASSWORD ' mypasswd ' CREATEROLE CREATEDB;", "yum module install postgresql:13/server", "postgresql-setup --initdb * Initializing database in '/var/lib/pgsql/data' * Initialized, logs are in /var/lib/pgsql/initdb_postgresql.log", "#password_encryption = md5 # md5 or scram-sha-256", "password_encryption = scram-sha-256", "host all all 127.0.0.1/32 ident", "host all all 127.0.0.1/32 scram-sha-256", "systemctl start postgresql.service", "su - postgres", "psql psql (13.7) Type \"help\" for help. postgres=#", "postgres=# \\conninfo You are connected to database \"postgres\" as user \"postgres\" via socket in \"/var/run/postgresql\" at port \"5432\".", "postgres=# CREATE USER mydbuser WITH PASSWORD 'mypasswd' CREATEROLE CREATEDB; CREATE ROLE", "postgres=# \\q", "logout", "psql -U mydbuser -h 127.0.0.1 -d postgres Password for user mydbuser: Type the password. psql (13.7) Type \"help\" for help. 
postgres=>", "postgres=> CREATE DATABASE mydatabase; CREATE DATABASE postgres=>", "postgres=# \\q", "psql -U mydbuser -h 127.0.0.1 -d mydatabase Password for user mydbuser: psql (13.7) Type \"help\" for help. mydatabase=>", "mydatabase=> \\conninfo You are connected to database \"mydatabase\" as user \"mydbuser\" on host \"127.0.0.1\" at port \"5432\".", "systemctl restart postgresql.service", "This is a comment log_connections = yes log_destination = 'syslog' search_path = '\"USDuser\", public' shared_buffers = 128MB password_encryption = scram-sha-256", "TYPE DATABASE USER ADDRESS METHOD local all all trust host postgres all 192.168.93.0/24 ident host all all .example.com scram-sha-256", "yum install openssl", "openssl req -new -x509 -days 365 -nodes -text -out server.crt -keyout server.key -subj \"/CN= dbhost.yourdomain.com \"", "cp server.{key,crt} /var/lib/pgsql/data/.", "chown postgres:postgres /var/lib/pgsql/data/server.{key,crt}", "chmod 0400 /var/lib/pgsql/data/server.key", "#password_encryption = md5 # md5 or scram-sha-256", "password_encryption = scram-sha-256", "#ssl = off", "ssl=on", "host all all 127.0.0.1/32 ident", "hostssl all all 127.0.0.1/32 scram-sha-256", "hostssl mydatabase mydbuser 127.0.0.1/32 scram-sha-256", "systemctl restart postgresql.service", "psql -U mydbuser -h 127.0.0.1 -d mydatabase Password for user mydbuser :", "mydbuser=> \\conninfo You are connected to database \"mydatabase\" as user \"mydbuser\" on host \"127.0.0.1\" at port \"5432\". SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)", "#include <stdio.h> #include <stdlib.h> #include <libpq-fe.h> int main(int argc, char* argv[]) { //Create connection PGconn* connection = PQconnectdb(\"hostaddr=127.0.0.1 password= mypassword port=5432 dbname= mydatabase user= mydbuser \"); if (PQstatus(connection) ==CONNECTION_BAD) { printf(\"Connection error\\n\"); PQfinish(connection); return -1; //Execution of the program will stop here } printf(\"Connection ok\\n\"); //Verify TLS if (PQsslInUse(connection)){ printf(\"TLS in use\\n\"); printf(\"%s\\n\", PQsslAttribute(connection,\"protocol\")); } //End connection PQfinish(connection); printf(\"Disconnected\\n\"); return 0; }", "gcc source_file.c -lpq -o myapplication", "yum module install postgresql:13/server", "postgresql-setup --initdb * Initializing database in '/var/lib/pgsql/data' * Initialized, logs are in /var/lib/pgsql/initdb_postgresql.log", "yum install openssl", "openssl req -new -x509 -days 365 -nodes -text -out server.crt -keyout server.key -subj \"/CN= dbhost.yourdomain.com \"", "cp server.{key,crt} /var/lib/pgsql/data/.", "chown postgres:postgres /var/lib/pgsql/data/server.{key,crt}", "chmod 0400 /var/lib/pgsql/data/server.key", "#password_encryption = md5 # md5 or scram-sha-256", "password_encryption = scram-sha-256", "#ssl = off", "ssl=on", "systemctl start postgresql.service", "su - postgres", "psql -U postgres psql (13.7) Type \"help\" for help. 
postgres=#", "postgres=# CREATE USER mydbuser WITH PASSWORD 'mypasswd'; CREATE ROLE postgres=#", "postgres=# CREATE DATABASE mydatabase; CREATE DATABASE postgres=#", "postgres=# GRANT ALL PRIVILEGES ON DATABASE mydatabase TO mydbuser; GRANT postgres=#", "postgres=# \\q", "logout", "host all all 127.0.0.1/32 ident", "hostssl all all 127.0.0.1/32 scram-sha-256", "systemctl restart postgresql.service", "psql -U mydbuser -h 127.0.0.1 -d mydatabase Password for user mydbuser: psql (13.7) SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) Type \"help\" for help. mydatabase=>", "pg_dump dbname > dumpfile", "pg_dumpall > dumpfile", "createdb dbname", "psql dbname < dumpfile", "pg_restore non-plain-text-file", "psql < dumpfile", "pg_dump -h host1 dbname | psql -h host2 dbname", "psql --set ON_ERROR_STOP=on dbname < dumpfile", "psql -1", "pg_restore -e", "postgresql-setup --initdb", "systemctl stop postgresql.service", "tar -cf backup.tar /var/lib/pgsql/data/", "systemctl start postgresql.service", "archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'", "test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065", "systemctl restart postgresql.service", "pg_basebackup -D backup_directory -Fp", "pg_basebackup -D backup_directory -Ft -z", "systemctl stop postgresql.service", "restore_command = 'cp /mnt/server/archivedir/%f \"%p\"'", "systemctl start postgresql.service", "postgres=# CREATE USER mydbuser; postgres=# \\c postgres mydbuser postgres=USD CREATE TABLE mytable (id int);", "postgres=# CREATE USER mydbuser; postgres=# GRANT ALL ON SCHEMA public TO mydbuser; postgres=# \\c postgres mydbuser postgres=USD CREATE TABLE mytable (id int);", "yum module enable postgresql: stream", "yum install postgresql-server postgresql-upgrade", "systemctl stop postgresql.service", "postgresql-setup --upgrade", "systemctl start postgresql.service", "su postgres -c '~/analyze_new_cluster.sh'", "su postgres -c 'vacuumdb --all --analyze-in-stages'", "systemctl enable postgresql.service", "systemctl start postgresql.service", "su - postgres -c \"pg_dumpall > ~/pgdump_file.sql\"", "su - postgres -c 'less \"USDHOME/pgdump_file.sql\"'", "yum module enable postgresql: stream", "yum install postgresql-server", "postgresql-setup --initdb", "su - postgres -c 'test -e \"USDHOME/pgdump_file.sql\" && echo exists'", "su - postgres -c 'ls -1 USDPGDATA/*.conf'", "systemctl start postgresql.service", "su - postgres -c 'psql -f ~/pgdump_file.sql postgres'", "su - postgres -c 'scl enable rh-postgresql96 \"pg_dumpall > ~/pgdump_file.sql\"'", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Installing and configuring PostgreSQL hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create directory for TLS certificate and key ansible.builtin.file: path: /etc/postgresql/ state: directory mode: 755 - name: Copy CA certificate ansible.builtin.copy: src: \"~/{{ inventory_hostname }}.crt\" dest: \"/etc/postgresql/server.crt\" - name: Copy private key ansible.builtin.copy: src: \"~/{{ inventory_hostname }}.key\" dest: \"/etc/postgresql/server.key\" mode: 0600 - name: PostgreSQL with an existing private key and certificate ansible.builtin.include_role: name: rhel-system-roles.postgresql vars: postgresql_version: \"16\" postgresql_password: \"{{ pwd }}\" 
postgresql_ssl_enable: true postgresql_cert_name: \"/etc/postgresql/server\" postgresql_server_conf: listen_addresses: \"'*'\" password_encryption: scram-sha-256 postgresql_pg_hba_conf: - type: local database: all user: all auth_method: scram-sha-256 - type: hostssl database: all user: all address: '127.0.0.1/32' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '::1/128' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '192.0.2.0/24' auth_method: scram-sha-256 - name: Open the PostgresQL port in firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - service: postgresql state: enabled", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "psql \"postgresql://[email protected]:5432\" -c '\\conninfo' Password for user postgres: You are connected to database \"postgres\" as user \"postgres\" on host \"192.0.2.1\" at port \"5432\". SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Installing and configuring PostgreSQL hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: PostgreSQL with certificates issued by IdM ansible.builtin.include_role: name: rhel-system-roles.postgresql vars: postgresql_version: \"16\" postgresql_password: \"{{ pwd }}\" postgresql_ssl_enable: true postgresql_certificates: - name: postgresql_cert dns: \"{{ inventory_hostname }}\" ca: ipa principal: \"postgresql/{{ inventory_hostname }}@EXAMPLE.COM\" postgresql_server_conf: listen_addresses: \"'*'\" password_encryption: scram-sha-256 postgresql_pg_hba_conf: - type: local database: all user: all auth_method: scram-sha-256 - type: hostssl database: all user: all address: '127.0.0.1/32' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '::1/128' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '192.0.2.0/24' auth_method: scram-sha-256 - name: Open the PostgresQL port in firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - service: postgresql state: enabled", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "psql \"postgresql://[email protected]:5432\" -c '\\conninfo' Password for user postgres: You are connected to database \"postgres\" as user \"postgres\" on host \"192.0.2.1\" at port \"5432\". SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/using-databases
17.2. Managing Disk Quotas
17.2. Managing Disk Quotas If quotas are implemented, they need some maintenance mostly in the form of watching to see if the quotas are exceeded and making sure the quotas are accurate. If users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator has a few choices to make depending on what type of users they are and how much disk space impacts their work. The administrator can either help the user determine how to use less disk space or increase the user's disk quota. 17.2.1. Enabling and Disabling It is possible to disable quotas without setting them to 0. To turn all user and group quotas off, use the following command: If neither the -u or -g options are specified, only the user quotas are disabled. If only -g is specified, only group quotas are disabled. The -v switch causes verbose status information to display as the command executes. To enable user and group quotas again, use the following command: To enable user and group quotas for all file systems, use the following command: If neither the -u or -g options are specified, only the user quotas are enabled. If only -g is specified, only group quotas are enabled. To enable quotas for a specific file system, such as /home , use the following command: Note The quotaon command is not always needed for XFS because it is performed automatically at mount time. Refer to the man page quotaon(8) for more information. 17.2.2. Reporting on Disk Quotas Creating a disk usage report entails running the repquota utility. Example 17.6. Output of the repquota Command For example, the command repquota /home produces this output: To view the disk usage report for all (option -a ) quota-enabled file systems, use the command: While the report is easy to read, a few points should be explained. The -- displayed after each user is a quick way to determine whether the block or inode limits have been exceeded. If either soft limit is exceeded, a + appears in place of the corresponding - ; the first - represents the block limit, and the second represents the inode limit. The grace columns are normally blank. If a soft limit has been exceeded, the column contains a time specification equal to the amount of time remaining on the grace period. If the grace period has expired, none appears in its place. 17.2.3. Keeping Quotas Accurate When a file system fails to unmount cleanly, for example due to a system crash, it is necessary to run the following command: However, quotacheck can be run on a regular basis, even if the system has not crashed. Safe methods for periodically running quotacheck include: Ensuring quotacheck runs on reboot Note This method works best for (busy) multiuser systems which are periodically rebooted. Save a shell script into the /etc/cron.daily/ or /etc/cron.weekly/ directory or schedule one using the following command: The crontab -e command contains the touch /forcequotacheck command. This creates an empty forcequotacheck file in the root directory, which the system init script looks for at boot time. If it is found, the init script runs quotacheck . Afterward, the init script removes the /forcequotacheck file; thus, scheduling this file to be created periodically with cron ensures that quotacheck is run during the reboot. For more information about cron , see man cron . 
Running quotacheck in single user mode An alternative way to safely run quotacheck is to boot the system into single-user mode to prevent the possibility of data corruption in quota files and run the following commands: Running quotacheck on a running system If necessary, it is possible to run quotacheck on a machine during a time when no users are logged in, and thus have no open files on the file system being checked. Run the command quotacheck -vug file_system ; this command will fail if quotacheck cannot remount the given file_system as read-only. Note that, following the check, the file system will be remounted read-write. Warning Running quotacheck on a live file system mounted read-write is not recommended due to the possibility of quota file corruption. See man cron for more information about configuring cron .
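As a concrete illustration of the single-user-mode method above, with /home standing in for the file system name, the check-and-re-enable sequence is:

quotaoff -vug /home
quotacheck -vug /home
quotaon -vug /home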
[ "quotaoff -vaug", "quotaon", "quotaon -vaug", "quotaon -vug /home", "*** Report for user quotas on device /dev/mapper/VolGroup00-LogVol02 Block grace time: 7days; Inode grace time: 7days Block limits File limits User used soft hard grace used soft hard grace ---------------------------------------------------------------------- root -- 36 0 0 4 0 0 kristin -- 540 0 0 125 0 0 testuser -- 440400 500000 550000 37418 0 0", "repquota -a", "quotacheck", "crontab -e", "quotaoff -vug / file_system", "quotacheck -vug / file_system", "quotaon -vug / file_system" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/s1-disk-quotas-managing
Chapter 5. Migration Toolkit for Virtualization 2.2
Chapter 5. Migration Toolkit for Virtualization 2.2 You can migrate virtual machines (VMs) from VMware vSphere or Red Hat Virtualization to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV). The release notes describe technical changes, new features and enhancements, and known issues. 5.1. Technical changes This release has the following technical changes: Setting the precopy time interval for warm migration You can set the time interval between snapshots taken during the precopy stage of warm migration. 5.2. New features and enhancements This release has the following features and improvements: Creating validation rules You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego , the Open Policy Agent native query language. Downloading logs by using the web console You can download logs for a migration plan or a migrated VM by using the MTV web console. Duplicating a migration plan by using the web console You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, in order to edit the copy and run as a new migration plan. Archiving a migration plan by using the web console You can archive a migration plan by using the MTV web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived. 5.3. Known issues This release has the following known issues: Certain Validation service issues do not block migration Certain Validation service issues, which are marked as Critical and display the assessment text, The VM will not be migrated , do not block migration. ( BZ#2025977 ) The following Validation service assessments do not block migration: Table 5.1. Issues that do not block migration Assessment Result The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported). The migrated VM will have a virtio disk if the source interface is not recognized. The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported). The migrated VM will have a virtio NIC if the source interface is not recognized. The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization. The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly. One or more of the VM's disks has an illegal or locked status condition. The migration will proceed but the disk transfer is likely to fail. The VM has a disk with a storage type other than image , and this is not currently supported by OpenShift Virtualization. The migration will proceed but the disk transfer is likely to fail. The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization. The migration will proceed but the disk transfer is likely to fail. The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization. The migrated VM will not have USB devices. The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization. The migrated VM will not have a watchdog device. The VM's status is not up or down . The migration will proceed but it might hang if the VM cannot be powered off. 
QEMU guest agent is not installed on migrated VMs The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. ( BZ#2018062 ) Missing resource causes error message in current.log file If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable. The following error appears in the missing resource's current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. ( BZ#2023260 ) Importer pod log is unavailable after warm migration Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. ( BZ#2016290 ) As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage. This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected. Deleting migration plan does not remove temporary resources. Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. ( BZ#2018974 ) You must archive a migration plan before deleting it in order to clean up the temporary resources. Unclear error status message for VM with no operating system The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. ( BZ#2008846 ) Network, storage, and VM referenced by name in the Plan CR are not displayed in the web console. If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the MTV web console. The migration plan cannot be edited or duplicated. ( BZ#1986020 ) Log archive file includes logs of a deleted migration plan or VM If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the MTV web console might include the logs of the deleted migration plan or VM. ( BZ#2023764 ) If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR If you delete a target VirtualMachine CR during the Convert image to kubevirt step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found . However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. ( BZ#2031529 )
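To restate the importer pod workaround above as a command (run it while the precopy stage is still in progress, because the pod is removed afterward):

oc logs -f <cdi-importer_pod>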
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/release_notes/rn-22_release-notes
2.6. Storage Metadata Versions in Red Hat Virtualization
2.6. Storage Metadata Versions in Red Hat Virtualization Red Hat Virtualization stores information about storage domains as metadata on the storage domains themselves. Each major release of Red Hat Virtualization has seen improved implementations of storage metadata. V1 metadata (Red Hat Virtualization 2.x series) Each storage domain contains metadata describing its own structure, and all of the names of physical volumes that are used to back virtual disks. Master domains additionally contain metadata for all the domains and physical volume names in the storage pool. The total size of this metadata is limited to 2 KB, limiting the number of storage domains that can be in a pool. Template and virtual machine base images are read only. V1 metadata is applicable to NFS, iSCSI, and FC storage domains. V2 metadata (Red Hat Enterprise Virtualization 3.0) All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual disk volumes is still stored in a logical volume on the domains. Physical volume names are no longer included in the metadata. Template and virtual machine base images are read only. V2 metadata is applicable to iSCSI and FC storage domains. V3 metadata (Red Hat Enterprise Virtualization 3.1 and later) All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual disk volumes is still stored in a logical volume on the domains. Virtual machine and template base images are no longer read only. This change enables live snapshots, live storage migration, and clone from snapshot. Support for Unicode metadata is added, for non-English volume names. V3 metadata is applicable to NFS, GlusterFS, POSIX, iSCSI, and FC storage domains. V4 metadata (Red Hat Virtualization 4.1 and later) Support for QCOW2 compat levels - the QCOW image format includes a version number to allow introducing new features that change the image format so that it is incompatible with earlier versions. Newer QEMU versions (1.7 and above) support QCOW2 version 3, which is not backwards compatible, but introduces improvements such as zero clusters and improved performance. A new xleases volume to support VM leases - this feature adds the ability to acquire a lease per virtual machine on shared storage without attaching the lease to a virtual machine disk. A VM lease offers two important capabilities: Avoiding split-brain. Starting a VM on another host if the original host becomes non-responsive, which improves the availability of HA VMs. V5 metadata (Red Hat Virtualization 4.3 and later) Support for 4K (4096 byte) block storage. Support for variable SANLOCK alignments. Support for new properties: BLOCK_SIZE - stores the block size of the storage domain in bytes. ALIGNMENT - determines the formatting and size of the xlease volume (1MB to 8MB). Determined by the maximum number of hosts to be supported (value provided by the user) and the disk block size. For example: a 512b block size and support for 2000 hosts results in a 1MB xlease volume. A 4K block size with 2000 hosts results in an 8MB xlease volume. The default value of maximum hosts is 250, resulting in an xlease volume of 1MB for 4K disks. Deprecated properties: The LOGBLKSIZE , PHYBLKSIZE , MTIME , and POOL_UUID fields were removed from the storage domain metadata. The SIZE (size in blocks) field was replaced by CAP (size in bytes). Note You cannot boot from a 4K format disk, as the boot disk always uses a 512 byte emulation. 
The NFS format always uses 512 bytes.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/storage_metadata_versions_in_red_hat_enterprise_virtualization
14.7. Creating and Formatting New Images or Devices
14.7. Creating and Formatting New Images or Devices Create a new disk image named filename with the given size and format. If a base image is specified with -o backing_file= filename , the new image records only the differences from the base image. The backing file is not modified unless you use the commit command. No size needs to be specified in this case.
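For example, an overlay image that records only the differences from a base image could be created as follows (file names are illustrative; no size is given because it is taken from the backing file):

qemu-img create -f qcow2 -o backing_file=rhel7-base.qcow2 rhel7-guest.qcow2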
[ "qemu-img create [-f format ] [-o options ] filename [ size ]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-creating_and_formatting_new_images_or_devices
Chapter 3. Known issues
Chapter 3. Known issues ENTMQIC-1980 - Symbolic ports in HTTP listeners do not work When configuring a listener in the router with the http option enabled (for console or WebSocket access), the port attribute must be expressed numerically. Symbolic port names do not work with HTTP listeners. If a listener is configured as: It should be changed to:
[ "listener { port: amqp http: yes }", "listener { port: 5672 http: yes }" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_interconnect_1.9/known_issues
Chapter 11. Automatically building Dockerfiles with Build workers
Chapter 11. Automatically building Dockerfiles with Build workers Red Hat Quay supports building Dockerfiles using a set of worker nodes on OpenShift Container Platform or Kubernetes. Build triggers, such as GitHub webhooks, can be configured to automatically build new versions of your repositories when new code is committed. This document shows you how to enable Builds with your Red Hat Quay installation, and set up one or more OpenShift Container Platform or Kubernetes clusters to accept Builds from Red Hat Quay. 11.1. Setting up Red Hat Quay Builders with OpenShift Container Platform You must pre-configure Red Hat Quay Builders prior to using them with OpenShift Container Platform. 11.1.1. Configuring the OpenShift Container Platform TLS component The tls component allows you to control TLS configuration. Note Red Hat Quay does not support Builders when the TLS component is managed by the Red Hat Quay Operator. If you set tls to unmanaged , you supply your own ssl.cert and ssl.key files. In this instance, if you want your cluster to support Builders, you must add both the Quay route and the Builder route name to the SAN list in the certificate; alternatively you can use a wildcard. To add the builder route, use the following format: [quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name] 11.1.2. Preparing OpenShift Container Platform for Red Hat Quay Builders Prepare Red Hat Quay Builders for OpenShift Container Platform by using the following procedure. Prerequisites You have configured the OpenShift Container Platform TLS component. Procedure Enter the following command to create a project where Builds will be run, for example, builder : USD oc new-project builder Create a new ServiceAccount in the builder namespace by entering the following command: USD oc create sa -n builder quay-builder Enter the following command to grant a user the edit role within the builder namespace: USD oc policy add-role-to-user -n builder edit system:serviceaccount:builder:quay-builder Enter the following command to retrieve a token associated with the quay-builder service account in the builder namespace. This token is used to authenticate and interact with the OpenShift Container Platform cluster's API server. USD oc sa get-token -n builder quay-builder Identify the URL for the OpenShift Container Platform cluster's API server. This can be found in the OpenShift Container Platform Web Console. Identify a worker node label to be used when scheduling Build jobs. Because Build pods need to run on bare metal worker nodes, typically these are identified with specific labels. Check with your cluster administrator to determine exactly which node label should be used. Optional. If the cluster is using a self-signed certificate, you must get the Kube API Server's certificate authority (CA) to add to Red Hat Quay's extra certificates. Enter the following command to obtain the name of the secret containing the CA: USD oc get sa openshift-apiserver-sa --namespace=openshift-apiserver -o json | jq '.secrets[] | select(.name | contains("openshift-apiserver-sa-token"))'.name Obtain the ca.crt key value from the secret in the OpenShift Container Platform Web Console. The value begins with "-----BEGIN CERTIFICATE-----". Import the CA to Red Hat Quay. Ensure that the name of this file matches K8S_API_TLS_CA .
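A hedged command-line alternative to copying the CA from the Web Console (the secret name comes from the previous command's output, so treat this as an illustrative sketch rather than the documented procedure):

USD oc get secret <openshift-apiserver-sa-token-secret> -n openshift-apiserver -o jsonpath='{.data.ca\.crt}' | base64 -d > build_cluster.crt

The build_cluster.crt file name matches the K8S_API_TLS_CA value ( /conf/stack/extra_ca_certs/build_cluster.crt ) used in the Builder configuration later in this chapter.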
Create the following SecurityContextConstraints resource for the ServiceAccount : apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: quay-builder priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny volumes: - '*' allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - '*' allowedUnsafeSysctls: - '*' defaultAddCapabilities: null fsGroup: type: RunAsAny --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: quay-builder-scc namespace: builder rules: - apiGroups: - security.openshift.io resourceNames: - quay-builder resources: - securitycontextconstraints verbs: - use --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: quay-builder-scc namespace: builder subjects: - kind: ServiceAccount name: quay-builder roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: quay-builder-scc 11.1.3. Configuring Red Hat Quay Builders Use the following procedure to enable Red Hat Quay Builders. Procedure Ensure that your Red Hat Quay config.yaml file has Builds enabled, for example: FEATURE_BUILD_SUPPORT: True Add the following information to your Red Hat Quay config.yaml file, replacing each value with information that is relevant to your specific installation: BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ ORCHESTRATOR: REDIS_HOST: quay-redis-host REDIS_PASSWORD: quay-redis-password REDIS_SSL: true REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetes BUILDER_NAMESPACE: builder K8S_API_SERVER: api.openshift.somehost.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G CONTAINER_CPU_LIMITS: 300m CONTAINER_MEMORY_REQUEST: 1G CONTAINER_CPU_REQUEST: 300m NODE_SELECTOR_LABEL_KEY: beta.kubernetes.io/instance-type NODE_SELECTOR_LABEL_VALUE: n1-standard-4 CONTAINER_RUNTIME: podman SERVICE_ACCOUNT_NAME: ***** SERVICE_ACCOUNT_TOKEN: ***** QUAY_USERNAME: quay-username QUAY_PASSWORD: quay-password WORKER_IMAGE: <registry>/quay-quay-builder WORKER_TAG: some_tag BUILDER_VM_CONTAINER_IMAGE: <registry>/quay-quay-builder-qemu-rhcos:v3.4.0 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 SSH_AUTHORIZED_KEYS: - ssh-rsa 12345 [email protected] - ssh-rsa 67890 [email protected] HTTP_PROXY: <http://10.0.0.1:80> HTTPS_PROXY: <http://10.0.0.1:80> NO_PROXY: <hostname.example.com> For more information about each configuration field, see 11.2. OpenShift Container Platform Routes limitations The following limitations apply when you are using the Red Hat Quay Operator on OpenShift Container Platform with a managed route component: Currently, OpenShift Container Platform Routes are only able to serve traffic to a single port. Additional steps are required to set up Red Hat Quay Builds. Ensure that your kubectl or oc CLI tool is configured to work with the cluster where the Red Hat Quay Operator is installed and that your QuayRegistry exists; the QuayRegistry does not have to be on the same bare metal cluster where Builders run. Ensure that HTTP/2 ingress is enabled on the OpenShift cluster by following these steps . 
The Red Hat Quay Operator creates a Route resource that directs gRPC traffic to the Build manager server running inside of the existing Quay pod, or pods. If you want to use a custom hostname, or a subdomain like <builder-registry.example.com> , ensure that you create a CNAME record with your DNS provider that points to the status.ingress[0].host of the created Route resource. For example: kubectl get -n <namespace> route <quayregistry-name>-quay-builder -o jsonpath={.status.ingress[0].host} Using the OpenShift Container Platform UI or CLI, update the Secret referenced by spec.configBundleSecret of the QuayRegistry with the Build cluster CA certificate. Name the key extra_ca_cert_build_cluster.cert . Update the config.yaml file entry with the correct values referenced in the Builder configuration that you created when you configured Red Hat Quay Builders, and add the BUILDMAN_HOSTNAME configuration field: BUILDMAN_HOSTNAME: <build-manager-hostname> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 600 ORCHESTRATOR: REDIS_HOST: <quay_redis_host> REDIS_PASSWORD: <quay_redis_password> REDIS_SSL: true REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetes BUILDER_NAMESPACE: builder ... 1 The externally accessible server hostname which the build jobs use to communicate back to the Build manager. Default is the same as SERVER_HOSTNAME . For OpenShift Route , it is either status.ingress[0].host or the CNAME entry if using a custom hostname. BUILDMAN_HOSTNAME must include the port number, for example, somehost:443 for an OpenShift Container Platform Route, as the gRPC client used to communicate with the build manager does not infer any port if omitted. 11.3. Troubleshooting Builds The Builder instances started by the Build manager are ephemeral. This means that they will either get shut down by Red Hat Quay on timeouts or failure, or garbage collected by the control plane (EC2/K8s). To obtain the Build logs, you must do so while the Builds are running. 11.3.1. DEBUG config flag The DEBUG flag can be set to true in order to prevent the Builder instances from getting cleaned up after completion or failure. For example: EXECUTORS: - EXECUTOR: ec2 DEBUG: true ... - EXECUTOR: kubernetes DEBUG: true ... When set to true , the debug feature prevents the Build nodes from shutting down after the quay-builder service is done or fails. It also prevents the Build manager from cleaning up the instances by terminating EC2 instances or deleting Kubernetes jobs. This allows debugging Builder node issues. Debugging should not be enabled in a production environment. The lifetime service still exists; for example, the instance still shuts down after approximately two hours. When this happens, EC2 instances are terminated, and Kubernetes jobs are completed. Enabling debug also affects the ALLOWED_WORKER_COUNT , because the unterminated instances and jobs still count toward the total number of running workers. As a result, the existing Builder workers must be manually deleted if ALLOWED_WORKER_COUNT is reached to be able to schedule new Builds. 11.3.2. Troubleshooting OpenShift Container Platform and Kubernetes Builds Use the following procedure to troubleshoot OpenShift Container Platform and Kubernetes Builds.
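The procedure refers to <builder_pod> , the name of a running Builder pod. If you do not know the pod name, one way to find it, assuming Builds run in the builder namespace configured earlier in this chapter, is to list the pods in that namespace while a Build is in progress:

USD oc get pods -n builder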
Procedure Create a port forwarding tunnel between your local machine and the Builder pod running on either an OpenShift Container Platform cluster or a Kubernetes cluster by entering the following command: USD oc port-forward <builder_pod> 9999:2222 Establish an SSH connection to the remote host using a specified SSH key and port, for example: USD ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost Obtain the quay-builder service logs by entering the following commands: USD systemctl status quay-builder USD journalctl -f -u quay-builder 11.4. Setting up GitHub builds If your organization plans to have Builds triggered by pushes to GitHub or GitHub Enterprise, continue with Creating an OAuth application in GitHub .
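As a preview of that setup, GitHub-triggered Builds are ultimately enabled through the Red Hat Quay config.yaml file. The following snippet is an illustrative sketch only; the field names and values shown here are assumptions based on a typical Red Hat Quay GitHub OAuth configuration and must be verified against the linked procedure before use:

FEATURE_GITHUB_BUILD: true
GITHUB_TRIGGER_CONFIG:
  GITHUB_ENDPOINT: https://github.com/
  CLIENT_ID: <github_oauth_client_id>
  CLIENT_SECRET: <github_oauth_client_secret>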
[ "[quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]", "oc new-project builder", "oc create sa -n builder quay-builder", "oc policy add-role-to-user -n builder edit system:serviceaccount:builder:quay-builder", "oc sa get-token -n builder quay-builder", "oc get sa openshift-apiserver-sa --namespace=openshift-apiserver -o json | jq '.secrets[] | select(.name | contains(\"openshift-apiserver-sa-token\"))'.name", "apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: quay-builder priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny volumes: - '*' allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - '*' allowedUnsafeSysctls: - '*' defaultAddCapabilities: null fsGroup: type: RunAsAny --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: quay-builder-scc namespace: builder rules: - apiGroups: - security.openshift.io resourceNames: - quay-builder resources: - securitycontextconstraints verbs: - use --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: quay-builder-scc namespace: builder subjects: - kind: ServiceAccount name: quay-builder roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: quay-builder-scc", "FEATURE_BUILD_SUPPORT: True", "BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ ORCHESTRATOR: REDIS_HOST: quay-redis-host REDIS_PASSWORD: quay-redis-password REDIS_SSL: true REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetes BUILDER_NAMESPACE: builder K8S_API_SERVER: api.openshift.somehost.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G CONTAINER_CPU_LIMITS: 300m CONTAINER_MEMORY_REQUEST: 1G CONTAINER_CPU_REQUEST: 300m NODE_SELECTOR_LABEL_KEY: beta.kubernetes.io/instance-type NODE_SELECTOR_LABEL_VALUE: n1-standard-4 CONTAINER_RUNTIME: podman SERVICE_ACCOUNT_NAME: ***** SERVICE_ACCOUNT_TOKEN: ***** QUAY_USERNAME: quay-username QUAY_PASSWORD: quay-password WORKER_IMAGE: <registry>/quay-quay-builder WORKER_TAG: some_tag BUILDER_VM_CONTAINER_IMAGE: <registry>/quay-quay-builder-qemu-rhcos:v3.4.0 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 SSH_AUTHORIZED_KEYS: - ssh-rsa 12345 [email protected] - ssh-rsa 67890 [email protected] HTTP_PROXY: <http://10.0.0.1:80> HTTPS_PROXY: <http://10.0.0.1:80> NO_PROXY: <hostname.example.com>", "kubectl get -n <namespace> route <quayregistry-name>-quay-builder -o jsonpath={.status.ingress[0].host}", "BUILDMAN_HOSTNAME: <build-manager-hostname> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 600 ORCHESTRATOR: REDIS_HOST: <quay_redis_host REDIS_PASSWORD: <quay_redis_password> REDIS_SSL: true REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetes BUILDER_NAMESPACE: builder", "EXECUTORS: - EXECUTOR: ec2 DEBUG: true - EXECUTOR: kubernetes DEBUG: true", "oc port-forward <builder_pod> 9999:2222", "ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost", "systemctl status quay-builder", "journalctl -f -u quay-builder" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/use_red_hat_quay/build-support
Chapter 1. Red Hat Quay permissions model
Chapter 1. Red Hat Quay permissions model Red Hat Quay's permission model provides fine-grained access control over repositories and the content of those repositories, helping ensure secure collaboration and automation. Red Hat Quay administrators can grant users and robot accounts one of the following levels of access: Read : Allows users, robots, and teams to pull images. Write : Allows users, robots, and teams to push images. Admin : Provides users, robots, and teams administrative privileges. Note Administrative users can delegate new permissions for existing users and teams, change existing permissions, and revoke permissions when necessary. Collectively, these levels of access provide users or robot accounts the ability to perform specific tasks, like pulling images, pushing new versions of an image into the registry, or managing the settings of a repository. These permissions can be delegated across the entire organization and on specific repositories. For example, Read permissions can be set to a specific team within the organization, while Admin permissions can be given to all users across all repositories within the organization. 1.1. Red Hat Quay teams overview In Red Hat Quay, a team is a group of users with shared permissions, allowing for efficient management and collaboration on projects. Teams can help streamline access control and project management within organizations and repositories. They can be assigned designated permissions and help ensure that members have the appropriate level of access to their repositories based on their roles and responsibilities. 1.1.1. Setting a team role by using the UI After you have created a team, you can set the role of that team within the Organization. Prerequisites You have created a team. Procedure On the Red Hat Quay landing page, click the name of your Organization. In the navigation pane, click Teams and Membership . Select the TEAM ROLE drop-down menu. For the selected team, choose one of the following roles: Admin . Full administrative access to the organization, including the ability to create teams, add members, and set permissions. Member . Inherits all permissions set for the team. Creator . All member permissions, plus the ability to create new repositories. 1.1.1.1. Managing team members and repository permissions On the Teams and membership page of your organization, you can also manage team members and set repository permissions. Click the kebab menu, and select one of the following options: Manage Team Members . On this page, you can view all members, team members, robot accounts, or users who have been invited. You can also add a new team member by clicking Add new member . Set repository permissions . On this page, you can set the repository permissions to one of the following: None . Team members have no permission to the repository. Read . Team members can view and pull from the repository. Write . Team members can read (pull) from and write (push) to the repository. Admin . Full access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. Delete . This popup window allows you to delete the team by clicking Delete . 1.1.2. Setting the role of a team within an organization by using the API Use the following procedure to view and set the role of a team within an organization using the API. Prerequisites You have Created an OAuth access token .
You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following GET /api/v1/organization/{orgname}/team/{teamname}/permissions command to return a list of repository permissions for the organization's team. Note that your team must have been added to a repository for this command to return information. USD curl -X GET \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions" Example output {"permissions": [{"repository": {"name": "api-repo", "is_public": true}, "role": "admin"}]} You can create or update a team within an organization to have a specified role of admin , member , or creator using the PUT /api/v1/organization/{orgname}/team/{teamname} command. For example: USD curl -X PUT \ -H "Authorization: Bearer <your_access_token>" \ -H "Content-Type: application/json" \ -d '{ "role": "<role>" }' \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>" Example output {"name": "testteam", "description": "", "can_view": true, "role": "creator", "avatar": {"name": "testteam", "hash": "827f8c5762148d7e85402495b126e0a18b9b168170416ed04b49aae551099dc8", "color": "#ff7f0e", "kind": "team"}, "new_team": false} 1.2. Creating and managing default permissions by using the UI Default permissions define permissions that should be granted automatically to a repository when it is created, in addition to the default of the repository's creator. Permissions are assigned based on the user who created the repository. Use the following procedure to create default permissions using the Red Hat Quay v2 UI. Procedure Click the name of an organization. Click Default permissions . Click Create default permissions . A toggle drawer appears. Select either Anyone or Specific user to create a default permission when a repository is created. If selecting Anyone , the following information must be provided: Applied to . Search, invite, or add a user/robot/team. Permission . Set the permission to one of Read , Write , or Admin . If selecting Specific user , the following information must be provided: Repository creator . Provide either a user or robot account. Applied to . Provide a username, robot account, or team name. Permission . Set the permission to one of Read , Write , or Admin . Click Create default permission . A confirmation box appears, returning the following alert: Successfully created default permission for creator . 1.3. Creating and managing default permissions by using the API Use the following procedures to manage default permissions using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. 
Procedure Enter the following command to create a default permission with the POST /api/v1/organization/{orgname}/prototypes endpoint: USD curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" --data '{ "role": "<admin_read_or_write>", "delegate": { "name": "<username>", "kind": "user" }, "activating_user": { "name": "<robot_name>" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes Example output {"activating_user": {"name": "test-org+test", "is_robot": true, "kind": "user", "is_org_member": true, "avatar": {"name": "test-org+test", "hash": "aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370", "color": "#8c564b", "kind": "robot"}}, "delegate": {"name": "testuser", "is_robot": false, "kind": "user", "is_org_member": false, "avatar": {"name": "testuser", "hash": "f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a", "color": "#6b6ecf", "kind": "user"}}, "role": "admin", "id": "977dc2bc-bc75-411d-82b3-604e5b79a493"} Enter the following command to update a default permission using the PUT /api/v1/organization/{orgname}/prototypes/{prototypeid} endpoint, for example, if you want to change the permission type. You must include the ID that was returned when you created the policy. USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "role": "write" }' \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid> Example output {"activating_user": {"name": "test-org+test", "is_robot": true, "kind": "user", "is_org_member": true, "avatar": {"name": "test-org+test", "hash": "aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370", "color": "#8c564b", "kind": "robot"}}, "delegate": {"name": "testuser", "is_robot": false, "kind": "user", "is_org_member": false, "avatar": {"name": "testuser", "hash": "f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a", "color": "#6b6ecf", "kind": "user"}}, "role": "write", "id": "977dc2bc-bc75-411d-82b3-604e5b79a493"} You can delete the permission by entering the DELETE /api/v1/organization/{orgname}/prototypes/{prototypeid} command: curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id> This command does not return an output. Instead, you can obtain a list of all permissions by entering the GET /api/v1/organization/{orgname}/prototypes command: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes Example output {"prototypes": []} 1.4. Adjusting access settings for a repository by using the UI Use the following procedure to adjust access settings for a user or robot account for a repository using the v2 UI. Prerequisites You have created a user account or robot account. Procedure Log into Red Hat Quay. On the v2 UI, click Repositories . Click the name of a repository, for example, quayadmin/busybox . Click the Settings tab. Optional. Click User and robot permissions . You can adjust the settings for a user or robot account by clicking the dropdown menu option under Permissions . You can change the settings to Read , Write , or Admin . Read . The User or Robot Account can view and pull from the repository. Write . 
The User or Robot Account can read (pull) from and write (push) to the repository. Admin . The User or Robot account has access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. 1.5. Adjusting access settings for a repository by using the API Use the following procedure to adjust access settings for a user or robot account for a repository by using the API. Prerequisites You have created a user account or robot account. You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following PUT /api/v1/repository/{repository}/permissions/user/{username} command to change the permissions of a user: USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{"role": "admin"}' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username> Example output {"role": "admin", "name": "quayadmin+test", "is_robot": true, "avatar": {"name": "quayadmin+test", "hash": "ca9afae0a9d3ca322fc8a7a866e8476dd6c98de543decd186ae090e420a88feb", "color": "#8c564b", "kind": "robot"}} To delete the current permission, you can enter the DELETE /api/v1/repository/{repository}/permissions/user/{username} command: USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username> This command does not return any output in the CLI. Instead, you can check that the permissions were deleted by entering the GET /api/v1/repository/{repository}/permissions/user/ command: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/ Example output {"message":"User does not have permission for repo."}
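If you need to apply the same permission to several repositories, the PUT endpoint shown above can be scripted. The following shell loop is a minimal sketch that assumes the same placeholder hostname and bearer token used in the examples above, plus a hypothetical list of repository names:

# Grant the same role to one account across several repositories.
for repo in repo1 repo2 repo3; do
  curl -X PUT \
    -H "Authorization: Bearer <bearer_token>" \
    -H "Content-Type: application/json" \
    -d '{"role": "read"}' \
    "https://<quay-server.example.com>/api/v1/repository/<namespace>/${repo}/permissions/user/<username>"
done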
[ "curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions\"", "{\"permissions\": [{\"repository\": {\"name\": \"api-repo\", \"is_public\": true}, \"role\": \"admin\"}]}", "curl -X PUT -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{ \"role\": \"<role>\" }' \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"", "{\"name\": \"testteam\", \"description\": \"\", \"can_view\": true, \"role\": \"creator\", \"avatar\": {\"name\": \"testteam\", \"hash\": \"827f8c5762148d7e85402495b126e0a18b9b168170416ed04b49aae551099dc8\", \"color\": \"#ff7f0e\", \"kind\": \"team\"}, \"new_team\": false}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"<admin_read_or_write>\", \"delegate\": { \"name\": \"<username>\", \"kind\": \"user\" }, \"activating_user\": { \"name\": \"<robot_name>\" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes", "{\"activating_user\": {\"name\": \"test-org+test\", \"is_robot\": true, \"kind\": \"user\", \"is_org_member\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}, \"delegate\": {\"name\": \"testuser\", \"is_robot\": false, \"kind\": \"user\", \"is_org_member\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}}, \"role\": \"admin\", \"id\": \"977dc2bc-bc75-411d-82b3-604e5b79a493\"}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"write\" }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid>", "{\"activating_user\": {\"name\": \"test-org+test\", \"is_robot\": true, \"kind\": \"user\", \"is_org_member\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}, \"delegate\": {\"name\": \"testuser\", \"is_robot\": false, \"kind\": \"user\", \"is_org_member\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}}, \"role\": \"write\", \"id\": \"977dc2bc-bc75-411d-82b3-604e5b79a493\"}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes", "{\"prototypes\": []}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"admin\"}' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>", "{\"role\": \"admin\", \"name\": \"quayadmin+test\", \"is_robot\": true, \"avatar\": {\"name\": \"quayadmin+test\", \"hash\": \"ca9afae0a9d3ca322fc8a7a866e8476dd6c98de543decd186ae090e420a88feb\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: 
application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/", "{\"message\":\"User does not have permission for repo.\"}" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/managing_access_and_permissions/role-based-access-control
2.3.2. Configuring a Single Storage-Based Fence Device for a Node
2.3.2. Configuring a Single Storage-Based Fence Device for a Node When using non-power fencing methods (that is, SAN/storage fencing) to fence a node, you must configure unfencing for the fence device. This ensures that a fenced node is not re-enabled until the node has been rebooted. When you configure unfencing for a node, you specify a device that mirrors the corresponding fence device you have configured for the node with the notable addition of the explicit action of on or enable . For more information about unfencing a node, see the fence_node (8) man page. Use the following procedure to configure a node with a single storage-based fence device that uses a fence device named sanswitch1 , which uses the fence_sanbox2 fencing agent. Add a fence method for the node, providing a name for the fence method. For example, to configure a fence method named SAN for the node node-01.example.com in the configuration file on the cluster node node-01.example.com , execute the following command: Add a fence instance for the method. You must specify the fence device to use for the node, the node this instance applies to, the name of the method, and any options for this method that are specific to this node: For example, to configure a fence instance in the configuration file on the cluster node node-01.example.com that uses the SAN switch power port 11 on the fence device named sanswitch1 to fence cluster node node-01.example.com using the method named SAN , execute the following command: To configure unfencing for the storage-based fence device on this node, execute the following command: You will need to add a fence method for each node in the cluster. The following commands configure a fence method for each node with the method name SAN . The device for the fence method specifies sanswitch1 as the device name, which is a device previously configured with the --addfencedev option, as described in Section 2.1, "Configuring Fence Devices" . Each node is configured with a unique SAN physical port number: The port number for node-01.example.com is 11 , the port number for node-02.example.com is 12 , and the port number for node-03.example.com is 13 . Example 2.2, " cluster.conf After Adding Storage-Based Fence Methods " shows a cluster.conf configuration file after you have added fencing methods, fencing instances, and unfencing to each node in the cluster. Example 2.2. cluster.conf After Adding Storage-Based Fence Methods Note that when you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes.
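One way to propagate and activate the updated configuration file across the cluster is the ccs sync option. The following command is a sketch that assumes node01.example.com is the host you have been running the ccs commands against and that the ricci service is running on all cluster nodes:

ccs -h node01.example.com --sync --activate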
[ "ccs -h host --addmethod method node", "ccs -h node01.example.com --addmethod SAN node01.example.com", "ccs -h host --addfenceinst fencedevicename node method [ options ]", "ccs -h node01.example.com --addfenceinst sanswitch1 node01.example.com SAN port=11", "ccs -h host --addunfence fencedevicename node action=on|off", "ccs -h node01.example.com --addmethod SAN node01.example.com ccs -h node01.example.com --addmethod SAN node02.example.com ccs -h node01.example.com --addmethod SAN node03.example.com ccs -h node01.example.com --addfenceinst sanswitch1 node01.example.com SAN port=11 ccs -h node01.example.com --addfenceinst sanswitch1 node02.example.com SAN port=12 ccs -h node01.example.com --addfenceinst sanswitch1 node03.example.com SAN port=13 ccs -h node01.example.com --addunfence sanswitch1 node01.example.com port=11 action=on ccs -h node01.example.com --addunfence sanswitch1 node02.example.com port=12 action=on ccs -h node01.example.com --addunfence sanswitch1 node03.example.com port=13 action=on", "<cluster name=\"mycluster\" config_version=\"3\"> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> <method name=\"SAN\"> <device name=\"sanswitch1\" port=\"11\"/> </method> </fence> <unfence> <device name=\"sanswitch1\" port=\"11\" action=\"on\"/> </unfence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> <method name=\"SAN\"> <device name=\"sanswitch1\" port=\"12\"/> </method> </fence> <unfence> <device name=\"sanswitch1\" port=\"12\" action=\"on\"/> </unfence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> <method name=\"SAN\"> <device name=\"sanswitch1\" port=\"13\"/> </method> </fence> <unfence> <device name=\"sanswitch1\" port=\"13\" action=\"on\"/> </unfence> </clusternode> </clusternodes> <fencedevices> <fencedevice agent=\"fence_sanbox2\" ipaddr=\"san_ip_example\" login=\"login_example\" name=\"sanswitch1\" passwd=\"password_example\"/> </fencedevices> <rm> </rm> </cluster>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s2-single-storagefence-config-ccs-CA
Chapter 58. Installation and Booting
Chapter 58. Installation and Booting Selecting the Lithuanian language causes the installer to crash If you select the Lithuanian (Lietuvių) language on the first screen of the graphical installer and press Continue (Tęsti), the installer crashes and displays a traceback message. To work around this problem, either use a different language, or avoid the graphical installer and use a different approach such as the text mode or a Kickstart installation. (BZ# 1527319 ) oscap-anaconda-addon fails to remediate when installing in TUI using Kickstart The OpenSCAP Anaconda add-on fails to fully remediate a machine to the specified security policy when the system is installed using a Kickstart file that sets installation display mode to the text-based user interface (TUI) using the text Kickstart command. The problem occurs because packages required for the remediation are not installed. To work around this problem, you can either use the graphical installer or add packages required by the security policy to the %packages section of the Kickstart file manually. (BZ# 1547609 ) The grub2-mkimage command fails on UEFI systems by default The grub2-mkimage command may fail on UEFI systems with the following error message: This error is caused by the grub2-efi-x64-modules package missing from the system. The package is missing due to a known issue where it is not part of the default installation, and it is not marked as a dependency for grub2-tools which provides the grub2-mkimage command. The error also causes some other tools which depend on it, such as ReaR , to fail. To work around this problem, install the grub2-efi-x64-modules package, either manually using Yum , or by adding it to the Kickstart file used for installing the system. (BZ# 1512493 ) Kernel panic during RHEL 7.5 installation on HPE BL920s Gen9 systems A known issue related to the fix for the Meltdown vulnerability causes a kernel panic with a NULL pointer dereference during the installation of Red Hat Enterprise Linux 7.5 on HPE BL920s Gen2 (Superdome 2) systems. When the problem appears, the following error message is displayed: Then the system reboots, or enters an otherwise faulty state. There are multiple possible workarounds for this problem: Add the nopti option to the kernel command line using the boot loader. Once the system finishes booting, upgrade to the latest RHEL 7.5 kernel. Install RHEL 7.4, and then upgrade to the latest RHEL 7.5 kernel. Install RHEL 7.5 on a single blade. Once the system is installed, upgrade to the latest RHEL 7.5 kernel, and then add additional blades as required. (BZ#1540061) The READONLY=yes option is not sufficient to configure a read-only system In Red Hat Enterprise Linux 6, the READONLY=yes option in the /etc/sysconfig/readonly-root file was used to configure a read-only system partition. In Red Hat Enterprise Linux 7, the option is no longer sufficient, because systemd uses a new approach to mounting the system partition. To configure a read-only system in Red Hat Enterprise Linux 7: Set the READONLY=yes option in /etc/sysconfig/readonly-root . Add the ro option to the root mount point in the /etc/fstab file. (BZ# 1444018 )
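For example, a root file system entry in /etc/fstab with the ro option added might look like the following line; the UUID and the xfs file system type are placeholders that depend on your system:

UUID=<root_partition_uuid> / xfs defaults,ro 0 0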
[ "error: cannot open `/usr/lib/grub/x86_64-efi/moddep.lst': No such file or directory.", "WARNING: CPU: 576 PID: 3924 at kernel/workqueue.c:1518__queue_delayed_work+0x184/0x1a0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/known_issues_installation_and_booting
Chapter 2. Configuring an Azure Stack Hub account
Chapter 2. Configuring an Azure Stack Hub account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. To increase an account limit, file a support request on the Azure portal. For more information, see Request a quota limit increase for Azure Deployment Environments resources . Additional resources Optimizing storage 2.2. 
Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . 2.3. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 2.4. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. 
By using the --years option you can extend the validity of your service principal. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": <service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources About the Cloud Credential Operator 2.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster on Azure Stack Hub with customizations Install an OpenShift Container Platform cluster on Azure Stack Hub with user-provisioned infrastructure by following Installing a cluster on Azure Stack Hub using ARM templates .
[ "az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1", "az cloud set -n AzureStackCloud", "az cloud update --profile 2019-03-01-hybrid", "az login", "az account list --refresh", "[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_azure_stack_hub/installing-azure-stack-hub-account
Chapter 6. Installing a cluster on vSphere with network customizations
Chapter 6. Installing a cluster on vSphere with network customizations In OpenShift Container Platform version 4.12, you can install a cluster on VMware vSphere infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. Verify that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.3. 
VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 6.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Important Installing a cluster on VMware vSphere versions 7.0 and 7.0 Update 1 is deprecated. These versions are still fully supported, but all vSphere 6.x versions are no longer supported. Version 4.12 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. To update the hardware version for your vSphere virtual machines, see the "Updating hardware on nodes running in vSphere" article in the Updating clusters section. Table 6.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 or later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 6.4. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. 
Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 6.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 6.5.1. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that you provided, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, your vSphere account must include privileges for reading and creating the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. Example 6.1. Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 6.2. Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual 
machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 6.3. 
Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses infrastructure that you provided, you must create the following resources in your vCenter instance: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. 
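For illustration, with an ISC DHCP server a persistent address can be expressed as a host reservation that matches the MAC address of the VM network interface. The subnet, MAC address, and hostnames below are hypothetical placeholders that reuse the example network from this document; the equivalent configuration depends on your DHCP server:
subnet 192.168.1.0 netmask 255.255.255.0 {
  option routers 192.168.1.1;              # default gateway supplied with the lease
  option domain-name-servers 192.168.1.5;  # persistent DNS server information
  host control-plane0 {
    hardware ethernet 00:50:56:ab:cd:01;   # must use a MAC address from a VMware OUI range
    fixed-address 192.168.1.97;
    option host-name "control-plane0.ocp4.example.com";
  }
}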
In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you specify nodes or groups of nodes on different VLANs for a cluster that you want to install on user-provisioned infrastructure, you must ensure that machines in your cluster meet the requirements outlined in the "Network connectivity requirements" section of the Networking requirements for user-provisioned infrastructure document. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 6.3. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Additional resources Creating a compute machine set on vSphere 6.5.2. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 6.4. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. 
See Red Hat Enterprise Linux technology capabilities and limits . 6.5.3. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.5. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) [1] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [2] 2 8 GB 100 GB 300 OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 6.5.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. 
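For reference, when you boot the RHCOS ISO you can supply static addressing as dracut-style kernel arguments instead of relying on DHCP. The values below are a sketch that reuses the example addresses from this document; confirm the exact argument syntax in the RHCOS installation section referenced above:
ip=192.168.1.97::192.168.1.1:255.255.255.0:control-plane0.ocp4.example.com:ens192:none
nameserver=192.168.1.5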
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 6.5.5.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 6.5.5.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 6.6. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 6.7. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 6.8. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:FF:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. 
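As an illustration only, a minimal chrony configuration that points nodes at a local enterprise time server might look like the following. The server name is a placeholder, and on RHCOS the file is normally delivered through a MachineConfig rather than edited by hand, as described in the chrony documentation referenced below:
# /etc/chrony.conf (sketch)
server clock.example.com iburst    # local enterprise NTP server
driftfile /var/lib/chrony/drift
makestep 1.0 3                     # step the clock on large initial offsets
rtcsync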
For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 6.5.6. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 6.9. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. 
Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 6.5.6.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 6.4. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 6.5. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 
2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 6.5.7. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 6.10. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. 
A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 6.11. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 6.5.7.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 6.6. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 6.6. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
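After preparation, you can spot-check the load balancer front end from a host outside the cluster to catch firewall or listener problems early. The commands below are an illustrative sketch that assumes the nmap-ncat package is installed and reuses the example cluster name from this document:
USD nc -vz api.ocp4.example.com 6443
USD nc -vz api.ocp4.example.com 22623
USD nc -vz test.apps.ocp4.example.com 443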
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 6.7. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 6.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
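If you are not sure whether the agent already holds your key, you can list the loaded identities first; this assumes a standard OpenSSH client:
USD ssh-add -l
If nothing is loaded, the command prints The agent has no identities. and exits with a non-zero status; if the agent is not running at all, it reports that it cannot connect to the authentication agent.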
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.9. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Table 6.12.
Example of a configuration with multiple vSphere datacenters that run in a single VMware vCenter Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b 6.10. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.11. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . 
Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 6.11.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 12 resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 13 diskType: thin 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, ( - ), and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 The fully-qualified hostname or IP address of the vCenter server. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 8 The name of the user for accessing the server. 9 The password associated with the vSphere user. 10 The vSphere datacenter. 11 The default vSphere datastore to use. 12 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter.
13 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_datacenter/host/example_cluster/Resources . If you are providing the infrastructure for the cluster, omit this parameter. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 16 The pull secret that you obtained from OpenShift Cluster Manager Hybrid Cloud Console . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 6.11.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.11.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file to deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important The example uses the govc command. The govc command is an open source command available from VMware. The govc command is not available from Red Hat. Red Hat Support does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. 
Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Note You cannot change a failure domain after you install an OpenShift Container Platform cluster on the VMware vSphere platform. You can add additional failure domains after cluster installation. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter cluster object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. Sample install-config.yaml file with multiple datacenters defined in a VMware vCenter apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" controlPlane: name: master replicas: 3 vsphere: zones: 3 - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 9 cluster: cluster 10 resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: "/<datacenter1>/host/<cluster1>" 18 resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" 19 networks: 20 - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" # ... 1 You must set TechPreviewNoUpgrade as the value for this parameter, so that you can use the VMware vSphere region and zone enablement feature. 2 3 An optional parameter for specifying a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category.
If you do not define this parameter, nodes will be distributed among all defined failure-domains. 4 5 6 7 8 9 10 11 The default vCenter topology. The installation program uses this topology information to deploy the bootstrap node. Additionally, the topology defines the default datastore for vSphere persistent volumes. 12 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. If you do not define this parameter, the installation program uses the default vCenter topology. 13 Defines the name of the failure domain. Each failure domain is referenced in the zones parameter to scope a machine pool to the failure domain. 14 You define a region by using a tag from the openshift-region tag category. The tag must be attached to the vCenter datacenter. 15 You define a zone by using a tag from the openshift-zone tag category. The tag must be attached to the vCenter datacenter. 16 Specifies the vCenter resources associated with the failure domain. 17 An optional parameter for defining the vSphere datacenter that is associated with a failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 18 An optional parameter for stating the absolute file path for the compute cluster that is associated with the failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 19 An optional parameter for the installer-provisioned infrastructure. The parameter sets the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_datacenter/host/example_cluster/Resources . 20 An optional parameter that lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. If you do not define this parameter, the installation program uses the default vCenter topology. 21 An optional parameter for specifying a datastore to use for provisioning volumes. If you do not define this parameter, the installation program uses the default vCenter topology. 6.12. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. 
However, you can further customize the network plugin during phase 2. 6.13. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute machineSets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 6.14. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 6.14.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 6.13. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. 
This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 6.14. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 6.15. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. 
The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 6.16. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 6.17. 
policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 6.18. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. Note In OpenShift Container Platform 4.12, egress IP is only assigned to the primary interface. Consequently, setting routingViaHost to true will not work for egress IP in OpenShift Container Platform 4.12. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPsec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 6.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 6.15. Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory: the auth directory, which contains the kubeconfig and kubeadmin-password files, together with bootstrap.ign , master.ign , metadata.json , and worker.ign . 6.16. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 6.17. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file.
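This procedure does not prescribe a particular web server; any HTTP server that the machines on the vSphere network can reach is sufficient. The following sketch stages the file under an Apache-style document root and verifies that it is reachable; the document root path and the <http_server> host are placeholders, not values from this installation.
# Copy the bootstrap Ignition config into the web server document root (path is an assumption)
$ sudo cp <installation_directory>/bootstrap.ign /var/www/html/bootstrap.ign
$ sudo chmod 0644 /var/www/html/bootstrap.ign
# Confirm that the file is served; expect an HTTP 200 response with a non-zero Content-Length
$ curl -sI http://<http_server>/bootstrap.ign
The URL that you verify here is the value that you supply as the source in the merge Ignition config in the next step.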
Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. 
On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: Example command USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before you boot a VM from an OVA in vSphere: Example command USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . 
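If you script VM provisioning rather than clicking through the vSphere Client, the same advanced parameters can be set with the govc command that this document already uses for network kargs. The following is a sketch only; the VM path and the encoded file name are placeholders for the machine type that you are cloning, and the clone must still be powered off when you set them.
# Set the guestinfo and disk parameters on a powered-off clone (VM path and file are placeholders)
$ govc vm.change -vm "/<datacenter_name>/vm/<folder_name>/control-plane-0" \
  -e "guestinfo.ignition.config.data=$(cat <installation_directory>/master.64)" \
  -e "guestinfo.ignition.config.data.encoding=base64" \
  -e "disk.EnableUUID=TRUE" \
  -e "stealclock.enable=TRUE"
Setting the parameters this way is equivalent to adding them in the Attribute and Values fields; use whichever method fits your provisioning workflow.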
Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied steps Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 6.18. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. After your vSphere template deploys in your OpenShift Container Platform cluster, you can deploy a virtual machine (VM) for a machine in that cluster. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select storage tab, select storage for your configuration and disk files. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. If many networks exist, select Add New Device > Network Adapter , and then enter your network information in the fields provided by the New Network menu item. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . steps Continue to create more compute machines for your cluster. 6.19. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. 
However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. 
This example places the /var directory on a separate partition: variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 6.20. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... 
INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 6.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.22. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 6.22.1. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
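A quick way to confirm this prerequisite before you start watching Operators is to check that the control plane nodes report Ready and that the ClusterVersion object exists; these are standard oc commands with no assumptions beyond the default node role labels.
# The three control plane machines should report the Ready status
$ oc get nodes -l node-role.kubernetes.io/master
# The ClusterVersion object is created once the control plane has initialized
$ oc get clusterversion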
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. 6.22.2. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 6.22.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 6.22.3.1. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. 
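The section on the removed image registry states that you must switch the Image Registry Operator managementState from Removed to Managed, but it does not show the edit. Whether you use block or file storage, one non-interactive way to make that change is a merge patch, shown here as a sketch that follows the same oc patch pattern as the procedure below; remember that you must also configure storage, or the registry remains unavailable.
# Switch the Image Registry Operator from Removed to Managed
$ oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge \
  --patch '{"spec":{"managementState":"Managed"}}'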
Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 6.23. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
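One way to confirm the second prerequisite, that the registry portion of the initial Operator configuration took effect, is to check the image-registry Operator on its own before you run the broader checks that follow; these are standard oc commands that assume only the default openshift-image-registry namespace.
# The image-registry Operator should report Available once its storage is configured
$ oc get clusteroperator image-registry
# The registry pod should be running in its default namespace
$ oc get pods -n openshift-image-registry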
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 6.24. Configuring vSphere DRS anti-affinity rules for control plane nodes vSphere Distributed Resource Scheduler (DRS) anti-affinity rules can be configured to support higher availability of OpenShift Container Platform Control Plane nodes. Anti-affinity rules ensure that the vSphere Virtual Machines for the OpenShift Container Platform Control Plane nodes are not scheduled to the same vSphere Host. Important The following information applies to compute DRS only and does not apply to storage DRS. The govc command is an open-source command available from VMware; it is not available from Red Hat. The govc command is not supported by the Red Hat support. Instructions for downloading and installing govc are found on the VMware documentation website. Create an anti-affinity rule by running the following command: Example command USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster \ -enable \ -anti-affinity master-0 master-1 master-2 After creating the rule, your control plane nodes are automatically migrated by vSphere so they are not running on the same hosts. This might take some time while vSphere reconciles the new rule. Successful command completion is shown in the following procedure. Note The migration occurs automatically and might cause brief OpenShift API outage or latency until the migration finishes. The vSphere DRS anti-affinity rules need to be updated manually in the event of a control plane VM name change or migration to a new vSphere Cluster. Procedure Remove any existing DRS anti-affinity rule by running the following command: USD govc cluster.rule.remove \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster Example Output [13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK Create the rule again with updated names by running the following command: USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyOtherCluster \ -enable \ -anti-affinity master-0 master-1 master-2 6.25. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. 
As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume (a sample clone manifest is sketched after the Next steps section below). Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 6.26. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.27. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.
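Referring back to the "Clone the persistent volume" step in the backup procedure of section 6.25 above: a minimal sketch of one way to create such a clone follows, assuming a CSI storage class that supports volume cloning is available in the cluster. The claim names, namespace, storage class, and size are hypothetical and must match your environment; the clone must be at least as large as the source claim.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data-clone       # hypothetical name for the cloned claim
  namespace: my-namespace       # hypothetical namespace of the source claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: thin-csi    # assumption: a CSI storage class that supports cloning
  resources:
    requests:
      storage: 100Gi            # must be at least the size of the source claim
  dataSource:
    kind: PersistentVolumeClaim
    name: my-app-data           # hypothetical name of the source claim to clone

After the cloned claim is bound, create the backup from the clone and then delete the clone, as described in the procedure.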
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 12 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 13 diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "govc tags.category.create -d \"OpenShift region\" openshift-region", "govc tags.category.create -d \"OpenShift zone\" openshift-zone", "govc tags.create -c <region_tag_category> <region_tag>", "govc tags.create -c <zone_tag_category> <zone_tag>", "govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>", "govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1", "apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: 
\"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? 
SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 
True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2", "govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster", "[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK", "govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_vsphere/installing-vsphere-network-customizations
4.12. Configuring ACPI For Use with Integrated Fence Devices
4.12. Configuring ACPI For Use with Integrated Fence Devices If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing. If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, shutdown -h now ). Otherwise, if ACPI Soft-Off is enabled, an integrated fence device can take four or more seconds to turn off a node (refer to note that follows). In addition, if ACPI Soft-Off is enabled and a node panics or freezes during shutdown, an integrated fence device may not be able to turn off the node. Under those circumstances, fencing is delayed or unsuccessful. Consequently, when a node is fenced with an integrated fence device and ACPI Soft-Off is enabled, a cluster recovers slowly or requires administrative intervention to recover. Note The amount of time required to fence a node depends on the integrated fence device used. Some integrated fence devices perform the equivalent of pressing and holding the power button; therefore, the fence device turns off the node in four to five seconds. Other integrated fence devices perform the equivalent of pressing the power button momentarily, relying on the operating system to turn off the node; therefore, the fence device turns off the node in a time span much longer than four to five seconds. To disable ACPI Soft-Off, use chkconfig management and verify that the node turns off immediately when fenced. The preferred way to disable ACPI Soft-Off is with chkconfig management: however, if that method is not satisfactory for your cluster, you can disable ACPI Soft-Off with one of the following alternate methods: Changing the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay Note Disabling ACPI Soft-Off with the BIOS may not be possible with some computers. Appending acpi=off to the kernel boot command line of the /boot/grub/grub.conf file Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. The following sections provide procedures for the preferred method and alternate methods of disabling ACPI Soft-Off: Section 4.12.1, "Disabling ACPI Soft-Off with chkconfig Management" - Preferred method Section 4.12.2, "Disabling ACPI Soft-Off with the BIOS" - First alternate method Section 4.12.3, "Disabling ACPI Completely in the grub.conf File" - Second alternate method 4.12.1. Disabling ACPI Soft-Off with chkconfig Management You can use chkconfig management to disable ACPI Soft-Off either by removing the ACPI daemon ( acpid ) from chkconfig management or by turning off acpid . Note This is the preferred method of disabling ACPI Soft-Off. Disable ACPI Soft-Off with chkconfig management at each cluster node as follows: Run either of the following commands: chkconfig --del acpid - This command removes acpid from chkconfig management. - OR - chkconfig --level 345 acpid off - This command turns off acpid . Reboot the node. When the cluster is configured and running, verify that the node turns off immediately when fenced. For information on testing a fence device, see How to test fence devices and fencing configuration in a RHEL 5, 6, or 7 High Availability cluster? .
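As a rough illustration of the preferred chkconfig method described above, the following commands are the ones named in the text; run them on each cluster node.

# Remove acpid from chkconfig management ...
chkconfig --del acpid
# ... or, alternatively, turn acpid off for runlevels 3, 4, and 5
chkconfig --level 345 acpid off
# Reboot the node so the change takes effect
reboot
# When the cluster is configured and running, fence the node and verify
# that it turns off immediately.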
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-acpi-ca
A.3. Synchronizing Quotas with the gfs2_quota Command
A.3. Synchronizing Quotas with the gfs2_quota Command GFS2 stores all quota information in its own internal file on disk. A GFS2 node does not update this quota file for every file system write; rather, by default it updates the quota file once every 60 seconds. This is necessary to avoid contention among nodes writing to the quota file, which would cause a slowdown in performance. As a user or group approaches their quota limit, GFS2 dynamically reduces the time between its quota-file updates to prevent the limit from being exceeded. The normal time period between quota synchronizations is a tunable parameter, quota_quantum . You can change this from its default value of 60 seconds using the quota_quantum= mount option, as described in Table 4.2, "GFS2-Specific Mount Options" . The quota_quantum parameter must be set on each node and each time the file system is mounted. Changes to the quota_quantum parameter are not persistent across unmounts. You can update the quota_quantum value with the mount -o remount . You can use the gfs2_quota sync command to synchronize the quota information from a node to the on-disk quota file between the automatic updates performed by GFS2. Usage Synchronizing Quota Information MountPoint Specifies the GFS2 file system to which the actions apply. Tuning the Time Between Synchronizations MountPoint Specifies the GFS2 file system to which the actions apply. secs Specifies the new time period between regular quota-file synchronizations by GFS2. Smaller values may increase contention and slow down performance. Examples This example synchronizes the quota information from the node it is run on to file system /mygfs2 . This example changes the default time period between regular quota-file updates to one hour (3600 seconds) for file system /mnt/mygfs2 when remounting that file system on logical volume /dev/volgroup/logical_volume .
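Because changes to quota_quantum are not persistent across unmounts, one way to make the one-hour interval from the example permanent is to add the option to the file system's /etc/fstab entry. This is an assumption about how the file system is mounted, not part of the documented procedure; the device and mount point below are the ones used in the examples above.

# Illustrative /etc/fstab entry: mount the GFS2 file system with a one-hour
# quota synchronization interval on every mount
/dev/volgroup/logical_volume  /mnt/mygfs2  gfs2  defaults,quota_quantum=3600  0 0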
[ "gfs2_quota sync -f MountPoint", "mount -o quota_quantum= secs ,remount BlockDevice MountPoint", "gfs2_quota sync -f /mygfs2", "mount -o quota_quantum=3600,remount /dev/volgroup/logical_volume /mnt/mygfs2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-quotaapp-synchquota
6.4. Setting the Size of the DN Cache
6.4. Setting the Size of the DN Cache The entryrdn index is used to associate DNs and RDNs with entries. It enables the server to efficiently perform subtree rename , entry move , and moddn operations. The DN cache is used to cache the in-memory representation of the entryrdn index to avoid expensive file I/O and transformation operations. For best performance, especially with but not limited to entry rename and move operations, set the DN cache to a size that enables Directory Server to cache all DNs in the database. If a DN is not stored in the cache, Directory Server reads the DN from the entryrdn.db index database file and converts the DNs from the on-disk format to the in-memory format. DNs that are stored in the cache enable the server to skip the disk I/O and conversion steps. 6.4.1. Setting the Size of the DN Cache Using the Command Line To set the DN cache size of a database using the command line: Display the suffixes and their corresponding back end: This command displays the name of the back end database for each suffix. You require the suffix's database name in the next step. To disable database and entry cache auto-sizing, enter: To set the DN cache size, enter: This command sets the DN cache for the userRoot database to 20 megabytes. Restart the Directory Server instance: 6.4.2. Setting the Size of the DN Cache Using the Web Console To set the DN cache size of a database using the Web Console: Open the Directory Server user interface in the web console. For details, see the Logging Into Directory Server Using the Web Console section in the Red Hat Directory Server Administration Guide . Select the instance. On the Database tab, select the suffix for which you want to set the DN cache size. Enter the size in bytes into the DN Cache Size (bytes) field. Click Save Configuration . Click the Actions button, and select Restart Instance .
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com suffix list dc=example,dc=com ( userroot )", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --cache-autosize=0", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix set --dncache-memsize= 20971520 userRoot", "dsctl instance_name restart" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/tuning-dn-cache
12.5. Removing an Object Class
12.5. Removing an Object Class This section describes how to delete an object class using the command line and the web console. Warning Do not delete default schema elements. They are required by Directory Server. 12.5.1. Removing an Object Class Using the Command Line Use the ldapmodify utility to delete an object class entry. For example, to delete the examplePerson object class: Remove the unwanted attributes from any entries that use them. For details, see Section 12.8, "Removing an Attribute" . Delete the object class entry: For further details about object class definitions, see Section 12.1.2, "Object Classes" . 12.5.2. Removing an Object Class Using the Web Console To remove an object class using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select Schema → Objectclasses . Click the Choose Action button next to the object class entry you want to remove. Select Delete Objectclass . Click Yes to confirm.
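The text above mentions the ldapmodify utility, while the command shown for this section uses dsconf. For reference, a hedged sketch of the ldapmodify approach follows. Deleting a schema value this way requires the objectClasses value to match the stored definition exactly; the OID and definition shown here are placeholders, not the real examplePerson definition.

# Sketch only: the objectClasses value must be copied verbatim from cn=schema
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x <<EOF
dn: cn=schema
changetype: modify
delete: objectClasses
objectClasses: ( 2.16.840.1.133769.999.1.1 NAME 'examplePerson' SUP inetOrgPerson STRUCTURAL )
EOF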
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com schema objectclasses delete examplePerson" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/removing_an_object_class
Chapter 4. Determining hardware and OS configuration
Chapter 4. Determining hardware and OS configuration CPU The more physical cores that are available to Satellite, the higher the throughput that can be achieved for its tasks. Some Satellite components, such as Puppet and PostgreSQL, are CPU-intensive applications and benefit significantly from a higher number of available CPU cores. Memory The more memory available in the system running Satellite, the better the response times for Satellite operations. Because Satellite uses PostgreSQL as its database solution, any additional memory, coupled with the tunings, improves application response times due to increased data retention in memory. Disk Because Satellite performs heavy IOPS due to repository synchronizations, package data retrieval, and high-frequency database updates for the subscription records of content hosts, it is advised that Satellite be installed on a high-speed SSD to avoid performance bottlenecks caused by increased disk reads or writes. Satellite requires disk IO to be at or above 60 - 80 megabytes per second of average throughput for read operations. Anything below this value can have severe implications for the operation of Satellite. Satellite components such as PostgreSQL benefit from using SSDs due to their lower latency compared to HDDs. Network The communication between the Satellite Server and Capsules is impacted by network performance. A network with minimal jitter and low latency is required for reliable operations such as Satellite Server and Capsule synchronization; at a minimum, ensure that the network is not causing connection resets. Server Power Management By default, your server is likely configured to conserve power. While this is a good approach to keep maximum power consumption in check, it has the side effect of lowering the performance that Satellite can achieve. For a server running Satellite, it is recommended to set the BIOS to run the system in performance mode to boost the maximum performance levels that Satellite can achieve. 4.1. Benchmarking disk performance We are working to update satellite-maintain to only warn users when its internal quick storage benchmark results in numbers below our recommended throughput. We are also working on an updated benchmark script that you can run (which will likely be integrated into satellite-maintain in the future) to get more accurate, real-world storage information. Note You may have to temporarily reduce the RAM in order to run the I/O benchmark. For example, if your Satellite Server has 256 GiB RAM, the tests would require 512 GiB of storage to run. As a workaround, you can add the mem=20G kernel option in GRUB during system boot to temporarily reduce the size of the RAM. The benchmark creates a file twice the size of the RAM in the specified directory and executes a series of storage I/O tests against it. The size of the file ensures that the test is not just testing the filesystem caching. If you benchmark other filesystems, for example smaller volumes such as PostgreSQL storage, you might have to reduce the RAM size as described above. If you are using different storage solutions such as SAN or iSCSI, you can expect different performance. Red Hat recommends that you stop all services before executing this script; you will be prompted to do so. This test does not use direct I/O and will utilize file caching as normal operations would. You can find our first version of the script storage-benchmark .
To execute it, download the script to your Satellite, make it executable, and run: As noted in the README block in the script, you generally want to see an average of 100MB/sec or higher from the tests: Local SSD based storage should give values of 600MB/sec or higher. Spinning disks should give values in the range of 100 - 200MB/sec or higher. If you see values below this, please open a support ticket for assistance. For more information, see Impact of Disk Speed on Satellite Operations . 4.2. Enabling tuned profiles On bare-metal, Red Hat recommends running the throughput-performance tuned profile on Satellite Server and Capsules. On virtual machines, Red Hat recommends running the virtual-guest profile. Procedure Check if tuned is running: If tuned is not running, enable it: Optional: View a list of available tuned profiles: Enable a tuned profile depending on your scenario: 4.3. Disable Transparent Hugepage Transparent Hugepage is a memory management technique used by the Linux kernel to reduce the overhead of using the Translation Lookaside Buffer (TLB) by using larger memory pages. Because databases have sparse memory access patterns rather than contiguous memory access patterns, database workloads often perform poorly when Transparent Hugepage is enabled. To improve PostgreSQL and Redis performance, disable Transparent Hugepage. In deployments where the databases are running on separate servers, there may be a small benefit to using Transparent Hugepage on the Satellite Server only. For more information on how to disable Transparent Hugepage, see How to disable transparent hugepages (THP) on Red Hat Enterprise Linux .
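The Transparent Hugepage procedure itself is only linked above; as a hedged sketch (the linked Red Hat article is authoritative), disabling THP usually comes down to a kernel boot parameter and a reboot:

# Check the current setting; the active value is shown in brackets
cat /sys/kernel/mm/transparent_hugepage/enabled

# Add transparent_hugepage=never to the kernel command line of all installed kernels
grubby --update-kernel=ALL --args="transparent_hugepage=never"

# Reboot so the new kernel parameter takes effect
reboot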
[ "./storage-benchmark /var/lib/pulp", "systemctl status tuned", "systemctl enable --now tuned", "tuned-adm list", "tuned-adm profile \" My_Tuned_Profile \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/tuning_performance_of_red_hat_satellite/determining_hardware_and_os_configuration_performance-tuning
Chapter 1. About Red Hat Discovery
Chapter 1. About Red Hat Discovery Red Hat Discovery is designed to help users collect data about their usage of specific Red Hat software. By using Discovery, users can reduce the amount of time and effort that is required to calculate and report usage of those Red Hat products. Learn more To learn more about the purpose, benefits, and characteristics of Discovery, see the following information: What is Red Hat Discovery? To learn more about the products and product versions that Discovery can find and inspect, see the following information: What products does Red Hat Discovery find? To evaluate whether Discovery is a correct solution for you, see the following information: Is Red Hat Discovery right for me? 1.1. What is Red Hat Discovery? Red Hat Discovery is an inspection and reporting tool. It is designed to find, identify, and report environment data, or facts, such as the number of physical and virtual systems on a network, their operating systems, and other configuration data. In addition, it is designed to find, identify, and report more detailed facts for some versions of key Red Hat packages and products for the IT resources in that network. The ability to inspect the software and systems that are running on your network improves your ability to understand and report on your subscription usage. Ultimately, this inspection and reporting process is part of the larger system administration task of managing your inventories. Red Hat Discovery requires the configuration of two basic structures to access IT resources and run the inspection process. A credential contains user access data, such as the username and password or SSH key of a user with sufficient authority to run the inspection process on a particular source or some of the assets on that source. A source contains data about a single asset or multiple assets that are to be inspected. These assets can be physical machines, virtual machines, or containers, identified as hostnames, IP addresses, IP ranges, or subnets. These assets can also be a systems management solution such as vCenter Server or Red Hat Satellite Server, or can be clusters deployed on Red Hat OpenShift Container Platform. Note Currently, the only virtualized deployment that discovery can scan with a specialized source for virtualization infrastructure is VMware vCenter. No other virtualization infrastructure that is supported by Red Hat can be scanned with a specialized scan. General scans of your network might still find these assets, without the precise metadata returned by a specialized scan. You can save multiple credentials and sources to use with Discovery in various combinations as you run inspection processes, or scans . When you have completed a scan, you can access these facts in the output as a collection of formatted data, or report , to review the results. By default, the credentials and sources that are created during the use of Red Hat Discovery are encrypted in a database. The values are encrypted with AES-256 encryption. They are decrypted when the Red Hat Discovery server runs a scan with the use of a vault password to access the encrypted values that are stored in the database. Red Hat Discovery is an agentless inspection tool, so there is no need to install the tool on every source that is to be inspected. However, the system that Discovery is installed on must have access to the systems to be discovered and inspected. 1.2. What products does Red Hat Discovery find? Red Hat Discovery finds the following Red Hat products. 
For each version or release, the earliest version is listed, with later releases indicated as applicable. If a product has changed names recently so that you might be more familiar with the current name for that product, that name is provided as additional information. No later version is implied by the inclusion of a newer product name unless specific versions of that product are also listed. Red Hat Enterprise Linux Red Hat Enterprise Linux version 5 and later Red Hat Enterprise Linux version 6 and later Red Hat Enterprise Linux version 7 and later Red Hat Enterprise Linux version 8 and later Red Hat Enterprise Linux version 9 and later Red Hat Application Services products (formerly Red Hat Middleware) JBoss Enterprise Web Server version 1 and later; Red Hat JBoss Web Server 3.0.1 and later Red Hat JBoss Enterprise Application Platform version 4.2 and later, version 4.3 and later, version 5 and later, version 6 and later, version 7 and later Red Hat Fuse version 6.0 and later Red Hat Ansible Automation Platform Ansible Automation Platform version 2 and later Red Hat OpenShift Container Platform Red Hat OpenShift Container Platform version 4 and later Red Hat Advanced Cluster Security for Kubernetes Red Hat Advanced Cluster Security for Kubernetes version 4 and later Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Management for Kubernetes version 2 and later 1.3. Is Red Hat Discovery right for me? Red Hat Discovery is intended to help you find and understand your Red Hat product inventory, including unknown product usage across complex networks. The reports generated by Discovery are best understood through your partnership with a Red Hat Solution Architect (SA) or Technical Account Manager (TAM) or through the analysis and assistance supplied by the Subscription Education and Awareness Program (SEAP). Although you can install and use Discovery independently and then generate and view report data, the Discovery documentation does not provide any information to help you interpret report results. In addition, although Red Hat Support can provide some basic assistance related to installation and usage of Discovery, the support team does not provide any assistance to help you understand the reports. The Discovery tool does not automatically share data directly with Red Hat. Instead, you choose whether to prepare and send report data to Red Hat for ingestion by Red Hat tools and services. You can use the Discovery tool locally to scan your network for the Red Hat products that Discovery currently supports and then use the generated reports for your own internal purposes.
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/using_red_hat_discovery/assembly-about-common
Configuring dynamic plugins
Configuring dynamic plugins Red Hat Developer Hub 1.3 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/configuring_dynamic_plugins/index
6.7. Duplicate PV Warnings for Multipathed Devices
6.7. Duplicate PV Warnings for Multipathed Devices When using LVM with multipathed storage, some LVM commands (such as vgs or lvchange ) may display messages such as the following when listing a volume group or logical volume. After providing information on the root cause for these warnings, this section describes how to address this issue in the following two cases. The two devices displayed in the output are both single paths to the same device The two devices displayed in the output are both multipath maps 6.7.1. Root Cause of Duplicate PV Warning With a default configuration, LVM commands will scan for devices in /dev and check every resulting device for LVM metadata. This is caused by the default filter in the /etc/lvm/lvm.conf , which is as follows: When using Device Mapper Multipath or other multipath software such as EMC PowerPath or Hitachi Dynamic Link Manager (HDLM), each path to a particular logical unit number (LUN) is registered as a different SCSI device, such as /dev/sdb or /dev/sdc . The multipath software will then create a new device that maps to those individual paths, such as /dev/mapper/mpath1 or /dev/mapper/mpatha for Device Mapper Multipath, /dev/emcpowera for EMC PowerPath, or /dev/sddlmab for Hitachi HDLM. Since each LUN has multiple device nodes in /dev that point to the same underlying data, they all contain the same LVM metadata and thus LVM commands will find the same metadata multiple times and report them as duplicates. These duplicate messages are only warnings and do not mean the LVM operation has failed. Rather, they are alerting the user that only one of the devices has been used as a physical volume and the others are being ignored. If the messages indicate the incorrect device is being chosen or if the warnings are disruptive to users, then a filter can be applied to search only the necessary devices for physical volumes, and to leave out any underlying paths to multipath devices. 6.7.2. Duplicate Warnings for Single Paths The following example shows a duplicate PV warning in which the duplicate devices displayed are both single paths to the same device. In this case, both /dev/sdd and /dev/sdf can be found under the same multipath map in the output to the multipath -ll command. To prevent this warning from appearing, you can configure a filter in the /etc/lvm/lvm.conf file to restrict the devices that LVM will search for metadata. The filter is a list of patterns that will be applied to each device found by a scan of /dev (or the directory specified by the dir keyword in the /etc/lvm/lvm.conf file). Patterns are regular expressions delimited by any character and preceded by a (for accept) or r (for reject). The list is traversed in order, and the first regex that matches a device determines if the device will be accepted or rejected (ignored). Devices that don't match any patterns are accepted. For general information on LVM filters, see Section 4.5, "Controlling LVM Device Scans with Filters" . The filter you configure should include all devices that need to be checked for LVM metadata, such as the local hard drive with the root volume group on it and any multipathed devices. By rejecting the underlying paths to a multipath device (such as /dev/sdb , /dev/sdd , and so on) you can avoid these duplicate PV warnings, since each unique metadata area will only be found once on the multipath device itself. The following examples show filters that will avoid duplicate PV warnings due to multiple storage paths being available. 
This filter accepts the second partition on the first hard drive ( /dev/sda2 ) and any device-mapper-multipath devices, while rejecting everything else. This filter accepts all HP SmartArray controllers and any EMC PowerPath devices. This filter accepts any partitions on the first IDE drive and any multipath devices. Note When adding a new filter to the /etc/lvm/lvm.conf file, ensure that the original filter is either commented out with a # or is removed. Once a filter has been configured and the /etc/lvm/lvm.conf file has been saved, check the output of the pvscan and vgscan commands to ensure that no physical volumes or volume groups are missing. You can also test a filter on the fly, without modifying the /etc/lvm/lvm.conf file, by adding the --config argument to the LVM command, as in the following example. Note Testing filters using the --config argument will not make permanent changes to the server's configuration. Make sure to include the working filter in the /etc/lvm/lvm.conf file after testing. After configuring an LVM filter, it is recommended that you rebuild the initrd device with the dracut command so that only the necessary devices are scanned upon reboot. 6.7.3. Duplicate Warnings for Multipath Maps The following examples show a duplicate PV warning for two devices that are both multipath maps. In these examples we are not looking at two different paths, but two different devices. This situation is more serious than duplicate warnings for devices that are both single paths to the same device, since these warnings often mean that the machine has been presented devices which it should not be seeing (for example, LUN clones or mirrors). In this case, unless you have a clear idea of what devices should be removed from the machine, the situation could be unrecoverable. It is recommended that you contact Red Hat Technical Support to address this issue.
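Tying the steps above together, a short sketch follows; it uses the device-mapper-multipath filter from the examples and assumes the root volume group lives on /dev/sda2.

# Try the candidate filter without touching /etc/lvm/lvm.conf
pvs --config 'devices{ filter = [ "a|/dev/sda2$|", "a|/dev/mapper/mpath.*|", "r|.*|" ] }'

# After writing the filter to /etc/lvm/lvm.conf, confirm nothing went missing
pvscan
vgscan

# Rebuild the initramfs so only the filtered devices are scanned at boot
dracut --force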
[ "Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/dm-5 not /dev/sdd Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowerb not /dev/sde Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sddlmab not /dev/sdf", "filter = [ \"a/.*/\" ]", "Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using **/dev/sdd** not **/dev/sdf**", "filter = [ \"a|/dev/sda2USD|\", \"a|/dev/mapper/mpath.*|\", \"r|.*|\" ]", "filter = [ \"a|/dev/cciss/.*|\", \"a|/dev/emcpower.*|\", \"r|.*|\" ]", "filter = [ \"a|/dev/hda.*|\", \"a|/dev/mapper/mpath.*|\", \"r|.*|\" ]", "pvscan vgscan", "lvs --config 'devices{ filter = [ \"a|/dev/emcpower.*|\", \"r|.*|\" ] }'", "Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using **/dev/mapper/mpatha** not **/dev/mapper/mpathc**", "Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using **/dev/emcpowera** not **/dev/emcpowerh**" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/duplicate_pv_multipath
Chapter 41. identity
Chapter 41. identity This chapter describes the commands under the identity command. 41.1. identity provider create Create new identity provider Usage: Table 41.1. Positional arguments Value Summary <name> New identity provider name (must be unique) Table 41.2. Command arguments Value Summary -h, --help Show this help message and exit --remote-id <remote-id> Remote ids to associate with the identity provider (repeat option to provide multiple values) --remote-id-file <file-name> Name of a file that contains many remote ids to associate with the identity provider, one per line --description <description> New identity provider description --domain <domain> Domain to associate with the identity provider. if not specified, a domain will be created automatically. (Name or ID) --enable Enable identity provider (default) --disable Disable the identity provider Table 41.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 41.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 41.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 41.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 41.2. identity provider delete Delete identity provider(s) Usage: Table 41.7. Positional arguments Value Summary <identity-provider> Identity provider(s) to delete Table 41.8. Command arguments Value Summary -h, --help Show this help message and exit 41.3. identity provider list List identity providers Usage: Table 41.9. Command arguments Value Summary -h, --help Show this help message and exit --id <id> The identity providers' id attribute --enabled The identity providers that are enabled will be returned Table 41.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 41.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 41.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 41.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 41.4. identity provider set Set identity provider properties Usage: Table 41.14. 
Positional arguments Value Summary <identity-provider> Identity provider to modify Table 41.15. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Set identity provider description --remote-id <remote-id> Remote ids to associate with the identity provider (repeat option to provide multiple values) --remote-id-file <file-name> Name of a file that contains many remote ids to associate with the identity provider, one per line --enable Enable the identity provider --disable Disable the identity provider 41.5. identity provider show Display identity provider details Usage: Table 41.16. Positional arguments Value Summary <identity-provider> Identity provider to display Table 41.17. Command arguments Value Summary -h, --help Show this help message and exit Table 41.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 41.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 41.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 41.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
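An illustrative invocation using the options documented above (the provider name, domain, and remote IDs are hypothetical):

# Create an identity provider with two remote IDs in a dedicated domain
openstack identity provider create \
  --remote-id https://idp.example.com/idp/shibboleth \
  --remote-id https://sso.example.com/auth/realms/example \
  --domain federated_domain \
  --description "Example SAML identity provider" \
  myidp

# Review the new provider and list all enabled providers
openstack identity provider show myidp
openstack identity provider list --enabled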
[ "openstack identity provider create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--remote-id <remote-id> | --remote-id-file <file-name>] [--description <description>] [--domain <domain>] [--enable | --disable] <name>", "openstack identity provider delete [-h] <identity-provider> [<identity-provider> ...]", "openstack identity provider list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--id <id>] [--enabled]", "openstack identity provider set [-h] [--description <description>] [--remote-id <remote-id> | --remote-id-file <file-name>] [--enable | --disable] <identity-provider>", "openstack identity provider show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <identity-provider>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/identity
Appendix A. Frequently asked questions
Appendix A. Frequently asked questions A.1. Questions related to the Cluster Operator A.1.1. Why do I need cluster administrator privileges to install AMQ Streams? To install AMQ Streams, you need to be able to create the following cluster-scoped resources: Custom Resource Definitions (CRDs) to instruct OpenShift about resources that are specific to AMQ Streams, such as Kafka and KafkaConnect ClusterRoles and ClusterRoleBindings Cluster-scoped resources, which are not scoped to a particular OpenShift namespace, typically require cluster administrator privileges to install. As a cluster administrator, you can inspect all the resources being installed (in the /install/ directory) to ensure that the ClusterRoles do not grant unnecessary privileges. After installation, the Cluster Operator runs as a regular Deployment , so any standard (non-admin) OpenShift user with privileges to access the Deployment can configure it. The cluster administrator can grant standard users the privileges necessary to manage Kafka custom resources. See also: Why does the Cluster Operator need to create ClusterRoleBindings ? Can standard OpenShift users create Kafka custom resources? A.1.2. Why does the Cluster Operator need to create ClusterRoleBindings ? OpenShift has built-in privilege escalation prevention , which means that the Cluster Operator cannot grant privileges it does not have itself, specifically, it cannot grant such privileges in a namespace it cannot access. Therefore, the Cluster Operator must have the privileges necessary for all the components it orchestrates. The Cluster Operator needs to be able to grant access so that: The Topic Operator can manage KafkaTopics , by creating Roles and RoleBindings in the namespace that the operator runs in The User Operator can manage KafkaUsers , by creating Roles and RoleBindings in the namespace that the operator runs in The failure domain of a Node is discovered by AMQ Streams, by creating a ClusterRoleBinding When using rack-aware partition assignment, the broker pod needs to be able to get information about the Node it is running on, for example, the Availability Zone in Amazon AWS. A Node is a cluster-scoped resource, so access to it can only be granted through a ClusterRoleBinding , not a namespace-scoped RoleBinding . A.1.3. Can standard OpenShift users create Kafka custom resources? By default, standard OpenShift users will not have the privileges necessary to manage the custom resources handled by the Cluster Operator. The cluster administrator can grant a user the necessary privileges using OpenShift RBAC resources. For more information, see Designating AMQ Streams administrators in the Deploying and Upgrading AMQ Streams on OpenShift guide. A.1.4. What do the failed to acquire lock warnings in the log mean? For each cluster, the Cluster Operator executes only one operation at a time. The Cluster Operator uses locks to make sure that there are never two parallel operations running for the same cluster. Other operations must wait until the current operation completes before the lock is released. INFO Examples of cluster operations include cluster creation , rolling update , scale down , and scale up . 
If the waiting time for the lock takes too long, the operation times out and the following warning message is printed to the log: 2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster Depending on the exact configuration of STRIMZI_FULL_RECONCILIATION_INTERVAL_MS and STRIMZI_OPERATION_TIMEOUT_MS , this warning message might appear occasionally without indicating any underlying issues. Operations that time out are picked up in the periodic reconciliation, so that the operation can acquire the lock and execute again. Should this message appear periodically, even in situations when there should be no other operations running for a given cluster, it might indicate that the lock was not properly released due to an error. If this is the case, try restarting the Cluster Operator. A.1.5. Why is hostname verification failing when connecting to NodePorts using TLS? Currently, off-cluster access using NodePorts with TLS encryption enabled does not support TLS hostname verification. As a result, the clients that verify the hostname will fail to connect. For example, the Java client will fail with the following exception: Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more To connect, you must disable hostname verification. In the Java client, you can do this by setting the configuration option ssl.endpoint.identification.algorithm to an empty string. When configuring the client using a properties file, you can do it this way: ssl.endpoint.identification.algorithm= When configuring the client directly in Java, set the configuration option to an empty string: props.put("ssl.endpoint.identification.algorithm", "");
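As a hedged illustration of the properties-file approach above, the same setting can be supplied to the console consumer shipped with Kafka; the truststore details, bootstrap address, and topic name are placeholders for your own values.

# client.properties: TLS settings with hostname verification disabled
security.protocol=SSL
ssl.truststore.location=/tmp/truststore.jks
ssl.truststore.password=changeit
ssl.endpoint.identification.algorithm=

# Pass the file to the console consumer when connecting through the NodePort
bin/kafka-console-consumer.sh \
  --bootstrap-server <node-address>:<node-port> \
  --topic my-topic \
  --consumer.config client.properties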
[ "2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster", "Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more", "ssl.endpoint.identification.algorithm=", "props.put(\"ssl.endpoint.identification.algorithm\", \"\");" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_openshift/appendix-faq_str
probe::tty.release
probe::tty.release
Name
probe::tty.release - Called when the tty is closed
Synopsis
tty.release
Values
inode_flags: the inode flags
file_flags: the file flags
file_name: the file name
inode_state: the inode state
inode_number: the inode number
file_mode: the file mode
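A minimal sketch of using this probe from the command line, printing the file name, inode number, and file mode each time a tty is closed. It assumes SystemTap and the matching kernel debuginfo packages are installed and that the command is run as root or as a member of the stapdev group; the output format is illustrative only:
stap -e 'probe tty.release { printf("%s released (inode %d, mode %d)\n", file_name, inode_number, file_mode) }'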
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tty-release
Chapter 53. JSONPath
Chapter 53. JSONPath Camel supports JSONPath to allow using Expression or Predicate on JSON messages. 53.1. Dependencies When using jsonpath with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jsonpath-starter</artifactId> </dependency> 53.2. JSONPath Options The JSONPath language supports 8 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the result type (type from output). suppressExceptions Boolean Whether to suppress exceptions such as PathNotFoundException. allowSimple Boolean Whether to allow in inlined Simple exceptions in the JSONPath expression. allowEasyPredicate Boolean Whether to allow using the easy predicate parser to pre-parse predicates. writeAsString Boolean Whether to write the output of each row/element as a JSON String value instead of a Map/POJO value. headerName String Name of header to use as input, instead of the message body. option Enum To configure additional options on JSONPath. Multiple values can be separated by comma. Enum values: DEFAULT_PATH_LEAF_TO_NULL ALWAYS_RETURN_LIST AS_PATH_LIST SUPPRESS_EXCEPTIONS REQUIRE_PROPERTIES trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 53.3. Examples For example, you can use JSONPath in a Predicate with the Content Based Router EIP . from("queue:books.new") .choice() .when().jsonpath("USD.store.book[?(@.price < 10)]") .to("jms:queue:book.cheap") .when().jsonpath("USD.store.book[?(@.price < 30)]") .to("jms:queue:book.average") .otherwise() .to("jms:queue:book.expensive"); And in XML DSL: <route> <from uri="direct:start"/> <choice> <when> <jsonpath>USD.store.book[?(@.price &lt; 10)]</jsonpath> <to uri="mock:cheap"/> </when> <when> <jsonpath>USD.store.book[?(@.price &lt; 30)]</jsonpath> <to uri="mock:average"/> </when> <otherwise> <to uri="mock:expensive"/> </otherwise> </choice> </route> 53.4. JSONPath Syntax Using the JSONPath syntax takes some time to learn, even for basic predicates. So for example to find out all the cheap books you have to do: USD.store.book[?(@.price < 20)] 53.4.1. Easy JSONPath Syntax However, what if you could just write it as: store.book.price < 20 And you can omit the path if you just want to look at nodes with a price key: price < 20 To support this there is a EasyPredicateParser which kicks-in if you have defined the predicate using a basic style. That means the predicate must not start with the USD sign, and only include one operator. The easy syntax is: left OP right You can use Camel simple language in the right operator, eg: store.book.price < USD{header.limit} See the JSONPath project page for more syntax examples. 53.5. Supported message body types Camel JSonPath supports message body using the following types: Type Comment File Reading from files String Plain strings Map Message bodies as java.util.Map types List Message bodies as java.util.List types POJO Optional If Jackson is on the classpath, then camel-jsonpath is able to use Jackson to read the message body as POJO and convert to java.util.Map which is supported by JSonPath. For example, you can add camel-jackson as dependency to include Jackson. InputStream If none of the above types matches, then Camel will attempt to read the message body as a java.io.InputStream . 
If a message body is of unsupported type then an exception is thrown by default, however you can configure JSonPath to suppress exceptions (see below) 53.6. Suppressing exceptions By default, jsonpath will throw an exception if the json payload does not have a valid path accordingly to the configured jsonpath expression. In some use-cases you may want to ignore this in case the json payload contains optional data. Therefore, you can set the option suppressExceptions to true to ignore this as shown: from("direct:start") .choice() // use true to suppress exceptions .when().jsonpath("person.middlename", true) .to("mock:middle") .otherwise() .to("mock:other"); And in XML DSL: <route> <from uri="direct:start"/> <choice> <when> <jsonpath suppressExceptions="true">person.middlename</jsonpath> <to uri="mock:middle"/> </when> <otherwise> <to uri="mock:other"/> </otherwise> </choice> </route> This option is also available on the @JsonPath annotation. 53.7. Inline Simple expressions It's possible to inlined Simple language in the JSONPath expression using the simple syntax USD{xxx} . An example is shown below: from("direct:start") .choice() .when().jsonpath("USD.store.book[?(@.price < USD{header.cheap})]") .to("mock:cheap") .when().jsonpath("USD.store.book[?(@.price < USD{header.average})]") .to("mock:average") .otherwise() .to("mock:expensive"); And in XML DSL: <route> <from uri="direct:start"/> <choice> <when> <jsonpath>USD.store.book[?(@.price &lt; USD{header.cheap})]</jsonpath> <to uri="mock:cheap"/> </when> <when> <jsonpath>USD.store.book[?(@.price &lt; USD{header.average})]</jsonpath> <to uri="mock:average"/> </when> <otherwise> <to uri="mock:expensive"/> </otherwise> </choice> </route> You can turn off support for inlined Simple expression by setting the option allowSimple to false as shown: .when().jsonpath("USD.store.book[?(@.price < 10)]", false, false) And in XML DSL: <jsonpath allowSimple="false">USD.store.book[?(@.price &lt; 10)]</jsonpath> 53.8. JSonPath injection You can use Bean Integration to invoke a method on a bean and use various languages such as JSONPath (via the @JsonPath annotation) to extract a value from the message and bind it to a method parameter, as shown below: public class Foo { @Consume("activemq:queue:books.new") public void doSomething(@JsonPath("USD.store.book[*].author") String author, @Body String json) { // process the inbound message here } } 53.9. Encoding Detection The encoding of the JSON document is detected automatically, if the document is encoded in unicode (UTF-8, UTF-16LE, UTF-16BE, UTF-32LE, UTF-32BE ) as specified in RFC-4627. If the encoding is a non-unicode encoding, you can either make sure that you enter the document in String format to JSONPath, or you can specify the encoding in the header CamelJsonPathJsonEncoding which is defined as a constant in: JsonpathConstants.HEADER_JSON_ENCODING . 53.10. Split JSON data into sub rows as JSON You can use JSONPath to split a JSON document, such as: from("direct:start") .split().jsonpath("USD.store.book[*]") .to("log:book"); Then each book is logged, however the message body is a Map instance. Sometimes you may want to output this as plain String JSON value instead, which can be done with the writeAsString option as shown: from("direct:start") .split().jsonpathWriteAsString("USD.store.book[*]") .to("log:book"); Then each book is logged as a String JSON value. 53.11. Using header as input By default, JSONPath uses the message body as the input source. 
However, you can also use a header as input by specifying the headerName option. For example to count the number of books from a JSON document that was stored in a header named books you can do: from("direct:start") .setHeader("numberOfBooks") .jsonpath("USD..store.book.length()", false, int.class, "books") .to("mock:result"); In the jsonpath expression above we specify the name of the header as books , and we also told that we wanted the result to be converted as an integer by int.class . The same example in XML DSL would be: <route> <from uri="direct:start"/> <setHeader name="numberOfBooks"> <jsonpath headerName="books" resultType="int">USD..store.book.length()</jsonpath> </setHeader> <to uri="mock:result"/> </route> 53.12. Spring Boot Auto-Configuration The component supports 8 options, which are listed below. Name Description Default Type camel.language.jsonpath.allow-easy-predicate Whether to allow using the easy predicate parser to pre-parse predicates. true Boolean camel.language.jsonpath.allow-simple Whether to allow in inlined Simple exceptions in the JSONPath expression. true Boolean camel.language.jsonpath.enabled Whether to enable auto configuration of the jsonpath language. This is enabled by default. Boolean camel.language.jsonpath.header-name Name of header to use as input, instead of the message body. String camel.language.jsonpath.option To configure additional options on JSONPath. Multiple values can be separated by comma. String camel.language.jsonpath.suppress-exceptions Whether to suppress exceptions such as PathNotFoundException. false Boolean camel.language.jsonpath.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.jsonpath.write-as-string Whether to write the output of each row/element as a JSON String value instead of a Map/POJO value. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jsonpath-starter</artifactId> </dependency>", "from(\"queue:books.new\") .choice() .when().jsonpath(\"USD.store.book[?(@.price < 10)]\") .to(\"jms:queue:book.cheap\") .when().jsonpath(\"USD.store.book[?(@.price < 30)]\") .to(\"jms:queue:book.average\") .otherwise() .to(\"jms:queue:book.expensive\");", "<route> <from uri=\"direct:start\"/> <choice> <when> <jsonpath>USD.store.book[?(@.price &lt; 10)]</jsonpath> <to uri=\"mock:cheap\"/> </when> <when> <jsonpath>USD.store.book[?(@.price &lt; 30)]</jsonpath> <to uri=\"mock:average\"/> </when> <otherwise> <to uri=\"mock:expensive\"/> </otherwise> </choice> </route>", "USD.store.book[?(@.price < 20)]", "store.book.price < 20", "price < 20", "left OP right", "store.book.price < USD{header.limit}", "from(\"direct:start\") .choice() // use true to suppress exceptions .when().jsonpath(\"person.middlename\", true) .to(\"mock:middle\") .otherwise() .to(\"mock:other\");", "<route> <from uri=\"direct:start\"/> <choice> <when> <jsonpath suppressExceptions=\"true\">person.middlename</jsonpath> <to uri=\"mock:middle\"/> </when> <otherwise> <to uri=\"mock:other\"/> </otherwise> </choice> </route>", "from(\"direct:start\") .choice() .when().jsonpath(\"USD.store.book[?(@.price < USD{header.cheap})]\") .to(\"mock:cheap\") .when().jsonpath(\"USD.store.book[?(@.price < USD{header.average})]\") .to(\"mock:average\") .otherwise() .to(\"mock:expensive\");", "<route> <from uri=\"direct:start\"/> <choice> <when> <jsonpath>USD.store.book[?(@.price &lt; USD{header.cheap})]</jsonpath> <to uri=\"mock:cheap\"/> </when> <when> <jsonpath>USD.store.book[?(@.price &lt; USD{header.average})]</jsonpath> <to uri=\"mock:average\"/> </when> <otherwise> <to uri=\"mock:expensive\"/> </otherwise> </choice> </route>", ".when().jsonpath(\"USD.store.book[?(@.price < 10)]\", false, false)", "<jsonpath allowSimple=\"false\">USD.store.book[?(@.price &lt; 10)]</jsonpath>", "public class Foo { @Consume(\"activemq:queue:books.new\") public void doSomething(@JsonPath(\"USD.store.book[*].author\") String author, @Body String json) { // process the inbound message here } }", "from(\"direct:start\") .split().jsonpath(\"USD.store.book[*]\") .to(\"log:book\");", "from(\"direct:start\") .split().jsonpathWriteAsString(\"USD.store.book[*]\") .to(\"log:book\");", "from(\"direct:start\") .setHeader(\"numberOfBooks\") .jsonpath(\"USD..store.book.length()\", false, int.class, \"books\") .to(\"mock:result\");", "<route> <from uri=\"direct:start\"/> <setHeader name=\"numberOfBooks\"> <jsonpath headerName=\"books\" resultType=\"int\">USD..store.book.length()</jsonpath> </setHeader> <to uri=\"mock:result\"/> </route>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jsonpath-language-starter
probe::nfs.aop.write_end
probe::nfs.aop.write_end
Name
probe::nfs.aop.write_end - NFS client completes writing data
Synopsis
nfs.aop.write_end
Values
sb_flag: super block flags
__page: the address of the page
page_index: offset within the mapping; can be used as a page identifier and position identifier in the page frame
to: end address of this write operation
ino: inode number
i_flag: file flags
size: write bytes
dev: device identifier
offset: start address of this write operation
i_size: file length in bytes
Description
Fires when a write operation is performed on NFS, often after prepare_write. Updates and possibly writes a cached page of an NFS file.
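A minimal command-line sketch that traces this probe and prints the inode, offset, and size of each completed write. It assumes SystemTap and the matching kernel debuginfo packages are installed and that the command is run as root or as a member of the stapdev group:
stap -e 'probe nfs.aop.write_end { printf("ino=%d offset=%d size=%d\n", ino, offset, size) }'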
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-aop-write-end
7.6.3. Related Books
7.6.3. Related Books The Concise Guide to XFree86 for Linux by Aron Hsiao; Que - Provides an expert's view of the operation of XFree86 on Linux systems. The New XFree86 by Bill Ball; Prima Publishing - Discusses XFree86 and its relationship with the popular desktop environments, such as GNOME and KDE. Beginning GTK+ and GNOME by Peter Wright; Wrox Press, Inc. - Introduces programmers to the GNOME architecture, showing them how to get started with GTK+. GTK+/GNOME Application Development by Havoc Pennington; New Riders Publishing - An advanced look into the heart of GTK+ programming, focusing on sample code and a thorough look at the available APIs. KDE 2.0 Development by David Sweet and Matthias Ettrich; Sams Publishing - Instructs beginning and advanced developers on taking advantage of the many environment guidelines required to build Qt applications for KDE.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-x-related-books
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/planning_your_automation_jobs_using_the_automation_savings_planner/making-open-source-more-inclusive
Support
Support OpenShift Container Platform 4.12 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/support/index
A.6. Virtualization Logs
A.6. Virtualization Logs The following methods can be used to access logged data about events on your hypervisor and your guests. This can be helpful when troubleshooting virtualization on your system. Each guest has a log, saved in the /var/log/libvirt/qemu/ directory. The logs are named GuestName.log and are periodically compressed when a size limit is reached. To view libvirt events in the systemd Journal, use the following command: # journalctl _SYSTEMD_UNIT=libvirtd.service The auvirt command displays audit results related to guests on your hypervisor. The displayed data can be narrowed down by selecting specific guests, time frame, and information format. For example, the following command provides a summary of events on the testguest virtual machine on the current day. You can also configure auvirt information to be automatically included in the systemd Journal. To do so, edit the /etc/libvirt/libvirtd.conf file and set the value of the audit_logging parameter to 1. For more information, see the auvirt man page. If you encounter any errors with the Virtual Machine Manager, you can review the generated data in the virt-manager.log file in the $HOME/.virt-manager/ directory. For audit logs of the hypervisor system, see the /var/log/audit/audit.log file. Depending on the guest operating system, various system log files may also be saved on the guest. For more information about logging in Red Hat Enterprise Linux, see the System Administrator's Guide.
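The following sketch combines the approaches above to narrow down recent events; the guest name testguest and the time window are examples only:
# journalctl _SYSTEMD_UNIT=libvirtd.service --since "1 hour ago"
# tail -f /var/log/libvirt/qemu/testguest.log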
[ "auvirt --start today --vm testguest --summary Range of time for report: Mon Sep 4 16:44 - Mon Sep 4 17:04 Number of guest starts: 2 Number of guest stops: 1 Number of resource assignments: 14 Number of related AVCs: 0 Number of related anomalies: 0 Number of host shutdowns: 0 Number of failed operations: 0" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Troubleshooting-Virtualization_log_files
B.4. Specifying Directory Entries Using LDIF
B.4. Specifying Directory Entries Using LDIF Many types of entries can be stored in the directory. This section concentrates on three of the most common types of entries used in a directory: domain, organizational unit, and organizational person entries. The object classes defined for an entry are what indicate whether the entry represents a domain or domain component, an organizational unit, an organizational person, or some other type of entry. For a complete list of the object classes that can be used by default in the directory and a list of the most commonly used attributes, see the Red Hat Directory Server 11 Configuration, Command, and File Reference . B.4.1. Specifying Domain Entries Directories often have at least one domain entry. Typically this is the first, or topmost, entry in the directory. The domain entry often corresponds to the DNS host and domain name for your directory. For example, if the Directory Server host is called ldap.example.com , then the domain entry for the directory is probably named dc=ldap,dc=example,dc=com or simply dc=example,dc=com . The LDIF entry used to define a domain appears as follows: The following is a sample domain entry in LDIF format: Each element of the LDIF-formatted domain entry is defined in Table B.2, "LDIF Elements in Domain Entries" . Table B.2. LDIF Elements in Domain Entries LDIF Element Description dn: distinguished_name Required. Specifies the distinguished name for the entry. objectClass: top Required. Specifies the top object class. objectClass: domain Specifies the domain object class. This line defines the entry as a domain or domain component. See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes that can be used with this object class. --> dc: domain_component Attribute that specifies the domain's name. The server is typically configured during the initial setup to have a suffix or naming context in the form dc= hostname, dc= domain, dc= toplevel . For example, dc=ldap,dc=example,dc=com . The domain entry should use the leftmost dc value, such as dc: ldap . If the suffix were dc=example,dc=com , the dc value is dc: example . Do not create the entry for dn: dc=com unless the server has been configured to use that suffix. list_of_attributes Specifies the list of optional attributes to maintain for the entry. See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes that can be used with this object class. B.4.2. Specifying Organizational Unit Entries Organizational unit entries are often used to represent major branch points, or subdirectories, in the directory tree. They correspond to major, reasonably static entities within the enterprise, such as a subtree that contains people or a subtree that contains groups. The organizational unit attribute that is contained in the entry may also represent a major organization within the company, such as marketing or engineering. However, this style is discouraged. Red Hat strongly encourages using a flat directory tree. There is usually more than one organizational unit, or branch point, within a directory tree. The LDIF that defines an organizational unit entry must appear as follows: The following is a sample organizational unit entry in LDIF format: Table B.3, "LDIF Elements in Organizational Unit Entries" defines each element of the LDIF-formatted organizational unit entry. Table B.3. 
LDIF Elements in Organizational Unit Entries LDIF Element Description dn: distinguished_name Specifies the distinguished name for the entry. A DN is required. If there is a comma in the DN, the comma must be escaped with a backslash (\), such as dn: ou=people,dc=example,dc=com . objectClass: top Required. Specifies the top object class. objectClass: organizationalUnit Specifies the organizationalUnit object class. This line defines the entry as an organizational unit . See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes available for this object class. ou: organizational_unit_name Attribute that specifies the organizational unit's name. list_of_attributes Specifies the list of optional attributes to maintain for the entry. See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes available for this object class. B.4.3. Specifying Organizational Person Entries The majority of the entries in the directory represent organizational people. In LDIF, the definition of an organizational person is as follows: The following is an example organizational person entry in LDIF format: Table B.4, "LDIF Elements in Person Entries" defines each aspect of the LDIF person entry. Table B.4. LDIF Elements in Person Entries LDIF Element Description dn: distinguished_name Required. Specifies the distinguished name for the entry. For example, dn: uid=bjensen,ou=people,dc=example,dc=com . If there is a comma in the DN, the comma must be escaped with a backslash (\). objectClass: top Required. Specifies the top object class. objectClass: person Specifies the person object class. This object class specification should be included because many LDAP clients require it during search operations for a person or an organizational person. objectClass: organizationalPerson Specifies the organizationalPerson object class. This object class specification should be included because some LDAP clients require it during search operations for an organizational person. objectClass: inetOrgPerson Specifies the inetOrgPerson object class. The inetOrgPerson object class is recommended for the creation of an organizational person entry because this object class includes the widest range of attributes. The uid attribute is required by this object class, and entries that contain this object class are named based on the value of the uid attribute. See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes available for this object class. cn: common_name Specifies the person's common name, which is the full name commonly used by the person. For example, cn: Bill Anderson . At least one common name is required. sn: surname Specifies the person's surname, or last name. For example, sn: Anderson . A surname is required. list_of_attributes Specifies the list of optional attributes to maintain for the entry. See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes available for this object class.
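Once the entries are prepared in an LDIF file, they can be loaded into the directory with a standard LDAP client, as in the sketch below. The host name, bind DN, and file name are placeholders, and your deployment may use a different port or a secure connection:
ldapmodify -a -x -D "cn=Directory Manager" -W -H ldap://server.example.com -f example_entries.ldif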
[ "dn: distinguished_name objectClass: top objectClass: domain dc: domain_component_name list_of_optional_attributes", "dn: dc=example,dc=com objectclass: top objectclass: domain dc: example description: Fictional example company", "dn: distinguished_name objectClass: top objectClass: organizationalUnit ou: organizational_unit_name list_of_optional_attributes", "dn: ou=people,dc=example,dc=com objectclass: top objectclass: organizationalUnit ou: people description: Fictional example organizational unit", "dn: distinguished_name objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: common_name sn: surname list_of_optional_attributes", "dn: uid=bjensen,ou=people,dc=example,dc=com objectclass: top objectclass: person objectclass: organizationalPerson objectclass: inetOrgPerson cn: Babs Jensen sn: Jensen givenname: Babs uid: bjensen ou: people description: Fictional example person telephoneNumber: 555-5557 userPassword: {SSHA}dkfljlk34r2kljdsfk9" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/ldap_data_interchange_format-specifying_directory_entries_using_ldif
Chapter 15. Deleting applications
Chapter 15. Deleting applications You can delete applications created in your project. 15.1. Deleting applications using the Developer perspective You can delete an application and all of its associated components using the Topology view in the Developer perspective: Click the application you want to delete to see the side panel with the resource details of the application. Click the Actions drop-down menu displayed on the upper right of the panel, and select Delete Application to see a confirmation dialog box. Enter the name of the application and click Delete to delete it. You can also right-click the application you want to delete and click Delete Application to delete it.
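A command-line alternative is sketched below. It assumes the application's resources are grouped with the app.kubernetes.io/part-of label, which is how the Topology view typically groups components; verify the labels in your project before deleting anything:
$ oc delete all -l app.kubernetes.io/part-of=<application_name> -n <project_name>
Note that oc delete all covers only a standard set of resource types; other resources, such as config maps or persistent volume claims, must be deleted separately.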
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/building_applications/odc-deleting-applications
Upgrade Guide
Upgrade Guide Red Hat Virtualization 4.3 Update and upgrade tasks for Red Hat Virtualization Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract A comprehensive guide to upgrading and updating components in a Red Hat Virtualization environment.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/upgrade_guide/index
5.9.8. Creating RAID Arrays
5.9.8. Creating RAID Arrays In addition to supporting hardware RAID solutions, Red Hat Enterprise Linux supports software RAID. There are two ways that software RAID arrays can be created: While installing Red Hat Enterprise Linux After Red Hat Enterprise Linux has been installed The following sections review these two methods. 5.9.8.1. While Installing Red Hat Enterprise Linux During the normal Red Hat Enterprise Linux installation process, RAID arrays can be created. This is done during the disk partitioning phase of the installation. To begin, you must manually partition your disk drives using Disk Druid. You must first create a new partition of the type "software RAID." Next, select the disk drives that you want to be part of the RAID array in the Allowable Drives field. Continue by selecting the desired size and whether you want the partition to be a primary partition. Once you have created all the partitions required for the RAID array(s) that you want to create, you must then use the RAID button to actually create the arrays. You are then presented with a dialog box where you can select the array's mount point, file system type, RAID device name, RAID level, and the "software RAID" partitions on which this array is to be based. Once the desired arrays have been created, the installation process continues as usual. Note For more information on creating software RAID arrays during the Red Hat Enterprise Linux installation process, refer to the System Administrators Guide .
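For the second method, creating a software RAID array after installation, one common approach uses the mdadm utility, as sketched below. The device names, RAID level, and file system are assumptions and must be adjusted to match your system:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mkfs.ext3 /dev/md0
# mdadm --detail /dev/md0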
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-storage-raid-credel
Chapter 11. OpenStack configuration test
Chapter 11. OpenStack configuration test Starting with Red Hat OpenStack Platform (RHOSP) 15, the OpenStack Configuration test is a new test available to Partners. This test is part of the Manila and Neutron plugin certification test plan, and is planned only for the controller node. The OpenStack Configuration test includes and supports the Open Virtual Network subtest. Open Virtual Network subtest Open Virtual Network (OVN) is an Open vSwitch-based software-defined networking (SDN) solution that supplies network services to instances. OVN is expected to be configured and operational. This subtest validates the status of OVN. Success criteria The status of the test depends on the OVN state as follows: If OVN is ON, the test status is PASS. If OVN is OFF, the test status is FAIL. If OVN is unknown, the test status is REVIEW. 11.1. Additional resources For more information about OVN, see OVN Architecture.
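The commands below are an illustrative sketch of how the OVN state might be inspected manually on a controller node; the exact container and service names vary between RHOSP releases, so treat them as assumptions:
$ sudo podman ps --filter name=ovn
$ sudo ovs-vsctl get Open_vSwitch . external_ids:ovn-remote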
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_policy_guide/assembly-openstack-configuration-test_rhosp-pol-openstack-director
Chapter 3. Getting started
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites You must complete the installation procedure for your environment. You must have an AMQP 1.0 message broker listening for connections on interface localhost and port 5672 . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named examples . For more information, see Creating a queue . 3.2. Running Hello World on Red Hat Enterprise Linux The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Change to the examples directory and run the helloworld.py example. USD cd /usr/share/proton/examples/python/ USD python helloworld.py Hello World! 3.3. Running Hello World on Microsoft Windows The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Download and run the Hello World example. > curl -o helloworld.py https://raw.githubusercontent.com/apache/qpid-proton/master/python/examples/helloworld.py > python helloworld.py Hello World!
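Before running the example, you can confirm that the broker prerequisite is met by checking that something is listening on port 5672. The ss utility used below is only one way to do this and assumes a Linux host:
$ ss -ltn | grep ':5672'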
[ "cd /usr/share/proton/examples/python/ python helloworld.py Hello World!", "> curl -o helloworld.py https://raw.githubusercontent.com/apache/qpid-proton/master/python/examples/helloworld.py > python helloworld.py Hello World!" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_python_client/getting_started
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/proc_providing-feedback-on-red-hat-documentation
Chapter 2. Configuring SAP HANA System Replication
Chapter 2. Configuring SAP HANA System Replication Before the HA cluster can be configured, SAP HANA System Replication must be configured and tested according to the guidelines from SAP: SAP HANA System Replication: Configuration . The following example shows how to enable SAP HANA System Replication on the nodes that will later become part of the HA cluster that will manage the SAP HANA System Replication setup. Please refer to RHEL for SAP Subscriptions and Repositories , for more information on how to ensure the correct subscription and repos are enabled on each HA cluster node. SAP HANA configuration used in the example: SID: RH1 Instance Number: 02 node1 FQDN: node1.example.com node2 FQDN: node2.example.com node1 SAP HANA site name: DC1 node2 SAP HANA site name: DC2 SAP HANA 'SYSTEM' user password: <HANA_SYSTEM_PASSWORD> SAP HANA administrative user: rh1adm 2.1. Prerequisites Ensure that both systems can resolve the FQDN of both systems without issues. To ensure that FQDNs can be resolved even without DNS you can place them into /etc/hosts like in the example below: [root]# cat /etc/hosts ... 192.168.0.11 node1.example.com node1 192.168.0.12 node2.example.com node2 Note As documented at hostname | SAP Help Portal SAP HANA only supports hostnames with lowercase characters. For the system replication to work, the SAP HANA log_mode variable must be set to normal. Please refer to SAP Note 3221437 - System replication is failed due to "Connection refused: Primary has to run in log mode normal for system replication!" , for more information. This can be verified as the SAP HANA administrative user using the command below on both nodes. [rh1adm]USD hdbsql -u system -p <HANA_SYSTEM_PASSWORD> -i 02 "select value from "SYS"."M_INIFILE_CONTENTS" where key='log_mode'" VALUE "normal" 1 row selected A lot of the configuration steps are performed by the SAP HANA administrative user for the SID that was selected during installation. For the example setup described in this document, the user id rh1adm is used for the SAP HANA administrative user, since the SID used is RH1 . To switch from the root user to the SAP HANA administrative user, you can use the following command: [root]# sudo -i -u rh1adm [rh1adm]USD 2.2. Performing an initial SAP HANA database backup SAP HANA System Replication will only work after an initial backup has been performed on the HANA instance that will be the primary instance for the SAP HANA System Replication setup. The following shows an example for creating an initial backup in /tmp/foo directory. Please note that the size of the backup depends on the database size and may take some time to complete. The directory to which the backup will be placed must be writable by the SAP HANA administrative user. On single-tenant SAP HANA setups, the following command can be used to create the initial backup: [rh1adm]USD hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> "BACKUP DATA USING FILE ('/tmp/foo')" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec) On multi-tenant SAP HANA setups, the SYSTEMDB and all tenant databases need to be backed up. 
The following example shows how to backup the SYSTEMDB : [rh1adm]USD hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> -d SYSTEMDB "BACKUP DATA USING FILE ('/tmp/foo')" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec) [rh1adm]# hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> -d SYSTEMDB "BACKUP DATA FOR RH1 USING FILE ('/tmp/foo-RH1')" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec) Please check the SAP HANA documentation on how to backup the tenant databases. 2.3. Configuring the SAP HANA primary replication instance After the initial backup has been successfully completed, initialize SAP HANA System Replication with the following command: [rh1adm]USD hdbnsutil -sr_enable --name=DC1 checking for active nameserver ... nameserver is active, proceeding ... successfully enabled system as system replication source site done. Verify that after the initialization the SAP HANA System Replication status shows the current node as 'primary': [rh1adm]#USD hdbnsutil -sr_state checking for active or inactive nameserver ... System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: primary site id: 1 site name: DC1 Host Mappings: 2.4. Configuring the SAP HANA secondary replication instance After installing the secondary SAP HANA instance on the other HA cluster node using the same SID and instance number as the SAP HANA primary instance, it needs to be registered to the already running SAP HANA primary instance. The SAP HANA instance that will become the secondary replication instance needs to be stopped first before it can be registered to the primary instance: [rh1adm]USD HDB stop When the secondary SAP HANA instance has been stopped, copy the SAP HANA system PKI SSFS_RH1.KEY and SSFS_RH1.DAT files from the primary SAP HANA instance to the secondary SAP HANA instance: [rh1adm]USD scp root@node1:/usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1.KEY /usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1.KEY ... [rh1adm]USD scp root@node1:/usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH1.DAT /usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH1.DAT ... Please refer to SAP Note 2369981 - Required configuration steps for authentication with HANA System Replication , for more information. Now the SAP HANA secondary replication instance can be registered to the SAP HANA primary replication instance with the following command: [rh1adm]USD hdbnsutil -sr_register --remoteHost=node1 --remoteInstance=02 --replicationMode=syncmem --operationMode=logreplay --name=DC2 adding site ... checking for inactive nameserver ... nameserver node2:30201 not responding. collecting information ... updating local ini files ... done. Please choose the values for replicationMode and operationMode according to your requirements for HANA System Replication. Please refer to Replication Modes for SAP HANA System Replication and Operation Modes for SAP HANA System Replication , for more information. When the registration is successful, the SAP HANA secondary replication instance can be started again: [rh1adm]USD HDB start Verify that the secondary node is running and that 'mode' matches the value used for the replicationMode parameter in the hdbnsutil -sr_register command. If registration was successful, the SAP HANA System Replication status on the SAP HANA secondary replication instance should look similar to the following: [rh1adm]USD hdbnsutil -sr_state checking for active or inactive nameserver ... 
System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: syncmem site id: 2 site name: DC2 active primary site: 1 Host Mappings: ~~~~~~~~~~~~~~ node2 -> [DC1] node1 node2 -> [DC2] node2 2.5. Checking SAP HANA System Replication state To check the current state of SAP HANA System Replication, you can use the systemReplicationStatus.py Python script provided by SAP HANA as the SAP HANA administrative user on the current primary SAP HANA node. On single tenant SAP HANA setups, the output should look similar to the following: [rh1adm]USD python /usr/sap/RH1/HDB02/exe/python_support/systemReplicationStatus.py | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | ----- | ----- | ------------ | --------- | ------- | --------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | node1 | 30201 | nameserver | 1 | 1 | DC1 | node2 | 30201 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | node1 | 30207 | xsengine | 2 | 1 | DC1 | node2 | 30207 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | node1 | 30203 | indexserver | 3 | 1 | DC1 | node2 | 30203 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | status system replication site "2": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 On multi-tenant SAP HANA setups, the output should look similar to the following: [rh1adm]USD python /usr/sap/RH1/HDB02/exe/python_support/systemReplicationStatus.py | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | -------- | ----- | ----- | ------------ | --------- | ------- | --------- | ----------| --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | SYSTEMDB | node1 | 30201 | nameserver | 1 | 1 | DC1 | node2 | 30201 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | RH1 | node1 | 30207 | xsengine | 2 | 1 | DC1 | node2 | 30207 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | RH1 | node1 | 30203 | indexserver | 3 | 1 | DC1 | node2 | 30203 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | status system replication site "2": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 2.6. Testing SAP HANA System Replication The test phase is a very important phase to verify if the KPIs are met and the landscape performs the way it was configured. If the SAP HANA System Replication setup does not work as expected without the HA cluster, it can lead to unexpected behavior when the HA cluster is configured later on to manage the SAP HANA System Replication setup. Therefore, a few test cases are suggested below as guidelines, which should be enhanced by your specific requirements. The tests should be performed with realistic data loads and sizes. 
Test case Description Full Replication Measure how long the initial synchronization takes, from when Lost Connection Measure how long it takes until primary and secondary are Takeover Measure how long it takes for the secondary system to be fully Data Consistency Create or change data, then perform a takeover and check if the data is still available. Client Reconnect Test client access after a take-over, to check if the DNS/Virtual IP switch worked. Primary becomes secondary Measure how long it takes until both systems are in sync, when the former primary becomes the secondary after a takeover. Please refer to section "9. Testing", in How To Perform System Replication for SAP HANA , for more information.
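For the Takeover and Primary becomes secondary test cases, the takeover itself can be triggered manually as the SAP HANA administrative user on the secondary node, as sketched below. This assumes the HA cluster is not yet managing the instances and that replication is in sync; check the state afterwards before reregistering the former primary:
[rh1adm]$ hdbnsutil -sr_takeover
[rh1adm]$ hdbnsutil -sr_state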
[ "SID: RH1 Instance Number: 02 node1 FQDN: node1.example.com node2 FQDN: node2.example.com node1 SAP HANA site name: DC1 node2 SAP HANA site name: DC2 SAP HANA 'SYSTEM' user password: <HANA_SYSTEM_PASSWORD> SAP HANA administrative user: rh1adm", "cat /etc/hosts 192.168.0.11 node1.example.com node1 192.168.0.12 node2.example.com node2", "[rh1adm]USD hdbsql -u system -p <HANA_SYSTEM_PASSWORD> -i 02 \"select value from \"SYS\".\"M_INIFILE_CONTENTS\" where key='log_mode'\" VALUE \"normal\" 1 row selected", "sudo -i -u rh1adm [rh1adm]USD", "[rh1adm]USD hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> \"BACKUP DATA USING FILE ('/tmp/foo')\" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec)", "[rh1adm]USD hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> -d SYSTEMDB \"BACKUP DATA USING FILE ('/tmp/foo')\" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec) hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> -d SYSTEMDB \"BACKUP DATA FOR RH1 USING FILE ('/tmp/foo-RH1')\" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec)", "[rh1adm]USD hdbnsutil -sr_enable --name=DC1 checking for active nameserver nameserver is active, proceeding successfully enabled system as system replication source site done.", "USD hdbnsutil -sr_state checking for active or inactive nameserver System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: primary site id: 1 site name: DC1 Host Mappings:", "[rh1adm]USD HDB stop", "[rh1adm]USD scp root@node1:/usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1.KEY /usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1.KEY [rh1adm]USD scp root@node1:/usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH1.DAT /usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH1.DAT", "[rh1adm]USD hdbnsutil -sr_register --remoteHost=node1 --remoteInstance=02 --replicationMode=syncmem --operationMode=logreplay --name=DC2 adding site checking for inactive nameserver nameserver node2:30201 not responding. 
collecting information updating local ini files done.", "[rh1adm]USD HDB start", "[rh1adm]USD hdbnsutil -sr_state checking for active or inactive nameserver System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: syncmem site id: 2 site name: DC2 active primary site: 1 Host Mappings: ~~~~~~~~~~~~~~ node2 -> [DC1] node1 node2 -> [DC2] node2", "[rh1adm]USD python /usr/sap/RH1/HDB02/exe/python_support/systemReplicationStatus.py | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | ----- | ----- | ------------ | --------- | ------- | --------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | node1 | 30201 | nameserver | 1 | 1 | DC1 | node2 | 30201 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | node1 | 30207 | xsengine | 2 | 1 | DC1 | node2 | 30207 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | node1 | 30203 | indexserver | 3 | 1 | DC1 | node2 | 30203 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1", "[rh1adm]USD python /usr/sap/RH1/HDB02/exe/python_support/systemReplicationStatus.py | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | -------- | ----- | ----- | ------------ | --------- | ------- | --------- | ----------| --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | SYSTEMDB | node1 | 30201 | nameserver | 1 | 1 | DC1 | node2 | 30201 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | RH1 | node1 | 30207 | xsengine | 2 | 1 | DC1 | node2 | 30207 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | RH1 | node1 | 30203 | indexserver | 3 | 1 | DC1 | node2 | 30203 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/automating_sap_hana_scale-up_system_replication_using_the_rhel_ha_add-on/asmb_configure_sap_hana_replication_automating-sap-hana-scale-up-system-replication
Chapter 4. OAuthClientAuthorization [oauth.openshift.io/v1]
Chapter 4. OAuthClientAuthorization [oauth.openshift.io/v1] Description OAuthClientAuthorization describes an authorization created by an OAuth client Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources clientName string ClientName references the client that created this authorization kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata scopes array (string) Scopes is an array of the granted scopes. userName string UserName is the user name that authorized this client userUID string UserUID is the unique UID associated with this authorization. UserUID and UserName must both match for this authorization to be valid. 4.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthclientauthorizations DELETE : delete collection of OAuthClientAuthorization GET : list or watch objects of kind OAuthClientAuthorization POST : create an OAuthClientAuthorization /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations GET : watch individual changes to a list of OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthclientauthorizations/{name} DELETE : delete an OAuthClientAuthorization GET : read the specified OAuthClientAuthorization PATCH : partially update the specified OAuthClientAuthorization PUT : replace the specified OAuthClientAuthorization /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations/{name} GET : watch changes to an object of kind OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/oauth.openshift.io/v1/oauthclientauthorizations HTTP method DELETE Description delete collection of OAuthClientAuthorization Table 4.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthClientAuthorization Table 4.3. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorizationList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthClientAuthorization Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body OAuthClientAuthorization schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 201 - Created OAuthClientAuthorization schema 202 - Accepted OAuthClientAuthorization schema 401 - Unauthorized Empty 4.2.2. /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations HTTP method GET Description watch individual changes to a list of OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead. Table 4.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/oauth.openshift.io/v1/oauthclientauthorizations/{name} Table 4.8. Global path parameters Parameter Type Description name string name of the OAuthClientAuthorization HTTP method DELETE Description delete an OAuthClientAuthorization Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthClientAuthorization Table 4.11. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthClientAuthorization Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 201 - Created OAuthClientAuthorization schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthClientAuthorization Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. Body parameters Parameter Type Description body OAuthClientAuthorization schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 201 - Created OAuthClientAuthorization schema 401 - Unauthorized Empty 4.2.4. /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations/{name} Table 4.17. Global path parameters Parameter Type Description name string name of the OAuthClientAuthorization HTTP method GET Description watch changes to an object of kind OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
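These endpoints can also be reached through the OpenShift CLI, as sketched below; <name> is a placeholder for an existing OAuthClientAuthorization object in your cluster:
$ oc get oauthclientauthorizations
$ oc describe oauthclientauthorization <name>
$ oc delete oauthclientauthorization <name>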
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/oauth_apis/oauthclientauthorization-oauth-openshift-io-v1
Chapter 12. Enable signup
Chapter 12. Enable signup Configure developer signup in either self-service or manual mode. The signup workflow for developers can be self-service or by manual invite only: self-service signups are performed by developers through the Developer Portal, while manual invites are handled by your admins through the Admin Portal. By default, developers can sign up by themselves. If you enable developer self-service, you can require admin approval before a developer account is activated. To do so, navigate to Audience > Accounts > Settings > Usage Rules .
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/enable-signup
3.4. Managing Users via Command-Line Tools
3.4. Managing Users via Command-Line Tools When managing users from the command line, the following commands are used: useradd , usermod , userdel , or passwd . The files affected include /etc/passwd , which stores user account information, and /etc/shadow , which stores secure user account information. 3.4.1. Creating Users The useradd utility creates new users and adds them to the system. By following the short procedure below, you create a default user account with its UID, automatically create a home directory ( /home/ username / ) where default user settings are stored, and set the default shell to /bin/bash . Run the following command at a shell prompt as root, substituting username with the name of your choice: Set a password to unlock the account and make it accessible. Type the password twice when the program prompts you to. Example 3.1. Creating a User with Default Settings Running the useradd robert command creates an account named robert . If you run cat /etc/passwd to view the content of the /etc/passwd file, you can learn more about the new user from the line displayed to you: robert has been assigned a UID of 502, which reflects the rule that the default UID values from 0 to 499 are typically reserved for system accounts. The GID, the group ID of the User Private Group , equals the UID. The home directory is set to /home/robert and the login shell to /bin/bash . The letter x signals that shadow passwords are used and that the hashed password is stored in /etc/shadow . If you want to change the basic default setup for the user while creating the account, you can choose from a list of command-line options that modify the behavior of useradd (see the useradd (8) man page for the whole list of options). As you can see from the basic syntax of the command, you can add one or more options: As a system administrator, you can use the -c option to specify, for example, the full name of the user when creating them. Use -c followed by a string, which adds a comment to the user: Example 3.2. Specifying a User's Full Name when Creating a User A user account has been created with user name robert , sometimes called the login name, and full name Robert Smith. If you do not want to create the default /home/ username / directory for the user account, specify a different one instead. Execute the command below: Example 3.3. Adding a User with non-default Home Directory robert 's home directory is now not the default /home/robert but /home/dir_1/ . If you do not want to create the home directory for the user at all, you can do so by running useradd with the -M option. However, when such a user logs into a system that has just booted and their home directory does not exist, their login directory will be the root directory. If such a user logs into a system using the su command, their login directory will be the current directory of the user. If you need to copy the content of a directory to the /home directory while creating a new user, make use of the -m and -k options together, followed by the path. Example 3.4. Creating a User while Copying Contents to the Home Directory The following command copies the contents of a directory named /dir_1 to /home/jane , which is the default home directory of a new user jane : As a system administrator, you may need to create a temporary account. Using the useradd command, this means creating an account for a certain amount of time only and disabling it at a certain date.
This is a particularly useful setting because there is no security risk resulting from forgetting to delete a certain account. For this, use the -e option with the specified expire_date in the YYYY-MM-DD format. Note Do not confuse account expiration with password expiration. Account expiration is a particular date after which it is impossible to log in to the account in any way, as the account no longer exists. Password expiration, determined by the maximum password age and the date of password creation or last password change, is the date after which it is not possible to log in using the password (but other ways exist, such as logging in using an SSH key). Example 3.5. Setting the Account Expiration Date The account emily is created now and automatically disabled on 5 November, 2015. A user's login shell defaults to /bin/bash , but it can be changed with the -s option to any other shell, for example ksh, csh, or tcsh. Example 3.6. Adding a User with Non-default Shell This command creates the user robert , who has the /bin/ksh shell. The -r option creates a system account, which is an account for administrative use that has some, but not all, root privileges. Such accounts have a UID lower than the value of UID_MIN defined in /etc/login.defs ; UID values of 500 and above are typically reserved for ordinary users.
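To make the distinction between account expiration and password expiration concrete, the chage utility can be used to review or adjust the aging settings of an existing account; the user name below reuses the example account from this section.
# chage -l emily              # list password aging information and the "Account expires" date
# chage -E 2015-11-05 emily   # set or change the account expiration date after creation
In this sketch, the "Account expires" field reported by chage -l corresponds to the date set with useradd -e, while the password aging fields govern password expiration only and are independent of it.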
[ "useradd username", "passwd", "~]# useradd robert ~]# passwd robert Changing password for user robert New password: Re-type new password: passwd: all authentication tokens updated successfully.", "robert:x:502:502::/home/robert:/bin/bash", "useradd [ option(s) ] username", "useradd -c \" string \" username", "~]# useradd -c \"Robert Smith\" robert ~]# cat /etc/passwd robert:x:502:502:Robert Smith:/home/robert:/bin/bash", "useradd -d home_directory", "~]# useradd -d /home/dir_1 robert", "useradd -M username", "~]# useradd -m -k /dir_1 jane", "useradd -e YYYY-MM-DD username", "~]# useradd -e 2015-11-05 emily", "useradd -s login_shell username", "~]# useradd -s /bin/ksh robert", "useradd -r username" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-users-cl-tools
Chapter 3. OpenShift Data Foundation operators
Chapter 3. OpenShift Data Foundation operators Red Hat OpenShift Data Foundation is comprised of the following three Operator Lifecycle Manager (OLM) operator bundles, deploying four operators which codify administrative tasks and custom resources so that task and resource characteristics can be easily automated: OpenShift Data Foundation odf-operator OpenShift Container Storage ocs-operator rook-ceph-operator Multicloud Object Gateway mcg-operator Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state or approaching that state, with minimal administrator intervention. 3.1. OpenShift Data Foundation operator The odf-operator can be described as a "meta" operator for OpenShift Data Foundation, that is, an operator meant to influence other operators. The odf-operator has the following primary functions: Enforces the configuration and versioning of the other operators that comprise OpenShift Data Foundation. It does this by using two primary mechanisms: operator dependencies and Subscription management. The odf-operator bundle specifies dependencies on other OLM operators to make sure they are always installed at specific versions. The operator itself manages the Subscriptions for all other operators to make sure the desired versions of those operators are available for installation by the OLM. Provides the OpenShift Data Foundation external plugin for the OpenShift Console. Provides an API to integrate storage solutions with the OpenShift Console. 3.1.1. Components The odf-operator has a dependency on the ocs-operator package. It also manages the Subscription of the mcg-operator . In addition, the odf-operator bundle defines a second Deployment for the OpenShift Data Foundation external plugin for the OpenShift Console. This defines an nginx -based Pod that serves the necessary files to register and integrate OpenShift Data Foundation dashboards directly into the OpenShift Container Platform Console. 3.1.2. Design diagram This diagram illustrates how odf-operator is integrated with the OpenShift Container Platform. Figure 3.1. OpenShift Data Foundation Operator 3.1.3. Responsibilites The odf-operator defines the following CRD: StorageSystem The StorageSystem CRD represents an underlying storage system that provides data storage and services for OpenShift Container Platform. It triggers the operator to ensure the existence of a Subscription for a given Kind of storage system. 3.1.4. Resources The ocs-operator creates the following CRs in response to the spec of a given StorageSystem. Operator Lifecycle Manager Resources Creates a Subscription for the operator which defines and reconciles the given StorageSystem's Kind. 3.1.5. Limitation The odf-operator does not provide any data storage or services itself. It exists as an integration and management layer for other storage systems. 3.1.6. High availability High availability is not a primary requirement for the odf-operator Pod similar to most of the other operators. In general, there are no operations that require or benefit from process distribution. OpenShift Container Platform quickly spins up a replacement Pod whenever the current Pod becomes unavailable or is deleted. 3.1.7. Relevant config files The odf-operator comes with a ConfigMap of variables that can be used to modify the behavior of the operator. 3.1.8. 
Relevant log files To get an understanding of the OpenShift Data Foundation and troubleshoot issues, you can look at the following: Operator Pod logs StorageSystem status Underlying storage system CRD statuses Operator Pod logs Each operator provides standard Pod logs that include information about reconciliation and errors encountered. These logs often have information about successful reconciliation which can be filtered out and ignored. StorageSystem status and events The StorageSystem CR stores the reconciliation details in the status of the CR and has associated events. The spec of the StorageSystem contains the name, namespace, and Kind of the actual storage system's CRD, which the administrator can use to find further information on the status of the storage system. 3.1.9. Lifecycle The odf-operator is required to be present as long as the OpenShift Data Foundation bundle remains installed. This is managed as part of OLM's reconciliation of the OpenShift Data Foundation CSV. At least one instance of the pod should be in Ready state. The operator operands such as CRDs should not affect the lifecycle of the operator. The creation and deletion of StorageSystems is an operation outside the operator's control and must be initiated by the administrator or automated with the appropriate application programming interface (API) calls. 3.2. OpenShift Container Storage operator The ocs-operator can be described as a "meta" operator for OpenShift Data Foundation, that is, an operator meant to influence other operators and serves as a configuration gateway for the features provided by the other operators. It does not directly manage the other operators. The ocs-operator has the following primary functions: Creates Custom Resources (CRs) that trigger the other operators to reconcile against them. Abstracts the Ceph and Multicloud Object Gateway configurations and limits them to known best practices that are validated and supported by Red Hat. Creates and reconciles the resources required to deploy containerized Ceph and NooBaa according to the support policies. 3.2.1. Components The ocs-operator does not have any dependent components. However, the operator has a dependency on the existence of all the custom resource definitions (CRDs) from other operators, which are defined in the ClusterServiceVersion (CSV). 3.2.2. Design diagram This diagram illustrates how OpenShift Container Storage is integrated with the OpenShift Container Platform. Figure 3.2. OpenShift Container Storage Operator 3.2.3. Responsibilities The two ocs-operator CRDs are: OCSInitialization StorageCluster OCSInitialization is a singleton CRD used for encapsulating operations that apply at the operator level. The operator takes care of ensuring that one instance always exists. The CR triggers the following: Performs initialization tasks required for OpenShift Container Storage. If needed, these tasks can be triggered to run again by deleting the OCSInitialization CRD. Ensures that the required Security Context Constraints (SCCs) for OpenShift Container Storage are present. Manages the deployment of the Ceph toolbox Pod, used for performing advanced troubleshooting and recovery operations. The StorageCluster CRD represents the system that provides the full functionality of OpenShift Container Storage. It triggers the operator to ensure the generation and reconciliation of Rook-Ceph and NooBaa CRDs. The ocs-operator algorithmically generates the CephCluster and NooBaa CRDs based on the configuration in the StorageCluster spec. 
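For instance, once a StorageCluster exists, the CRs that the ocs-operator generates from it can be listed directly; this is only an illustrative sketch, and the namespace is assumed to be the default openshift-storage.
$ oc get storagecluster -n openshift-storage       # the StorageCluster CR reconciled by ocs-operator
$ oc get cephcluster,noobaa -n openshift-storage   # CRs generated from the StorageCluster spec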
The operator also creates additional CRs, such as CephBlockPools , Routes , and so on. These resources are required for enabling different features of OpenShift Container Storage. Currently, only one StorageCluster CR per OpenShift Container Platform cluster is supported. 3.2.4. Resources The ocs-operator creates the following CRs in response to the spec of the CRDs it defines. The configuration of some of these resources can be overridden, allowing for changes to the generated spec or not creating them altogether. General resources Events Creates various events when required in response to reconciliation. Persistent Volumes (PVs) PVs are not created directly by the operator. However, the operator keeps track of all the PVs created by the Ceph CSI drivers and ensures that the PVs have appropriate annotations for the supported features. Quickstarts Deploys various Quickstart CRs for the OpenShift Container Platform Console. Rook-Ceph resources CephBlockPool Define the default Ceph block pools. CephFilesystem Define the default Ceph filesystem. Route Define the default route for the Ceph object store. StorageClass Define the default Storage classes, for example, for CephBlockPool and CephFilesystem . VolumeSnapshotClass Define the default volume snapshot classes for the corresponding storage classes. Multicloud Object Gateway resources NooBaa Define the default Multicloud Object Gateway system. Monitoring resources Metrics Exporter Service Metrics Exporter Service Monitor PrometheusRules 3.2.5. Limitation The ocs-operator neither deploys nor reconciles the other Pods of OpenShift Data Foundation. The ocs-operator CSV defines the top-level components such as operator Deployments, and the Operator Lifecycle Manager (OLM) reconciles the specified component. 3.2.6. High availability High availability is not a primary requirement for the ocs-operator Pod, similar to most of the other operators. In general, there are no operations that require or benefit from process distribution. OpenShift Container Platform quickly spins up a replacement Pod whenever the current Pod becomes unavailable or is deleted. 3.2.7. Relevant config files The ocs-operator configuration is entirely specified by the CSV and is not modifiable without a custom build of the CSV. 3.2.8. Relevant log files To get an understanding of OpenShift Container Storage and to troubleshoot issues, you can look at the following: Operator Pod logs StorageCluster status and events OCSInitialization status Operator Pod logs Each operator provides standard Pod logs that include information about reconciliation and errors encountered. These logs often have information about successful reconciliation, which can be filtered out and ignored. StorageCluster status and events The StorageCluster CR stores the reconciliation details in the status of the CR and has associated events. Status contains a section of the expected container images. It shows the container images that it expects to be present in the pods from other operators and the images that it currently detects. This helps to determine whether the OpenShift Container Storage upgrade is complete. OCSInitialization status This status shows whether the initialization tasks are completed successfully. 3.2.9. Lifecycle The ocs-operator is required to be present as long as the OpenShift Container Storage bundle remains installed. This is managed as part of OLM's reconciliation of the OpenShift Container Storage CSV. At least one instance of the pod should be in Ready state.
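A quick, hedged way to confirm this from the CLI is to check the operator Pods, their logs, and the initialization status; the namespace and deployment name are assumptions based on a default installation and may differ in your environment.
$ oc get pods -n openshift-storage | grep -E 'ocs-operator|odf-operator'   # operator Pods should be Running and Ready
$ oc logs deployment/ocs-operator -n openshift-storage                     # ocs-operator Pod logs (deployment name assumed)
$ oc get ocsinitialization -n openshift-storage -o yaml                    # status of the initialization tasks described above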
The operator operands such as CRDs should not affect the lifecycle of the operator. An OCSInitialization CR should always exist. The operator creates one if it does not exist. The creation and deletion of StorageClusters is an operation outside the operator's control and must be initiated by the administrator or automated with the appropriate API calls. 3.3. Rook-Ceph operator Rook-Ceph operator is the Rook operator for Ceph in the OpenShift Data Foundation. Rook enables Ceph storage systems to run on the OpenShift Container Platform. The Rook-Ceph operator is a simple container that automatically bootstraps the storage clusters and monitors the storage daemons to ensure the storage clusters are healthy. 3.3.1. Components The Rook-Ceph operator manages a number of components as part of the OpenShift Data Foundation deployment. Ceph-CSI Driver The operator creates and updates the CSI driver, including a provisioner for each of the two drivers, RADOS block device (RBD) and Ceph filesystem (CephFS) and a volume plugin daemonset for each of the two drivers. Ceph daemons Mons The monitors (mons) provide the core metadata store for Ceph. OSDs The object storage daemons (OSDs) store the data on underlying devices. Mgr The manager (mgr) collects metrics and provides other internal functions for Ceph. RGW The RADOS Gateway (RGW) provides the S3 endpoint to the object store. MDS The metadata server (MDS) provides CephFS shared volumes. 3.3.2. Design diagram The following image illustrates how Ceph Rook integrates with OpenShift Container Platform. Figure 3.3. Rook-Ceph Operator With Ceph running in the OpenShift Container Platform cluster, OpenShift Container Platform applications can mount block devices and filesystems managed by Rook-Ceph, or can use the S3/Swift API for object storage. 3.3.3. Responsibilities The Rook-Ceph operator is a container that bootstraps and monitors the storage cluster. It performs the following functions: Automates the configuration of storage components Starts, monitors, and manages the Ceph monitor pods and Ceph OSD daemons to provide the RADOS storage cluster Initializes the pods and other artifacts to run the services to manage: CRDs for pools Object stores (S3/Swift) Filesystems Monitors the Ceph mons and OSDs to ensure that the storage remains available and healthy Deploys and manages Ceph mons placement while adjusting the mon configuration based on cluster size Watches the desired state changes requested by the API service and applies the changes Initializes the Ceph-CSI drivers that are needed for consuming the storage Automatically configures the Ceph-CSI driver to mount the storage to pods Rook-Ceph Operator architecture The Rook-Ceph operator image includes all required tools to manage the cluster. There is no change to the data path. However, the operator does not expose all Ceph configurations. Many of the Ceph features like placement groups and crush maps are hidden from the users and are provided with a better user experience in terms of physical resources, pools, volumes, filesystems, and buckets. 3.3.4. Resources Rook-Ceph operator adds owner references to all the resources it creates in the openshift-storage namespace. When the cluster is uninstalled, the owner references ensure that the resources are all cleaned up. This includes OpenShift Container Platform resources such as configmaps , secrets , services , deployments , daemonsets , and so on. 
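As a small illustration of the resources described above, the Rook-Ceph operator Pod and the Ceph cluster it manages in the openshift-storage namespace can be listed as follows; the label selector follows upstream Rook conventions and is an assumption here.
$ oc get pods -n openshift-storage -l app=rook-ceph-operator   # the Rook-Ceph operator Pod
$ oc get cephcluster -n openshift-storage                      # the CephCluster CR it reconciles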
The Rook-Ceph operator watches CRs to configure the settings determined by OpenShift Data Foundation, which includes CephCluster , CephObjectStore , CephFilesystem , and CephBlockPool . 3.3.5. Lifecycle Rook-Ceph operator manages the lifecycle of the following pods in the Ceph cluster: Rook operator A single pod that owns the reconcile of the cluster. RBD CSI Driver Two provisioner pods, managed by a single deployment. One plugin pod per node, managed by a daemonset . CephFS CSI Driver Two provisioner pods, managed by a single deployment. One plugin pod per node, managed by a daemonset . Monitors (mons) Three mon pods, each with its own deployment. Stretch clusters Contain five mon pods, one in the arbiter zone and two in each of the other two data zones. Manager (mgr) There is a single mgr pod for the cluster. Stretch clusters There are two mgr pods (starting with OpenShift Data Foundation 4.8), one in each of the two non-arbiter zones. Object storage daemons (OSDs) At least three OSDs are created initially in the cluster. More OSDs are added when the cluster is expanded. Metadata server (MDS) The CephFS metadata server has a single pod. RADOS gateway (RGW) The Ceph RGW daemon has a single pod. 3.4. MCG operator The Multicloud Object Gateway (MCG) operator is an operator for OpenShift Data Foundation along with the OpenShift Data Foundation operator and the Rook-Ceph operator. The MCG operator is available upstream as a standalone operator. The MCG operator performs the following primary functions: Controls and reconciles the Multicloud Object Gateway (MCG) component within OpenShift Data Foundation. Manages new user resources such as object bucket claims, bucket classes, and backing stores. Creates the default out-of-the-box resources. A few configurations and information are passed to the MCG operator through the OpenShift Data Foundation operator. 3.4.1. Components The MCG operator does not have sub-components. However, it consists of a reconcile loop for the different resources that are controlled by it. The MCG operator has a command-line interface (CLI) and is available as a part of OpenShift Data Foundation. It enables the creation, deletion, and querying of various resources. This CLI adds a layer of input sanitation and status validation before the configurations are applied unlike applying a YAML file directly. 3.4.2. Responsibilities and resources The MCG operator reconciles and is responsible for the custom resource definitions (CRDs) and OpenShift Container Platform entities. Backing store Namespace store Bucket class Object bucket claims (OBCs) NooBaa, pod stateful sets CRD Prometheus Rules and Service Monitoring Horizontal pod autoscaler (HPA) Backing store A resource that the customer has connected to the MCG component. This resource provides MCG the ability to save the data of the provisioned buckets on top of it. A default backing store is created as part of the deployment depending on the platform that the OpenShift Container Platform is running on. For example, when OpenShift Container Platform or OpenShift Data Foundation is deployed on Amazon Web Services (AWS), it results in a default backing store which is an AWS::S3 bucket. Similarly, for Microsoft Azure, the default backing store is a blob container and so on. The default backing stores are created using CRDs for the cloud credential operator, which comes with OpenShift Container Platform. There is no limit on the amount of the backing stores that can be added to MCG. 
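To make the backing store concept concrete, the following is a minimal sketch of a BackingStore CR for an AWS S3 bucket; the names, target bucket, and credentials Secret are placeholders, and the exact fields should be checked against the documentation for your OpenShift Data Foundation version.
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: example-backingstore          # placeholder name
  namespace: openshift-storage
spec:
  type: aws-s3
  awsS3:
    targetBucket: example-target-bucket   # existing S3 bucket used to store MCG data
    secret:
      name: example-aws-credentials       # Secret holding the AWS access key ID and secret access key
      namespace: openshift-storage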
The backing stores are used in the bucket class CRD to define the different policies of the bucket. Refer to the documentation of the specific OpenShift Data Foundation version to identify the types of services or resources supported as backing stores. Namespace store Resources that are used in namespace buckets. No default is created during deployment. Bucketclass A default or initial policy for a newly provisioned bucket. The following policies are set in a bucketclass: Placement policy Indicates the backing stores to be attached to the bucket and used to write the data of the bucket. This policy is used for data buckets and for cache policies to indicate the local cache placement. There are two modes of placement policy: Spread. Stripes the data across the defined backing stores. Mirror. Creates a full replica on each backing store. Namespace policy A policy for the namespace buckets that defines the resources that are being used for aggregation and the resource used for the write target. Cache policy This is a policy for the bucket and sets the hub (the source of truth) and the time to live (TTL) for the cache items. A default bucket class is created during deployment and it is set with a placement policy that uses the default backing store. There is no limit to the number of bucket classes that can be added. Refer to the documentation of the specific OpenShift Data Foundation version to identify the types of policies that are supported. Object bucket claims (OBCs) CRDs that enable provisioning of S3 buckets. With MCG, OBCs receive an optional bucket class to note the initial configuration of the bucket. If a bucket class is not provided, the default bucket class is used. NooBaa, pod stateful sets CRD An internal CRD that controls the different pods of the NooBaa deployment such as the DB pod, the core pod, and the endpoints. This CRD must not be changed as it is internal. This operator reconciles the following entities: DB pod SCC Role Binding and Service Account to allow single sign-on (SSO) between OpenShift Container Platform and NooBaa user interfaces Route for S3 access Certificates that are taken and signed by the OpenShift Container Platform and are set on the S3 route Prometheus rules and service monitoring These CRDs set up scraping points for Prometheus and alert rules that are supported by MCG. Horizontal pod autoscaler (HPA) It is integrated with the MCG endpoints. The endpoint pods scale up and down according to CPU pressure (amount of S3 traffic). 3.4.3. High availability As an operator, the only high availability provided is that the OpenShift Container Platform reschedules a failed pod. 3.4.4. Relevant log files To troubleshoot issues with the NooBaa operator, you can look at the following: Operator pod logs, which are also available through the must-gather. Different CRDs or entities and their statuses that are available through the must-gather. 3.4.5. Lifecycle The MCG operator runs and reconciles after OpenShift Data Foundation is deployed and until it is uninstalled.
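As an illustration of the object bucket claim flow described earlier in this section, a minimal ObjectBucketClaim might look like the following; the claim name and bucket prefix are placeholders, and the storage class name assumes the default MCG storage class created by OpenShift Data Foundation.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: example-obc                               # placeholder claim name
  namespace: openshift-storage
spec:
  generateBucketName: example-bucket              # prefix for the generated bucket name
  storageClassName: openshift-storage.noobaa.io   # assumed default MCG storage class
When the claim is bound, MCG creates a ConfigMap and a Secret with the same name as the claim, containing the S3 endpoint details and credentials for the new bucket.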
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_operators
Chapter 4. API index
Chapter 4. API index API API group AdminPolicyBasedExternalRoute k8s.ovn.org/v1 monitoring.openshift.io/v1 Alertmanager monitoring.coreos.com/v1 AlertmanagerConfig monitoring.coreos.com/v1beta1 monitoring.openshift.io/v1 APIRequestCount apiserver.openshift.io/v1 APIServer config.openshift.io/v1 APIService apiregistration.k8s.io/v1 AppliedClusterResourceQuota quota.openshift.io/v1 Authentication config.openshift.io/v1 Authentication operator.openshift.io/v1 BareMetalHost metal3.io/v1alpha1 Binding v1 BMCEventSubscription metal3.io/v1alpha1 BrokerTemplateInstance template.openshift.io/v1 Build build.openshift.io/v1 Build config.openshift.io/v1 BuildConfig build.openshift.io/v1 BuildLog build.openshift.io/v1 BuildRequest build.openshift.io/v1 CatalogSource operators.coreos.com/v1alpha1 CertificateSigningRequest certificates.k8s.io/v1 CloudCredential operator.openshift.io/v1 CloudPrivateIPConfig cloud.network.openshift.io/v1 ClusterAutoscaler autoscaling.openshift.io/v1 ClusterCSIDriver operator.openshift.io/v1 ClusterOperator config.openshift.io/v1 ClusterResourceQuota quota.openshift.io/v1 ClusterRole authorization.openshift.io/v1 ClusterRole rbac.authorization.k8s.io/v1 ClusterRoleBinding authorization.openshift.io/v1 ClusterRoleBinding rbac.authorization.k8s.io/v1 ClusterServiceVersion operators.coreos.com/v1alpha1 ClusterVersion config.openshift.io/v1 ComponentStatus v1 Config imageregistry.operator.openshift.io/v1 Config operator.openshift.io/v1 Config samples.operator.openshift.io/v1 ConfigMap v1 Console config.openshift.io/v1 Console operator.openshift.io/v1 ConsoleCLIDownload console.openshift.io/v1 ConsoleExternalLogLink console.openshift.io/v1 ConsoleLink console.openshift.io/v1 ConsoleNotification console.openshift.io/v1 ConsolePlugin console.openshift.io/v1 ConsoleQuickStart console.openshift.io/v1 ConsoleYAMLSample console.openshift.io/v1 ContainerRuntimeConfig machineconfiguration.openshift.io/v1 ControllerConfig machineconfiguration.openshift.io/v1 ControllerRevision apps/v1 ControlPlaneMachineSet machine.openshift.io/v1 CredentialsRequest cloudcredential.openshift.io/v1 CronJob batch/v1 CSIDriver storage.k8s.io/v1 CSINode storage.k8s.io/v1 CSISnapshotController operator.openshift.io/v1 CSIStorageCapacity storage.k8s.io/v1 CustomResourceDefinition apiextensions.k8s.io/v1 DaemonSet apps/v1 Deployment apps/v1 DeploymentConfig apps.openshift.io/v1 DeploymentConfigRollback apps.openshift.io/v1 DeploymentLog apps.openshift.io/v1 DeploymentRequest apps.openshift.io/v1 DNS config.openshift.io/v1 DNS operator.openshift.io/v1 DNSRecord ingress.operator.openshift.io/v1 EgressFirewall k8s.ovn.org/v1 EgressIP k8s.ovn.org/v1 EgressQoS k8s.ovn.org/v1 EgressRouter network.operator.openshift.io/v1 Endpoints v1 EndpointSlice discovery.k8s.io/v1 Etcd operator.openshift.io/v1 Event v1 Event events.k8s.io/v1 Eviction policy/v1 FeatureGate config.openshift.io/v1 FirmwareSchema metal3.io/v1alpha1 FlowSchema flowcontrol.apiserver.k8s.io/v1beta3 Group user.openshift.io/v1 HardwareData metal3.io/v1alpha1 HelmChartRepository helm.openshift.io/v1beta1 HorizontalPodAutoscaler autoscaling/v2 HostFirmwareSettings metal3.io/v1alpha1 Identity user.openshift.io/v1 Image config.openshift.io/v1 Image image.openshift.io/v1 ImageContentPolicy config.openshift.io/v1 ImageContentSourcePolicy operator.openshift.io/v1alpha1 ImageDigestMirrorSet config.openshift.io/v1 ImagePruner imageregistry.operator.openshift.io/v1 ImageSignature image.openshift.io/v1 ImageStream image.openshift.io/v1 ImageStreamImage 
image.openshift.io/v1 ImageStreamImport image.openshift.io/v1 ImageStreamLayers image.openshift.io/v1 ImageStreamMapping image.openshift.io/v1 ImageStreamTag image.openshift.io/v1 ImageTag image.openshift.io/v1 ImageTagMirrorSet config.openshift.io/v1 Infrastructure config.openshift.io/v1 Ingress config.openshift.io/v1 Ingress networking.k8s.io/v1 IngressClass networking.k8s.io/v1 IngressController operator.openshift.io/v1 InsightsOperator operator.openshift.io/v1 InstallPlan operators.coreos.com/v1alpha1 IPPool whereabouts.cni.cncf.io/v1alpha1 Job batch/v1 KubeAPIServer operator.openshift.io/v1 KubeControllerManager operator.openshift.io/v1 KubeletConfig machineconfiguration.openshift.io/v1 KubeScheduler operator.openshift.io/v1 KubeStorageVersionMigrator operator.openshift.io/v1 Lease coordination.k8s.io/v1 LimitRange v1 LocalResourceAccessReview authorization.openshift.io/v1 LocalSubjectAccessReview authorization.k8s.io/v1 LocalSubjectAccessReview authorization.openshift.io/v1 Machine machine.openshift.io/v1beta1 MachineAutoscaler autoscaling.openshift.io/v1beta1 MachineConfig machineconfiguration.openshift.io/v1 MachineConfigPool machineconfiguration.openshift.io/v1 MachineHealthCheck machine.openshift.io/v1beta1 MachineSet machine.openshift.io/v1beta1 Metal3Remediation infrastructure.cluster.x-k8s.io/v1beta1 Metal3RemediationTemplate infrastructure.cluster.x-k8s.io/v1beta1 MutatingWebhookConfiguration admissionregistration.k8s.io/v1 Namespace v1 Network config.openshift.io/v1 Network operator.openshift.io/v1 NetworkAttachmentDefinition k8s.cni.cncf.io/v1 NetworkPolicy networking.k8s.io/v1 Node v1 Node config.openshift.io/v1 OAuth config.openshift.io/v1 OAuthAccessToken oauth.openshift.io/v1 OAuthAuthorizeToken oauth.openshift.io/v1 OAuthClient oauth.openshift.io/v1 OAuthClientAuthorization oauth.openshift.io/v1 OLMConfig operators.coreos.com/v1 OpenShiftAPIServer operator.openshift.io/v1 OpenShiftControllerManager operator.openshift.io/v1 Operator operators.coreos.com/v1 OperatorCondition operators.coreos.com/v2 OperatorGroup operators.coreos.com/v1 OperatorHub config.openshift.io/v1 OperatorPKI network.operator.openshift.io/v1 OverlappingRangeIPReservation whereabouts.cni.cncf.io/v1alpha1 PackageManifest packages.operators.coreos.com/v1 PerformanceProfile performance.openshift.io/v2 PersistentVolume v1 PersistentVolumeClaim v1 Pod v1 PodDisruptionBudget policy/v1 PodMonitor monitoring.coreos.com/v1 PodNetworkConnectivityCheck controlplane.operator.openshift.io/v1alpha1 PodSecurityPolicyReview security.openshift.io/v1 PodSecurityPolicySelfSubjectReview security.openshift.io/v1 PodSecurityPolicySubjectReview security.openshift.io/v1 PodTemplate v1 PreprovisioningImage metal3.io/v1alpha1 PriorityClass scheduling.k8s.io/v1 PriorityLevelConfiguration flowcontrol.apiserver.k8s.io/v1beta3 Probe monitoring.coreos.com/v1 Profile tuned.openshift.io/v1 Project config.openshift.io/v1 Project project.openshift.io/v1 ProjectHelmChartRepository helm.openshift.io/v1beta1 ProjectRequest project.openshift.io/v1 Prometheus monitoring.coreos.com/v1 PrometheusRule monitoring.coreos.com/v1 Provisioning metal3.io/v1alpha1 Proxy config.openshift.io/v1 RangeAllocation security.openshift.io/v1 ReplicaSet apps/v1 ReplicationController v1 ResourceAccessReview authorization.openshift.io/v1 ResourceQuota v1 Role authorization.openshift.io/v1 Role rbac.authorization.k8s.io/v1 RoleBinding authorization.openshift.io/v1 RoleBinding rbac.authorization.k8s.io/v1 RoleBindingRestriction authorization.openshift.io/v1 
Route route.openshift.io/v1 RuntimeClass node.k8s.io/v1 Scale autoscaling/v1 Scheduler config.openshift.io/v1 Secret v1 SecretList image.openshift.io/v1 SecurityContextConstraints security.openshift.io/v1 SelfSubjectAccessReview authorization.k8s.io/v1 SelfSubjectRulesReview authorization.k8s.io/v1 SelfSubjectRulesReview authorization.openshift.io/v1 Service v1 ServiceAccount v1 ServiceCA operator.openshift.io/v1 ServiceMonitor monitoring.coreos.com/v1 StatefulSet apps/v1 Storage operator.openshift.io/v1 StorageClass storage.k8s.io/v1 StorageState migration.k8s.io/v1alpha1 StorageVersionMigration migration.k8s.io/v1alpha1 SubjectAccessReview authorization.k8s.io/v1 SubjectAccessReview authorization.openshift.io/v1 SubjectRulesReview authorization.openshift.io/v1 Subscription operators.coreos.com/v1alpha1 Template template.openshift.io/v1 TemplateInstance template.openshift.io/v1 ThanosRuler monitoring.coreos.com/v1 TokenRequest authentication.k8s.io/v1 TokenReview authentication.k8s.io/v1 Tuned tuned.openshift.io/v1 User user.openshift.io/v1 UserIdentityMapping user.openshift.io/v1 UserOAuthAccessToken oauth.openshift.io/v1 ValidatingWebhookConfiguration admissionregistration.k8s.io/v1 VolumeAttachment storage.k8s.io/v1 VolumeSnapshot snapshot.storage.k8s.io/v1 VolumeSnapshotClass snapshot.storage.k8s.io/v1 VolumeSnapshotContent snapshot.storage.k8s.io/v1
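The same index can be reproduced against a live cluster with the OpenShift CLI, for example:
$ oc api-resources                                              # list every resource kind and its API group
$ oc api-resources --api-group=config.openshift.io              # restrict the listing to a single API group
$ oc explain scheduler --api-version=config.openshift.io/v1     # show the schema for one of the listed kinds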
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/api_overview/api-index
Chapter 14. IngressClass [networking.k8s.io/v1]
Chapter 14. IngressClass [networking.k8s.io/v1] Description IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The ingressclass.kubernetes.io/is-default-class annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. Type object 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IngressClassSpec provides information about the class of an Ingress. 14.1.1. .spec Description IngressClassSpec provides information about the class of an Ingress. Type object Property Type Description controller string controller refers to the name of the controller that should handle this class. This allows for different "flavors" that are controlled by the same controller. For example, you may have different parameters for the same implementing controller. This should be specified as a domain-prefixed path no more than 250 characters in length, e.g. "acme.io/ingress-controller". This field is immutable. parameters object IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource. 14.1.2. .spec.parameters Description IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource. Type object Required kind name Property Type Description apiGroup string apiGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string kind is the type of resource being referenced. name string name is the name of resource being referenced. namespace string namespace is the namespace of the resource being referenced. This field is required when scope is set to "Namespace" and must be unset when scope is set to "Cluster". scope string scope represents if this refers to a cluster or namespace scoped resource. This may be set to "Cluster" (default) or "Namespace". 14.2. API endpoints The following API endpoints are available: /apis/networking.k8s.io/v1/ingressclasses DELETE : delete collection of IngressClass GET : list or watch objects of kind IngressClass POST : create an IngressClass /apis/networking.k8s.io/v1/watch/ingressclasses GET : watch individual changes to a list of IngressClass. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/networking.k8s.io/v1/ingressclasses/{name} DELETE : delete an IngressClass GET : read the specified IngressClass PATCH : partially update the specified IngressClass PUT : replace the specified IngressClass /apis/networking.k8s.io/v1/watch/ingressclasses/{name} GET : watch changes to an object of kind IngressClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 14.2.1. /apis/networking.k8s.io/v1/ingressclasses HTTP method DELETE Description delete collection of IngressClass Table 14.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 14.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind IngressClass Table 14.3. HTTP responses HTTP code Reponse body 200 - OK IngressClassList schema 401 - Unauthorized Empty HTTP method POST Description create an IngressClass Table 14.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.5. Body parameters Parameter Type Description body IngressClass schema Table 14.6. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 202 - Accepted IngressClass schema 401 - Unauthorized Empty 14.2.2. /apis/networking.k8s.io/v1/watch/ingressclasses HTTP method GET Description watch individual changes to a list of IngressClass. deprecated: use the 'watch' parameter with a list operation instead. Table 14.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 14.2.3. /apis/networking.k8s.io/v1/ingressclasses/{name} Table 14.8. Global path parameters Parameter Type Description name string name of the IngressClass HTTP method DELETE Description delete an IngressClass Table 14.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 14.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IngressClass Table 14.11. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IngressClass Table 14.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.13. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IngressClass Table 14.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.15. Body parameters Parameter Type Description body IngressClass schema Table 14.16. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 401 - Unauthorized Empty 14.2.4. /apis/networking.k8s.io/v1/watch/ingressclasses/{name} Table 14.17. 
Global path parameters Parameter Type Description name string name of the IngressClass HTTP method GET Description watch changes to an object of kind IngressClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 14.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
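For reference, a minimal IngressClass object that uses the default-class annotation described above might look like the following; the class name and controller value are placeholders.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class                                            # placeholder class name
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"          # mark this class as the cluster default
spec:
  controller: example.com/ingress-controller                     # domain-prefixed controller name, immutable
An Ingress created without spec.ingressClassName is then assigned this class automatically.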
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/ingressclass-networking-k8s-io-v1
5.143. libcgroup
5.143. libcgroup 5.143.1. RHBA-2012:0867 - libcgroup bug fix update Updated libcgroup packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libcgroup packages provide tools and libraries to control and monitor control groups. Bug Fix BZ# 758493 Prior to this update, a bug in the libcgroup package caused admin_id and admin_gid not to be displayed correctly because cgroup control files were not differentiated from the 'tasks' file. This update fixes this problem by adding a check in the 'cgroup_fill_cgc()' function to determine whether a file is a 'tasks' file or not. All users are advised to upgrade to these updated libcgroup packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libcgroup
Chapter 17. Migrating Red Hat Ansible Automation Platform to Red Hat Ansible Automation Platform Operator
Chapter 17. Migrating Red Hat Ansible Automation Platform to Red Hat Ansible Automation Platform Operator Migrating your Red Hat Ansible Automation Platform deployment to the Ansible Automation Platform Operator allows you to take advantage of the benefits provided by a Kubernetes native operator, including simplified upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments. Use these procedures to migrate any of the following deployments to the Ansible Automation Platform Operator: A VM-based installation of Ansible Tower 3.8.6, automation controller, or automation hub An Openshift instance of Ansible Tower 3.8.6 (Ansible Automation Platform 1.2) 17.1. Migration considerations If you are upgrading from Ansible Automation Platform 1.2 on OpenShift Container Platform 3 to Ansible Automation Platform 2.x on OpenShift Container Platform 4, you must provision a fresh OpenShift Container Platform version 4 cluster and then migrate the Ansible Automation Platform to the new cluster. 17.2. Preparing for migration Before migrating your current Ansible Automation Platform deployment to Ansible Automation Platform Operator, you need to back up your existing data, create k8s secrets for your secret key and postgresql configuration. Note If you are migrating both automation controller and automation hub instances, repeat the steps in Creating a secret key secret and Creating a postgresql configuration secret for both and then proceed to Migrating data to the Ansible Automation Platform Operator . 17.2.1. Migrating to Ansible Automation Platform Operator Prerequisites To migrate Ansible Automation Platform deployment to Ansible Automation Platform Operator, you must have the following: Secret key secret Postgresql configuration Role-based Access Control for the namespaces on the new OpenShift cluster The new OpenShift cluster must be able to connect to the PostgreSQL database Note You can store the secret key information in the inventory file before the initial Red Hat Ansible Automation Platform installation. If you are unable to remember your secret key or have trouble locating your inventory file, contact Ansible support through the Red Hat Customer portal. Before migrating your data from Ansible Automation Platform 2.x or earlier, you must back up your data for loss prevention. To backup your data, do the following: Procedure Log in to your current deployment project. Run setup.sh to create a backup of your current data or deployment: For on-prem deployments of version 2.x or earlier: USD ./setup.sh -b For OpenShift deployments before version 2.0 (non-operator deployments): ./setup_openshift.sh -b 17.2.2. Creating a secret key secret To migrate your data to Ansible Automation Platform Operator on OpenShift Container Platform, you must create a secret key. If you are migrating automation controller, automation hub, and Event-Driven Ansible you must have a secret key for each that matches the secret key defined in the inventory file during your initial installation. Otherwise, the migrated data remains encrypted and unusable after migration. Note When specifying the symmetric encryption secret key on the custom resources, note that for automation controller the field is called secret_key_name . But for automation hub and Event-Driven Ansible, the field is called db_fields_encryption_secret . 
Note In the Kubernetes secrets, automation controller and Event-Driven Ansible use the same stringData key ( secret_key ) but, automation hub uses a different key ( database_fields.symmetric.key ). Procedure Locate the old secret keys in the inventory file you used to deploy Ansible Automation Platform in your installation. Create a YAML file for your secret keys: --- apiVersion: v1 kind: Secret metadata: name: <controller-resourcename>-secret-key namespace: <target-namespace> stringData: secret_key: <replaceme-with-controller-secret> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: <eda-resourcename>-secret-key namespace: <target-namespace> stringData: secret_key: <replaceme-with-eda-secret> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: <hub-resourcename>-secret-key namespace: <target-namespace> stringData: database_fields.symmetric.key: <replace-me-withdb-fields-encryption-key> type: Opaque Note If admin_password_secret is not provided, the operator looks for a secret named <resourcename>-admin-password for the admin password. If it is not present, the operator generates a password and create a secret from it named <resourcename>-admin-password . Apply the secret key YAML to the cluster: oc apply -f <yaml-file> 17.2.3. Creating a postgresql configuration secret For migration to be successful, you must provide access to the database for your existing deployment. Procedure Create a yaml file for your postgresql configuration secret: apiVersion: v1 kind: Secret metadata: name: <resourcename>-old-postgres-configuration namespace: <target namespace> stringData: host: "<external ip or url resolvable by the cluster>" port: "<external port, this usually defaults to 5432>" database: "<desired database name>" username: "<username to connect as>" password: "<password to connect with>" type: Opaque Apply the postgresql configuration yaml to the cluster: oc apply -f <old-postgres-configuration.yml> 17.2.4. Verifying network connectivity To ensure successful migration of your data, verify that you have network connectivity from your new operator deployment to your old deployment database. Prerequisites Take note of the host and port information from your existing deployment. This information is located in the postgres.py file located in the conf.d directory. Procedure Create a yaml file to verify the connection between your new deployment and your old deployment database: apiVersion: v1 kind: Pod metadata: name: dbchecker spec: containers: - name: dbchecker image: registry.redhat.io/rhel8/postgresql-13:latest command: ["sleep"] args: ["600"] Apply the connection checker yaml file to your new project deployment: oc project ansible-automation-platform oc apply -f connection_checker.yaml Verify that the connection checker pod is running: oc get pods Connect to a pod shell: oc rsh dbchecker After the shell session opens in the pod, verify that the new project can connect to your old project cluster: pg_isready -h <old-host-address> -p <old-port-number> -U awx Example 17.3. Migrating data to the Ansible Automation Platform Operator After you have set your secret key, postgresql credentials, verified network connectivity and installed the Ansible Automation Platform Operator, you must create a custom resource controller object before you can migrate your data. 17.3.1. Creating an AutomationController object Use the following steps to create an AutomationController custom resource object. Procedure Log in to Red Hat OpenShift Container Platform . 
Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the Automation Controller tab. Click Create AutomationController . You can create the object through the Form view or YAML view . The following inputs are available through the Form view . Enter a name for the new deployment. In Advanced configurations : From the Secret Key list, select your secret key secret . From the Old Database Configuration Secret list, select the old postgres configuration secret . Click Create . 17.3.2. Creating an AutomationHub object Use the following steps to create an AutomationHub custom resource object. Procedure Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the Automation Hub tab. Click Create AutomationHub . Enter a name for the new deployment. In Advanced configurations , select your secret key secret and postgres configuration secret . Click Create . 17.4. Post migration cleanup After data migration, delete unnecessary instance groups and unlink the old database configuration secret from the automation controller resource definition. 17.4.1. Deleting Instance Groups post migration Procedure Log in to Red Hat Ansible Automation Platform as the administrator with the password you created during migration. Note Note: If you did not create an administrator password during migration, one was automatically created for you. To locate this password, go to your project, select Workloads Secrets and open controller-admin-password. From there you can copy the password and paste it into the Red Hat Ansible Automation Platform password field. Select Administration Instance Groups . Select all Instance Groups except controlplane and default. Click Delete . 17.4.2. Unlinking the old database configuration secret post migration Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the Automation Controller tab. Click your AutomationController object. You can then view the object through the Form view or YAML view . The following inputs are available through the YAML view . Locate the old_postgres_configuration_secret item within the spec section of the YAML contents. Delete the line that contains this item. Click Save .
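If you prefer the CLI to the YAML view for this last step, the same field can be removed with a JSON patch; the resource name and namespace are placeholders, and you should verify the field path against your AutomationController object before applying it.
$ oc patch automationcontroller <name> -n <target-namespace> --type=json \
    -p '[{"op": "remove", "path": "/spec/old_postgres_configuration_secret"}]'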
[ "./setup.sh -b", "./setup_openshift.sh -b", "--- apiVersion: v1 kind: Secret metadata: name: <controller-resourcename>-secret-key namespace: <target-namespace> stringData: secret_key: <replaceme-with-controller-secret> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: <eda-resourcename>-secret-key namespace: <target-namespace> stringData: secret_key: <replaceme-with-eda-secret> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: <hub-resourcename>-secret-key namespace: <target-namespace> stringData: database_fields.symmetric.key: <replace-me-withdb-fields-encryption-key> type: Opaque", "oc apply -f <yaml-file>", "apiVersion: v1 kind: Secret metadata: name: <resourcename>-old-postgres-configuration namespace: <target namespace> stringData: host: \"<external ip or url resolvable by the cluster>\" port: \"<external port, this usually defaults to 5432>\" database: \"<desired database name>\" username: \"<username to connect as>\" password: \"<password to connect with>\" type: Opaque", "oc apply -f <old-postgres-configuration.yml>", "apiVersion: v1 kind: Pod metadata: name: dbchecker spec: containers: - name: dbchecker image: registry.redhat.io/rhel8/postgresql-13:latest command: [\"sleep\"] args: [\"600\"]", "oc project ansible-automation-platform", "oc apply -f connection_checker.yaml", "oc get pods", "oc rsh dbchecker", "pg_isready -h <old-host-address> -p <old-port-number> -U awx", "<old-host-address>:<old-port-number> - accepting connections" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/aap-migration
Appendix B. Fencing Policies for Red Hat Gluster Storage
Appendix B. Fencing Policies for Red Hat Gluster Storage The following fencing policies are required for Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for Virtualization) deployments. They ensure that hosts are not shut down in situations where brick processes are still running, or when shutting down the host would remove the volume's ability to reach a quorum. These policies can be set in the New Cluster or Edit Cluster window in the Administration Portal when Red Hat Gluster Storage functionality is enabled. Skip fencing if gluster bricks are up Fencing is skipped if bricks are running and can be reached from other peers. Skip fencing if gluster quorum not met Fencing is skipped if bricks are running and shutting down the host would cause loss of quorum. These policies are checked after all other fencing policies when determining whether a node is fenced. Additional fencing policies may be useful for your deployment. For further details about fencing, see the Red Hat Virtualization Technical Reference : https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/technical_reference/fencing .
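If you manage clusters with Ansible rather than through the Administration Portal, the same two policies can typically be set with the ovirt.ovirt.ovirt_cluster module. The following playbook is only a hedged sketch: the fence_skip_if_gluster_bricks_up and fence_skip_if_gluster_quorum_not_met parameter names, the engine URL, the data center, and the cluster name are assumptions, so confirm the exact parameter names with ansible-doc ovirt.ovirt.ovirt_cluster for your collection version before using it.
- name: Enable Gluster-aware fencing policies (sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Obtain an engine SSO token   # URL and credentials are placeholders
      ovirt.ovirt.ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Update the cluster fencing policy
      ovirt.ovirt.ovirt_cluster:
        auth: "{{ ovirt_auth }}"
        name: Default                                # cluster name placeholder
        data_center: Default                         # data center placeholder
        fence_enabled: true
        fence_skip_if_gluster_bricks_up: true        # assumed parameter name
        fence_skip_if_gluster_quorum_not_met: true   # assumed parameter name

    - name: Revoke the SSO token
      ovirt.ovirt.ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"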
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/ref-rhgs-fencing-policies
Chapter 24. Scheduler [config.openshift.io/v1]
Chapter 24. Scheduler [config.openshift.io/v1] Description Scheduler holds cluster-wide config information to run the Kubernetes Scheduler and influence its placement decisions. The canonical name for this config is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 24.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 24.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description defaultNodeSelector string defaultNodeSelector helps set the cluster-wide default node selector to restrict pod placement to specific nodes. This is applied to the pods created in all namespaces and creates an intersection with any existing nodeSelectors already set on a pod, additionally constraining that pod's selector. For example, defaultNodeSelector: "type=user-node,region=east" would set nodeSelector field in pod spec to "type=user-node,region=east" to all pods created in all namespaces. Namespaces having project-wide node selectors won't be impacted even if this field is set. This adds an annotation section to the namespace. For example, if a new namespace is created with node-selector='type=user-node,region=east', the annotation openshift.io/node-selector: type=user-node,region=east gets added to the project. When the openshift.io/node-selector annotation is set on the project the value is used in preference to the value we are setting for defaultNodeSelector field. For instance, openshift.io/node-selector: "type=user-node,region=west" means that the default of "type=user-node,region=east" set in defaultNodeSelector would not be applied. mastersSchedulable boolean MastersSchedulable allows masters nodes to be schedulable. When this flag is turned on, all the master nodes in the cluster will be made schedulable, so that workload pods can run on them. The default value for this field is false, meaning none of the master nodes are schedulable. Important Note: Once the workload pods start running on the master nodes, extreme care must be taken to ensure that cluster-critical control plane components are not impacted. Please turn on this field after doing due diligence. policy object DEPRECATED: the scheduler Policy API has been deprecated and will be removed in a future release. policy is a reference to a ConfigMap containing scheduler policy which has user specified predicates and priorities. If this ConfigMap is not available scheduler will default to use DefaultAlgorithmProvider. The namespace for this configmap is openshift-config. 
profile string profile sets which scheduling profile should be set in order to configure scheduling decisions for new pods. Valid values are "LowNodeUtilization", "HighNodeUtilization", "NoScoring" Defaults to "LowNodeUtilization" 24.1.2. .spec.policy Description DEPRECATED: the scheduler Policy API has been deprecated and will be removed in a future release. policy is a reference to a ConfigMap containing scheduler policy which has user specified predicates and priorities. If this ConfigMap is not available scheduler will default to use DefaultAlgorithmProvider. The namespace for this configmap is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 24.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object 24.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/schedulers DELETE : delete collection of Scheduler GET : list objects of kind Scheduler POST : create a Scheduler /apis/config.openshift.io/v1/schedulers/{name} DELETE : delete a Scheduler GET : read the specified Scheduler PATCH : partially update the specified Scheduler PUT : replace the specified Scheduler /apis/config.openshift.io/v1/schedulers/{name}/status GET : read status of the specified Scheduler PATCH : partially update status of the specified Scheduler PUT : replace status of the specified Scheduler 24.2.1. /apis/config.openshift.io/v1/schedulers Table 24.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Scheduler Table 24.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 24.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Scheduler Table 24.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 24.5. HTTP responses HTTP code Reponse body 200 - OK SchedulerList schema 401 - Unauthorized Empty HTTP method POST Description create a Scheduler Table 24.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.7. Body parameters Parameter Type Description body Scheduler schema Table 24.8. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 202 - Accepted Scheduler schema 401 - Unauthorized Empty 24.2.2. /apis/config.openshift.io/v1/schedulers/{name} Table 24.9. Global path parameters Parameter Type Description name string name of the Scheduler Table 24.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Scheduler Table 24.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 24.12. Body parameters Parameter Type Description body DeleteOptions schema Table 24.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Scheduler Table 24.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 24.15. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Scheduler Table 24.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 24.17. Body parameters Parameter Type Description body Patch schema Table 24.18. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Scheduler Table 24.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.20. 
Body parameters Parameter Type Description body Scheduler schema Table 24.21. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 401 - Unauthorized Empty 24.2.3. /apis/config.openshift.io/v1/schedulers/{name}/status Table 24.22. Global path parameters Parameter Type Description name string name of the Scheduler Table 24.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Scheduler Table 24.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 24.25. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Scheduler Table 24.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 24.27. Body parameters Parameter Type Description body Patch schema Table 24.28. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Scheduler Table 24.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.30. Body parameters Parameter Type Description body Scheduler schema Table 24.31. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 401 - Unauthorized Empty
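To make the reference concrete, the cluster-wide Scheduler object is a singleton named cluster, and the spec fields described above can be set declaratively. The following is a minimal sketch; the node selector, profile, and mastersSchedulable values are illustrative placeholders, not recommendations.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster                  # the canonical cluster-wide singleton
spec:
  # Intersected with any nodeSelector already set on a pod; project-wide node selectors take precedence.
  defaultNodeSelector: type=user-node,region=east
  # Keep control plane nodes unschedulable for regular workloads (the default behavior).
  mastersSchedulable: false
  # One of LowNodeUtilization (default), HighNodeUtilization, NoScoring.
  profile: HighNodeUtilization
Because this object already exists in every cluster, you usually modify it in place rather than create it, for example with oc edit scheduler cluster or oc patch scheduler cluster --type merge -p '{"spec":{"mastersSchedulable":true}}'.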
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/scheduler-config-openshift-io-v1
Chapter 11. Cloning virtual machines
Chapter 11. Cloning virtual machines To quickly create a new virtual machine (VM) with a specific set of properties, you can clone an existing VM. Cloning creates a new VM that uses its own disk image for storage, but most of the clone's configuration and stored data is identical to the source VM. This makes it possible to prepare multiple VMs optimized for a certain task without the need to optimize each VM individually. 11.1. How cloning virtual machines works Cloning a virtual machine (VM) copies the XML configuration of the source VM and its disk images, and makes adjustments to the configurations to ensure the uniqueness of the new VM. This includes changing the name of the VM and ensuring it uses the disk image clones. Nevertheless, the data stored on the clone's virtual disks is identical to the source VM. This process is faster than creating a new VM and installing it with a guest operating system, and can be used to rapidly generate VMs with a specific configuration and content. If you are planning to create multiple clones of a VM, first create a VM template that does not contain: Unique settings, such as persistent network MAC configuration, which can prevent the clones from working correctly. Sensitive data, such as SSH keys and password files. For instructions, see Creating virtual machines templates . Additional resources Cloning a virtual machine by using the command line Cloning a virtual machine by using the web console 11.2. Creating virtual machine templates To create multiple virtual machine (VM) clones that work correctly, you can remove information and configurations that are unique to a source VM, such as SSH keys or persistent network MAC configuration. This creates a VM template , which you can use to easily and safely create VM clones. You can create VM templates by using the virt-sysprep utility or you can create them manually based on your requirements. 11.2.1. Creating a virtual machine template by using virt-sysprep To create a cloning template from an existing virtual machine (VM), you can use the virt-sysprep utility. This removes certain configurations that might cause the clone to work incorrectly, such as specific network settings or system registration metadata. As a result, virt-sysprep makes creating clones of the VM more efficient, and ensures that the clones work more reliably. Prerequisites The guestfs-tools package, which contains the virt-sysprep utility, is installed on your host: The source VM intended as a template is shut down. You know where the disk image for the source VM is located, and you are the owner of the VM's disk image file. Note that disk images for VMs created in the system connection of libvirt are located in the /var/lib/libvirt/images directory and owned by the root user by default: Optional: Any important data on the source VM's disk has been backed up. If you want to preserve the source VM intact, clone it first and turn the clone into a template. Procedure Ensure you are logged in as the owner of the VM's disk image: Optional: Copy the disk image of the VM. This is used later to verify that the VM was successfully turned into a template. Use the following command, and replace /var/lib/libvirt/images/a-really-important-vm.qcow2 with the path to the disk image of the source VM. Verification To confirm that the process was successful, compare the modified disk image to the original one. 
The following example shows a successful creation of a template: Additional resources The OPERATIONS section in the virt-sysprep man page on your system Cloning a virtual machine by using the command line 11.2.2. Creating a virtual machine template manually To create a template from an existing virtual machine (VM), you can manually reset or unconfigure a guest VM to prepare it for cloning. Prerequisites Ensure that you know the location of the disk image for the source VM and are the owner of the VM's disk image file. Note that disk images for VMs created in the system connection of libvirt are by default located in the /var/lib/libvirt/images directory and owned by the root user: Ensure that the VM is shut down. Optional: Any important data on the VM's disk has been backed up. If you want to preserve the source VM intact, clone it first and edit the clone to create a template. Procedure Configure the VM for cloning: Install any software needed on the clone. Configure any non-unique settings for the operating system. Configure any non-unique application settings. Remove the network configuration: Remove any persistent udev rules by using the following command: Note If udev rules are not removed, the name of the first NIC might be eth1 instead of eth0 . Remove unique information from the NMConnection files in the /etc/NetworkManager/system-connections/ directory. Remove MAC address, IP address, DNS, gateway, and any other unique information or non-desired settings. Remove similar unique information and non-desired settings from the /etc/hosts and /etc/resolv.conf files. Remove registration details: For VMs registered on the Red Hat Network (RHN): For VMs registered with Red Hat Subscription Manager (RHSM): If you do not plan to use the original VM: If you plan to use the original VM: Note The original RHSM profile remains in the Portal along with your ID code. Use the following command to reactivate your RHSM registration on the VM after it is cloned: Remove other unique details: Remove SSH public and private key pairs: Remove the configuration of LVM devices: Remove any other application-specific identifiers or configurations that might cause conflicts if running on multiple machines. Remove the gnome-initial-setup-done file to configure the VM to run the configuration wizard on the next boot: Note The wizard that runs on the next boot depends on the configurations that have been removed from the VM. In addition, on the first boot of the clone, it is recommended that you change the hostname. 11.3. Cloning a virtual machine by using the command line For testing, to create a new virtual machine (VM) with a specific set of properties, you can clone an existing VM by using the CLI. Prerequisites The source VM is shut down. Ensure that there is sufficient disk space to store the cloned disk images. Optional: When creating multiple VM clones, remove unique data and settings from the source VM to ensure the cloned VMs work properly. For instructions, see Creating virtual machine templates . Procedure Use the virt-clone utility with options that are appropriate for your environment and use case. Sample use cases The following command clones a local VM named example-VM-1 and creates the example-VM-1-clone VM.
It also creates and allocates the example-VM-1-clone.qcow2 disk image in the same location as the disk image of the original VM, and with the same data: The following command clones a VM named example-VM-2 , and creates a local VM named example-VM-3 , which uses only two out of multiple disks of example-VM-2 : To clone your VM to a different host, migrate the VM without undefining it on the local host. For example, the following commands clone the previously created example-VM-3 VM to the 192.0.2.1 remote system, including its local disks. Note that you require root privileges to run these commands for 192.0.2.1 : Verification To verify the VM has been successfully cloned and is working correctly: Confirm the clone has been added to the list of VMs on your host: Start the clone and observe if it boots up: Additional resources virt-clone (1) man page on your system Migrating virtual machines 11.4. Cloning a virtual machine by using the web console To create new virtual machines (VMs) with a specific set of properties, you can clone a VM that you had previously configured by using the web console. Note Cloning a VM also clones the disks associated with that VM. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Ensure that the VM you want to clone is shut down. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the Virtual Machines interface of the web console, click the Menu button ... of the VM that you want to clone. A drop down menu appears with controls for various VM operations. Click Clone . The Create a clone VM dialog appears. Optional: Enter a new name for the VM clone. Click Clone . A new VM is created based on the source VM. Verification Confirm whether the cloned VM appears in the list of VMs available on your host.
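To tie the command-line steps together, the following shell sketch creates several clones from a single prepared source VM and starts each one. The VM name and clone count are placeholders for illustration; run it with privileges that can reach the VM's disk images, typically root for the system libvirt connection.
#!/usr/bin/env bash
# Sketch: create and start N clones of a prepared, shut-off source VM.
set -euo pipefail

SRC_VM="example-VM-1"   # source VM prepared as a template (placeholder name)
CLONES=3                # hypothetical number of clones to create

# Refuse to continue if the source VM is still running.
if [ "$(virsh domstate "${SRC_VM}")" != "shut off" ]; then
    echo "Shut down ${SRC_VM} before cloning." >&2
    exit 1
fi

for i in $(seq 1 "${CLONES}"); do
    CLONE="${SRC_VM}-clone-${i}"
    # --auto-clone derives the clone's disk image names and places them
    # alongside the source VM's disk images.
    virt-clone --original "${SRC_VM}" --name "${CLONE}" --auto-clone
    virsh start "${CLONE}"
done

virsh list --all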
[ "dnf install guestfs-tools", "ls -la /var/lib/libvirt/images -rw-------. 1 root root 9665380352 Jul 23 14:50 a-really-important-vm.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 an-actual-vm-that-i-use.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 totally-not-a-fake-vm.qcow2 -rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2", "whoami root", "cp /var/lib/libvirt/images/a-really-important-vm.qcow2 /var/lib/libvirt/images/a-really-important-vm-original.qcow2", "virt-sysprep -a /var/lib/libvirt/images/a-really-important-vm.qcow2 [ 0.0] Examining the guest [ 7.3] Performing \"abrt-data\" [ 7.3] Performing \"backup-files\" [ 9.6] Performing \"bash-history\" [ 9.6] Performing \"blkid-tab\" [...]", "virt-diff -a /var/lib/libvirt/images/a-really-important-vm-orig.qcow2 -A /var/lib/libvirt/images/a-really-important-vm.qcow2 - - 0644 1001 /etc/group- - - 0000 797 /etc/gshadow- = - 0444 33 /etc/machine-id [...] - - 0600 409 /home/username/.bash_history - d 0700 6 /home/username/.ssh - - 0600 868 /root/.bash_history [...]", "ls -la /var/lib/libvirt/images -rw-------. 1 root root 9665380352 Jul 23 14:50 a-really-important-vm.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 an-actual-vm-that-i-use.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 totally-not-a-fake-vm.qcow2 -rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2", "rm -f /etc/udev/rules.d/70-persistent-net.rules", "*ID=ExampleNetwork BOOTPROTO=\"dhcp\" HWADDR=\"AA:BB:CC:DD:EE:FF\" <- REMOVE NM_CONTROLLED=\"yes\" ONBOOT=\"yes\" TYPE=\"Ethernet\" UUID=\"954bd22c-f96c-4b59-9445-b39dd86ac8ab\" <- REMOVE", "rm /etc/sysconfig/rhn/systemid", "subscription-manager unsubscribe --all # subscription-manager unregister # subscription-manager clean", "subscription-manager clean", "subscription-manager register --consumerid=71rd64fx-6216-4409-bf3a-e4b7c7bd8ac9", "rm -rf /etc/ssh/ssh_host_example", "rm /etc/lvm/devices/system.devices", "rm ~/.config/gnome-initial-setup-done", "virt-clone --original example-VM-1 --auto-clone Allocating 'example-VM-1-clone.qcow2' | 50.0 GB 00:05:37 Clone 'example-VM-1-clone' created successfully.", "virt-clone --original example-VM-2 --name example-VM-3 --file /var/lib/libvirt/images/ disk-1-example-VM-2 .qcow2 --file /var/lib/libvirt/images/ disk-2-example-VM-2 .qcow2 Allocating 'disk-1-example-VM-2-clone.qcow2' | 78.0 GB 00:05:37 Allocating 'disk-2-example-VM-2-clone.qcow2' | 80.0 GB 00:05:37 Clone 'example-VM-3' created successfully.", "virsh migrate --offline --persistent example-VM-3 qemu+ssh://[email protected]/system [email protected]'s password: scp /var/lib/libvirt/images/ <disk-1-example-VM-2-clone> .qcow2 [email protected]/ <user@remote_host.com> ://var/lib/libvirt/images/ scp /var/lib/libvirt/images/ <disk-2-example-VM-2-clone> .qcow2 [email protected]/ <user@remote_host.com> ://var/lib/libvirt/images/", "virsh list --all Id Name State --------------------------------------- - example-VM-1 shut off - example-VM-1-clone shut off", "virsh start example-VM-1-clone Domain 'example-VM-1-clone' started" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/cloning-virtual-machines_configuring-and-managing-virtualization
Network Observability
Network Observability OpenShift Container Platform 4.17 Configuring and using the Network Observability Operator in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-from-hostnetwork namespace: netobserv spec: podSelector: matchLabels: app: netobserv-operator ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: '' policyTypes: - Ingress", "apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv 1 stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv 1 spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 3 tenants: mode: openshift-network", "oc adm groups new cluster-admin", "oc adm groups add-users cluster-admin <username>", "oc adm policy add-cluster-role-to-group cluster-admin cluster-admin", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: tenants: mode: openshift-network 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3", "spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000", "oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>", "oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace>", "oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>", "oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name>", "oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name>", "oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'", "apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: migrate-flowcollector-v1alpha1 spec: resource: group: flows.netobserv.io resource: flowcollectors version: v1alpha1", "oc apply -f migrate-flowcollector-v1alpha1.yaml", "oc edit crd flowcollectors.flows.netobserv.io", "oc get flowcollector cluster -o yaml > flowcollector-1.5.yaml", "oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'", "oc get flowcollector/cluster", "NAME AGENT SAMPLING (EBPF) DEPLOYMENT MODEL STATUS cluster EBPF 50 DIRECT Ready", "oc get pods -n netobserv", "NAME READY STATUS RESTARTS AGE flowlogs-pipeline-56hbp 1/1 Running 0 147m flowlogs-pipeline-9plvv 1/1 Running 0 147m flowlogs-pipeline-h5gkb 1/1 Running 0 147m flowlogs-pipeline-hh6kf 1/1 Running 0 147m flowlogs-pipeline-w7vv5 1/1 Running 0 147m netobserv-plugin-cdd7dc6c-j8ggp 1/1 Running 0 147m", "oc get pods -n netobserv-privileged", "NAME READY STATUS RESTARTS AGE netobserv-ebpf-agent-4lpp6 1/1 Running 0 151m netobserv-ebpf-agent-6gbrk 1/1 Running 0 151m netobserv-ebpf-agent-klpl9 1/1 Running 0 151m netobserv-ebpf-agent-vrcnf 1/1 Running 0 151m netobserv-ebpf-agent-xf5jh 1/1 Running 0 151m", "oc get pods -n openshift-operators-redhat", "NAME READY STATUS RESTARTS AGE loki-operator-controller-manager-5f6cff4f9d-jq25h 2/2 Running 0 18h lokistack-compactor-0 1/1 Running 0 18h lokistack-distributor-654f87c5bc-qhkhv 1/1 Running 0 18h lokistack-distributor-654f87c5bc-skxgm 1/1 Running 0 18h lokistack-gateway-796dc6ff7-c54gz 2/2 Running 0 18h lokistack-index-gateway-0 1/1 Running 0 18h 
lokistack-index-gateway-1 1/1 Running 0 18h lokistack-ingester-0 1/1 Running 0 18h lokistack-ingester-1 1/1 Running 0 18h lokistack-ingester-2 1/1 Running 0 18h lokistack-querier-66747dc666-6vh5x 1/1 Running 0 18h lokistack-querier-66747dc666-cjr45 1/1 Running 0 18h lokistack-querier-66747dc666-xh8rq 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-b2xfb 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-jm94f 1/1 Running 0 18h", "oc describe flowcollector/cluster", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: 3 logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi logTypes: Flows advanced: conversationEndTimeout: 10s conversationHeartbeatInterval: 30s loki: 4 mode: LokiStack 5 consolePlugin: register: true logLevel: info portNaming: enable: true portNames: \"3100\": loki quickFilters: 6 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service'", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: deploymentModel: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" 2 topic: network-flows 3 tls: enable: false 4", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: exporters: - type: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" topic: netobserv-flows-export 2 tls: enable: false 3 - type: IPFIX 4 ipfix: targetHost: \"ipfix-collector.ipfix.svc.cluster.local\" targetPort: 4739 transport: tcp or udp 5 - type: OpenTelemetry 6 openTelemetry: targetHost: my-otelcol-collector-headless.otlp.svc targetPort: 4317 type: grpc 7 logs: 8 enable: true metrics: 9 enable: true prefix: netobserv pushTimeInterval: 20s 10 expiryTime: 2m # fieldsMapping: 11 # input: SrcAddr # output: source.address", "oc patch flowcollector cluster --type=json -p \"[{\"op\": \"replace\", \"path\": \"/spec/agent/ebpf/sampling\", \"value\": <new value>}] -n netobserv\"", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv networkPolicy: enable: true 1 additionalNamespaces: [\"openshift-console\", \"openshift-monitoring\"] 2", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy spec: ingress: - from: - podSelector: {} - namespaceSelector: matchLabels: kubernetes.io/metadata.name: netobserv-privileged - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console ports: - port: 9001 protocol: TCP - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: netobserv namespace: netobserv-privileged spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 
3", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketDrop 1 privileged: true 2", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: addZone: true", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: type: eBPF ebpf: # sampling: 1 1 privileged: true 2 features: - \"NetworkEvents\"", "<Dropped_or_Allowed> by <network_event_and_event_name>, direction <Ingress_or_Egress>", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-alerts namespace: openshift-monitoring spec: groups: - name: NetObservAlerts rules: - alert: NetObservIncomingBandwidth annotations: message: |- {{ USDlabels.job }}: incoming traffic exceeding 10 MBps for 30s on {{ USDlabels.DstK8S_OwnerType }} {{ USDlabels.DstK8S_OwnerName }} ({{ USDlabels.DstK8S_Namespace }}). 
summary: \"High incoming traffic.\" expr: sum(rate(netobserv_workload_ingress_bytes_total {SrcK8S_Namespace=\"openshift-ingress\"}[1m])) by (job, DstK8S_Namespace, DstK8S_OwnerName, DstK8S_OwnerType) > 10000000 1 for: 30s labels: severity: warning", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 spec: metricName: cluster_external_ingress_bytes_total 2 type: Counter 3 valueField: Bytes direction: Ingress 4 labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] 5 filters: 6 - field: SrcSubnetLabel matchType: Absence", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-rtt namespace: netobserv 1 spec: metricName: cluster_external_ingress_rtt_seconds type: Histogram 2 valueField: TimeFlowRttNs direction: Ingress labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] filters: - field: SrcSubnetLabel matchType: Absence - field: TimeFlowRttNs matchType: Presence divider: \"1000000000\" 3 buckets: [\".001\", \".005\", \".01\", \".02\", \".03\", \".04\", \".05\", \".075\", \".1\", \".25\", \"1\"] 4", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: network-policy-events namespace: netobserv spec: metricName: network_policy_events_total type: Counter labels: [NetworkEvents>Type, NetworkEvents>Namespace, NetworkEvents>Name, NetworkEvents>Action, NetworkEvents>Direction] 1 filters: - field: NetworkEvents>Feature value: acl flatten: [NetworkEvents] 2 remap: 3 \"NetworkEvents>Type\": type \"NetworkEvents>Namespace\": namespace \"NetworkEvents>Name\": name \"NetworkEvents>Direction\": direction", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress traffic unit: Bps type: SingleStat queries: - promQL: \"sum(rate(USDMETRIC[2m]))\" legend: \"\" - dashboardName: Main 3 sectionName: External title: Top external ingress traffic per workload unit: Bps type: StackArea queries: - promQL: \"sum(rate(USDMETRIC{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace, DstK8S_OwnerName)\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress TCP latency unit: seconds type: SingleStat queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket[2m])) by (le)) > 0\" legend: \"p99\" - dashboardName: Main 3 sectionName: External title: \"Top external ingress sRTT per workload, p50 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.5, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\" - dashboardName: Main 4 sectionName: External title: \"Top external ingress sRTT per workload, p99 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"", "promQL: \"(sum(rate(USDMETRIC_sum{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName) / sum(rate(USDMETRIC_count{DstK8S_Namespace!=\\\"\\\"}[2m])) by 
(DstK8S_Namespace,DstK8S_OwnerName))*1000\"", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-destination spec: metricName: flows_with_flags_per_destination_total type: Counter labels: [SrcSubnetLabel,DstSubnetLabel,DstK8S_Name,DstK8S_Type,DstK8S_HostName,DstK8S_Namespace,Flags]", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-source spec: metricName: flows_with_flags_per_source_total type: Counter labels: [DstSubnetLabel,SrcSubnetLabel,SrcK8S_Name,SrcK8S_Type,SrcK8S_HostName,SrcK8S_Namespace,Flags]", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-syn-alerts namespace: openshift-monitoring spec: groups: - name: NetObservSYNAlerts rules: - alert: NetObserv-SYNFlood-in annotations: message: |- {{ USDlabels.job }}: incoming SYN-flood attack suspected to Host={{ USDlabels.DstK8S_HostName}}, Namespace={{ USDlabels.DstK8S_Namespace }}, Resource={{ USDlabels.DstK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Incoming SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags=\"2\"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300 1 for: 15s labels: severity: warning app: netobserv - alert: NetObserv-SYNFlood-out annotations: message: |- {{ USDlabels.job }}: outgoing SYN-flood attack suspected from Host={{ USDlabels.SrcK8S_HostName}}, Namespace={{ USDlabels.SrcK8S_Namespace }}, Resource={{ USDlabels.SrcK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Outgoing SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags=\"2\"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300 2 for: 15s labels: severity: warning app: netobserv", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: loki-alerts namespace: openshift-monitoring spec: groups: - name: LokiRateLimitAlerts rules: - alert: LokiTenantRateLimit annotations: message: |- {{ USDlabels.job }} {{ USDlabels.route }} is experiencing 429 errors. 
summary: \"At any number of requests are responded with the rate limit error code.\" expr: sum(irate(loki_request_duration_seconds_count{status_code=\"429\"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0 for: 10s labels: severity: warning", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: cacheMaxFlows: 200000 1", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: advanced: scheduling: tolerations: - key: \"<taint key>\" operator: \"Equal\" value: \"<taint value>\" effect: \"<taint effect>\" nodeSelector: <key>: <value> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: name operator: In values: - app-worker-node priorityClassName: \"\"\"", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: privileged: true 1", "oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.39\" ], \"mac\": \"0a:58:0a:81:02:27\", \"default\": true, \"dns\": {} }, { \"name\": \"my-vms/l2-network\", 1 \"interface\": \"podc0f69e19ba2\", 2 \"ips\": [ 3 \"10.10.10.15\" ], \"mac\": \"02:fb:f8:00:00:12\", 4 \"dns\": {} }] name: virt-launcher-fedora-aqua-fowl-13-zr2x9 namespace: my-vms spec: status:", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: ebpf: privileged: true 1 processor: advanced: secondaryNetworks: - index: 2 - MAC 3 name: my-vms/l2-network 4", "curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64", "chmod +x ./oc-netobserv-amd64", "sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv", "oc netobserv version", "Netobserv CLI version <version>", "oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once", "{ \"AgentIP\": \"10.0.1.76\", \"Bytes\": 561, \"DnsErrno\": 0, \"Dscp\": 20, \"DstAddr\": \"f904:ece9:ba63:6ac7:8018:1e5:7130:0\", \"DstMac\": \"0A:58:0A:80:00:37\", \"DstPort\": 9999, \"Duplicate\": false, \"Etype\": 2048, \"Flags\": 16, \"FlowDirection\": 0, \"IfDirection\": 0, \"Interface\": \"ens5\", \"K8S_FlowLayer\": \"infra\", \"Packets\": 1, \"Proto\": 6, \"SrcAddr\": \"3e06:6c10:6440:2:a80:37:b756:270f\", \"SrcMac\": \"0A:58:0A:80:00:01\", \"SrcPort\": 46934, \"TimeFlowEndMs\": 1709741962111, \"TimeFlowRttNs\": 121000, \"TimeFlowStartMs\": 1709741962111, \"TimeReceived\": 1709741964 }", "sqlite3 ./output/flow/<capture_date_time>.db", "sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10;", "12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 
13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1", "oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once", "oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli", "oc netobserv cleanup", "oc netobserv [<command>] [<feature_option>] [<command_options>] 1", "oc netobserv flows [<feature_option>] [<command_options>]", "oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "oc netobserv packets [<option>]", "oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "oc netobserv metrics [<option>]", "oc netobserv metrics --enable_pkt_drop --protocol=TCP", "oc adm must-gather --image-stream=openshift/must-gather --image=quay.io/netobserv/must-gather", "oc -n netobserv get flowcollector cluster -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: false", "oc edit console.operator.openshift.io cluster", "spec: plugins: - netobserv-plugin", "oc -n netobserv edit flowcollector cluster -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: true", "oc get pods -n openshift-console -l app=console", "oc delete pods -n openshift-console -l app=console", "oc get pods -n netobserv -l app=netobserv-plugin", "NAME READY STATUS RESTARTS AGE netobserv-plugin-68c7bbb9bb-b69q6 1/1 Running 0 21s", "oc logs -n netobserv -l app=netobserv-plugin", "time=\"2022-12-13T12:06:49Z\" level=info msg=\"Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info\" module=main time=\"2022-12-13T12:06:49Z\" level=info msg=\"listening on https://:9001\" module=server", "oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer", "oc edit -n netobserv flowcollector.yaml -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: agent: type: EBPF ebpf: interfaces: [ 'br-int', 'br-ex' ] 1", "oc edit subscription netobserv-operator -n openshift-netobserv-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: netobserv-operator namespace: openshift-netobserv-operator spec: channel: stable config: resources: limits: memory: 800Mi 1 requests: cpu: 100m memory: 100Mi installPlanApproval: Automatic name: netobserv-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: <network_observability_operator_latest_version> 2", "oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/labels | jq", "oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query --data-urlencode 'query={SrcK8S_Namespace=\"my-namespace\"}' | jq", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: 
netobserv spec: limits: global: ingestion: perStreamRateLimit: 6 1 perStreamRateLimitBurst: 30 2 tenants: mode: openshift-network managementState: Managed" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/network_observability/index
Chapter 14. Upgrading Red Hat Ceph Storage 6 to 7
Chapter 14. Upgrading Red Hat Ceph Storage 6 to 7 You can upgrade the Red Hat Ceph Storage cluster from Release 6 to 7 after all other upgrade tasks are completed. Prerequisites The upgrade from Red Hat OpenStack Platform 16.2 to 17.1 is complete. All Controller nodes are upgraded to Red Hat Enterprise Linux 9. In HCI environments, all Compute nodes must also be upgraded to RHEL 9. The current Red Hat Ceph Storage 6 cluster is healthy. 14.1. Director-deployed Red Hat Ceph Storage environments Perform the following tasks if Red Hat Ceph Storage is director-deployed in your environment. 14.1.1. Updating the cephadm client Before you upgrade the Red Hat Ceph Storage cluster, you must update the cephadm package in the overcloud nodes to Release 7. Prerequisites Confirm that the health status of the Red Hat Ceph Storage cluster is HEALTH_OK . Log in to a Controller node and use the command sudo cephadm shell -- ceph -s to confirm the cluster health. If the status is not HEALTH_OK , correct any issues before continuing with this procedure. Procedure Create a playbook to enable the Red Hat Ceph Storage (tool only) repositories in the Controller nodes. It should contain the following information: Run the playbook: ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml <playbook_file_name> --limit <controller_role> Replace <stack> with the name of your stack. Replace <playbook_file_name> with the name of the playbook created in the previous step. Replace <controller_role> with the role applied to Controller nodes. Use the --limit option to apply the content to Controller nodes only. Log in to a Controller node. Verify that the cephadm package is updated to Release 7: USD sudo dnf info cephadm | grep -i version 14.1.2. Updating the Red Hat Ceph Storage container image The container-image-prepare.yaml file contains the ContainerImagePrepare parameter and defines the Red Hat Ceph Storage containers. This file is used by the tripleo-container-image prepare command to define the rules for obtaining container images for the undercloud and overcloud. Update this file with the correct image version before updating your environment. Procedure Locate your container preparation file. The default name of this file is containers-prepare-parameter.yaml . Edit the container preparation file. Locate the ceph_tag parameter. The current entry should be similar to the following example: Update the ceph_tag parameter for Red Hat Ceph Storage 7: Edit the containers-image-prepare.yaml file and replace the Red Hat Ceph monitoring stack container related parameters with the following content: Save the file. 14.1.3. Running the container image prepare Complete the container image preparation process by running the director container preparation command. This prepares all container image configurations for the overcloud and retrieves the latest Red Hat Ceph Storage 7 container image. Note If you are using Red Hat Satellite Server to host RPMs and container images for your Red Hat OpenStack Platform (RHOSP) environment, do not perform this procedure. Update Satellite to include the Red Hat Ceph Storage 7 container image and update your containers-prepare-parameter.yaml file to reference the URL of the container image that is hosted on the Satellite server. Procedure Log in to the undercloud host as the stack user.
Source the stackrc undercloud credentials file: Run the container preparation command: USD openstack tripleo container image prepare -e <container_preparation_file> Replace <container_preparation_file> with the name of your file. The default file is containers-prepare-parameter.yaml . Verify that the new Red Hat Ceph Storage image is present in the undercloud registry: USD openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print USD2}' If you have the Red Hat Ceph Storage Dashboard enabled, verify the new Red Hat monitoring stack images are present in the undercloud registry: USD openstack tripleo container image list -f value | awk -F '//' '/grafana|prometheus|alertmanager|node-exporter/ {print USD2}' 14.1.4. Configuring Ceph Manager with Red Hat Ceph Storage 7 monitoring stack images Procedure Log in to a Controller node. List the current images from the Ceph Manager configuration: The following is an example of the command output: Update the Ceph Manager configuration for the monitoring stack services to use Red Hat Ceph Storage 7 images: Replace <alertmanager_image> with the new alertmanager image. Replace <grafana_image> with the new grafana image. Replace <node_exporter_image> with the new node exporter image. Replace <prometheus_image> with the new prometheus image. The following is an example of the alert manager update command: Verify that the new image references are updated in the Red Hat Ceph Storage cluster: 14.1.5. Upgrading to Red Hat Ceph Storage 7 with Orchestrator Upgrade to Red Hat Ceph Storage 7 by using the Orchestrator capabilities of the cephadm command. Prerequisites On a Monitor or Controller node that is running the ceph-mon service, confirm the Red Hat Ceph Storage cluster status by using the sudo cephadm shell -- ceph status command. This command returns one of three responses: HEALTH_OK - The cluster is healthy. Proceed with the cluster upgrade. HEALTH_WARN - The cluster is unhealthy. Do not proceed with the cluster upgrade before the blocking issues are resolved. For troubleshooting guidance, see Red Hat Ceph Storage 6 Troubleshooting Guide . HEALTH_ERR - The cluster is unhealthy. Do not proceed with the cluster upgrade before the blocking issues are resolved. For troubleshooting guidance, see Red Hat Ceph Storage 6 Troubleshooting Guide . Procedure Log in to a Controller node. Upgrade the cluster to the latest Red Hat Ceph Storage version by using Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 7 Upgrade Guide . Important Steps 1 to 4 of the upgrade procedure presented in Upgrading the Red Hat Ceph Storage cluster are not necessary when upgrading a Red Hat Ceph Storage cluster deployed with Red Hat OpenStack by director. Start at step 5 of the procedure. Wait until the Red Hat Ceph Storage container upgrade completes. Note Monitor the upgrade status by using the command sudo cephadm shell -- ceph orch upgrade status . 14.1.6. Upgrading NFS Ganesha when moving from Red Hat Ceph Storage 6 to 7 When Red Hat Ceph Storage is upgraded from Release 5 to 6, NFS Ganesha is not adopted by the Orchestrator. This means it remains under director control and must be moved manually to Release 7. Red Hat Ceph Storage 6 based NFS Ganesha with a Red Hat Ceph Storage 7 cluster is only supported during the upgrade period. Once you upgrade the Red Hat Ceph Storage cluster to 7, you must upgrade NFS Ganesha to use a Release 7 based container image.
Note This procedure only applies to environments that are using the Shared File Systems service (manila) with CephFS NFS. Upgrading the Red Hat Ceph Storage container for NFS Ganesha is mandatory in these environments. Procedure Log in to a Controller node. Inspect the ceph-nfs service: USD sudo pcs status | grep ceph-nfs Inspect the ceph-nfs systemd unit to confirm that it contains the Red Hat Ceph Storage 6 container image and tag: Create a file called /home/stack/ganesha_update_extravars.yaml with the following content: Replace <ceph_image_name> with the name of the Red Hat Ceph Storage container image. Replace <ceph_image_namespace> with the name of the Red Hat Ceph Storage container namespace. Replace <ceph_image_tag> with the name of the Red Hat Ceph Storage container tag. For example, in a typical environment, this content would have the following values: Save the file. Run the ceph-update-ganesha.yml playbook and provide the ganesha_update_extravars.yaml file for additional command parameters: Replace <stack> with the name of the overcloud stack. Verify that the ceph-nfs service is running: USD sudo pcs status | grep ceph-nfs Verify that the ceph-nfs systemd unit contains the Red Hat Ceph Storage 7 container image and tag: USD cat /etc/systemd/system/ceph-nfs@pacemaker.service | grep rhceph 14.2. External Red Hat Ceph Storage cluster environment Perform the following tasks if your Red Hat Ceph Storage cluster is external to your Red Hat OpenStack Platform deployment in your environment. 14.2.1. Updating the Red Hat Ceph Storage container image The container-image-prepare.yaml file contains the ContainerImagePrepare parameter and defines the Red Hat Ceph Storage containers. This file is used by the tripleo-container-image prepare command to define the rules for obtaining container images for the undercloud and overcloud. Update this file with the correct image version before updating your environment. Procedure Locate your container preparation file. The default name of this file is containers-prepare-parameter.yaml . Edit the container preparation file. Locate the ceph_tag parameter. The current entry should be similar to the following example: Update the ceph_tag parameter for Red Hat Ceph Storage 7: Edit the containers-image-prepare.yaml file and replace the Red Hat Ceph monitoring stack container related parameters with the following content: Save the file. 14.2.2. Running the container image prepare Complete the container image preparation process by running the director container preparation command. This prepares all container image configurations for the overcloud and retrieves the latest Red Hat Ceph Storage 7 container image. Note If you are using Red Hat Satellite Server to host RPMs and container images for your Red Hat OpenStack Platform (RHOSP) environment, do not perform this procedure. Update Satellite to include the Red Hat Ceph Storage 7 container image and update your containers-prepare-parameter.yaml file to reference the URL of the container image that is hosted on the Satellite server. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the container preparation command: USD openstack tripleo container image prepare -e <container_preparation_file> Replace <container_preparation_file> with the name of your file. The default file is containers-prepare-parameter.yaml .
Verify that the new Red Hat Ceph Storage image is present in the undercloud registry: USD openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print USD2}' If you have the Red Hat Ceph Storage Dashboard enabled, verify the new Red Hat monitoring stack images are present in the undercloud registry: USD openstack tripleo container image list -f value | awk -F '//' '/grafana|prometheus|alertmanager|node-exporter/ {print USD2}' 14.2.3. Upgrading NFS Ganesha when moving from Red Hat Ceph Storage 6 to 7 When Red Hat Ceph Storage is upgraded from Release 5 to 6, NFS Ganesha is not adopted by the Orchestrator. This means it remains under director control and must be moved manually to Release 7. Red Hat Ceph Storage 6 based NFS Ganesha with a Red Hat Ceph Storage 7 cluster is only supported during the upgrade period. Once you upgrade the Red Hat Ceph Storage cluster to 7, you must upgrade NFS Ganesha to use a Release 7 based container image. Note This procedure only applies to environments that are using the Shared File Systems service (manila) with CephFS NFS. Upgrading the Red Hat Ceph Storage container for NFS Ganesha is mandatory in these environments. Procedure Log in to a Controller node. Inspect the ceph-nfs service: USD sudo pcs status | grep ceph-nfs Inspect the ceph-nfs systemd unit to confirm that it contains the Red Hat Ceph Storage 6 container image and tag: Create a file called /home/stack/ganesha_update_extravars.yaml with the following content: Replace <ceph_image_name> with the name of the Red Hat Ceph Storage container image. Replace <ceph_image_namespace> with the name of the Red Hat Ceph Storage container namespace. Replace <ceph_image_tag> with the name of the Red Hat Ceph Storage container tag. For example, in a typical environment, this content would have the following values: Save the file. Run the ceph-update-ganesha.yml playbook and provide the ganesha_update_extravars.yaml file for additional command parameters: Replace <stack> with the name of the overcloud stack. Verify that the ceph-nfs service is running: USD sudo pcs status | grep ceph-nfs Verify that the ceph-nfs systemd unit contains the Red Hat Ceph Storage 7 container image and tag: USD cat /etc/systemd/system/ceph-nfs@pacemaker.service | grep rhceph
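Whether the cluster is director-deployed or external, it can be useful to confirm that every Ceph daemon is running a Release 7 build once the orchestrated upgrade and the NFS Ganesha update are complete. The following spot checks are a suggested addition rather than part of the official procedure; they use only standard cephadm and Ceph CLI commands, run from a node that has the admin keyring (for example, a Controller node in a director-deployed environment):
sudo cephadm shell -- ceph orch upgrade status   # confirms that no orchestrated upgrade is still in progress
sudo cephadm shell -- ceph versions              # every daemon should report the same Release 7 build
sudo cephadm shell -- ceph health detail         # the cluster should return to HEALTH_OK
If any daemon still reports an older version, rerun the orchestrated upgrade before continuing with the remaining Red Hat OpenStack Platform upgrade tasks.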
[ "- hosts: all gather_facts: false tasks: - name: Enable RHCS 7 tools repo ansible.builtin.command: | subscription-manager repos --disable=rhceph-6-tools-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms become: true - name: Update cephadm ansible.builtin.package: name: cephadm state: latest become: true", "ceph_namespace: registry.redhat.io ceph_image: rhceph-6-rhel9 ceph_tag: '6'", "ceph_namespace: registry.redhat.io ceph_image: rhceph-7-rhel9 ceph_tag: '7'", "ceph_alertmanager_image: ose-prometheus-alertmanager ceph_alertmanager_namespace: registry.redhat.io/openshift4 ceph_alertmanager_tag: v4.15 ceph_grafana_image: grafana-rhel9 ceph_grafana_namespace: registry.redhat.io/rhceph ceph_grafana_tag: latest ceph_node_exporter_image: ose-prometheus-node-exporter ceph_node_exporter_namespace: registry.redhat.io/openshift4 ceph_node_exporter_tag: v4.15 ceph_prometheus_image: ose-prometheus ceph_prometheus_namespace: registry.redhat.io/openshift4 ceph_prometheus_tag: v4.15", "source ~/stackrc", "sudo cephadm shell -- ceph config dump | grep image", "global basic container_image undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph:6-311 * mgr advanced mgr/cephadm/container_image_alertmanager undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-alertmanager:v4.12 * mgr advanced mgr/cephadm/container_image_base undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph mgr advanced mgr/cephadm/container_image_grafana undercloud-0.ctlplane.redhat.local:8787/rh-osbs/grafana:latest * mgr advanced mgr/cephadm/container_image_node_exporter undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-node-exporter:v4.12 * mgr advanced mgr/cephadm/container_image_prometheus undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus:v4.12", "sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_alertmanager <alertmanager_image> sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_grafana <grafana_image> sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_node_exporter <node_exporter_image> sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_prometheus <prometheus_image>", "sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_alertmanager undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-alertmanager:v4.15", "sudo cephadm shell -- ceph config dump | grep image", "cat /etc/systemd/system/[email protected] | grep -i container_image", "tripleo_cephadm_container_image: <ceph_image_name> tripleo_cephadm_container_ns: <ceph_image_namespace> tripleo_cephadm_container_tag: <ceph_image_tag>", "tripleo_cephadm_container_image: rhceph-7-rhel9 tripleo_cephadm_container_ns: undercloud-0.ctlplane.redhat.local:8787 tripleo_cephadm_container_tag: '7'", "ansible-playbook -i USDHOME/overcloud-deploy/<stack>/config-download/<stack>/tripleo-ansible-inventory.yaml /usr/share/ansible/tripleo-playbooks/ceph-update-ganesha.yml -e @USDHOME/overcloud-deploy/<stack>/config-download/<stack>/global_vars.yaml -e @USDHOME/overcloud-deploy/<stack>/config-download/<stack>/cephadm/cephadm-extra-vars-heat.yml -e @USDHOME/ganesha_update_extravars.yaml", "ceph_namespace: registry.redhat.io ceph_image: rhceph-6-rhel9 ceph_tag: '6'", "ceph_namespace: registry.redhat.io ceph_image: rhceph-7-rhel9 ceph_tag: '7'", "ceph_alertmanager_image: ose-prometheus-alertmanager ceph_alertmanager_namespace: registry.redhat.io/openshift4 ceph_alertmanager_tag: v4.15 ceph_grafana_image: 
grafana-rhel9 ceph_grafana_namespace: registry.redhat.io/rhceph ceph_grafana_tag: latest ceph_node_exporter_image: ose-prometheus-node-exporter ceph_node_exporter_namespace: registry.redhat.io/openshift4 ceph_node_exporter_tag: v4.15 ceph_prometheus_image: ose-prometheus ceph_prometheus_namespace: registry.redhat.io/openshift4 ceph_prometheus_tag: v4.15", "source ~/stackrc", "cat /etc/systemd/system/[email protected] | grep -i container_image", "tripleo_cephadm_container_image: <ceph_image_name> tripleo_cephadm_container_ns: <ceph_image_namespace> tripleo_cephadm_container_tag: <ceph_image_tag>", "tripleo_cephadm_container_image: rhceph-7-rhel9 tripleo_cephadm_container_ns: undercloud-0.ctlplane.redhat.local:8787 tripleo_cephadm_container_tag: '7'", "ansible-playbook -i USDHOME/overcloud-deploy/<stack>/config-download/<stack>/tripleo-ansible-inventory.yaml /usr/share/ansible/tripleo-playbooks/ceph-update-ganesha.yml -e @USDHOME/overcloud-deploy/<stack>/config-download/<stack>/global_vars.yaml -e @USDHOME/overcloud-deploy/<stack>/config-download/<stack>/cephadm/cephadm-extra-vars-heat.yml -e @USDHOME/ganesha_update_extravars.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/framework_for_upgrades_16.2_to_17.1/assembly_ceph-6-to-7_upgrade_post-upgrade-external-ceph
Chapter 8. Red Hat Enterprise Linux CoreOS (RHCOS)
Chapter 8. Red Hat Enterprise Linux CoreOS (RHCOS) 8.1. About RHCOS Red Hat Enterprise Linux CoreOS (RHCOS) represents the generation of single-purpose container operating system technology by providing the quality standards of Red Hat Enterprise Linux (RHEL) with automated, remote upgrade features. RHCOS is supported only as a component of OpenShift Container Platform 4.17 for all OpenShift Container Platform machines. RHCOS is the only supported operating system for OpenShift Container Platform control plane, or master, machines. While RHCOS is the default operating system for all cluster machines, you can create compute machines, which are also known as worker machines, that use RHEL as their operating system. There are two general ways RHCOS is deployed in OpenShift Container Platform 4.17: If you install your cluster on infrastructure that the installation program provisions, RHCOS images are downloaded to the target platform during installation. Suitable Ignition config files, which control the RHCOS configuration, are also downloaded and used to deploy the machines. If you install your cluster on infrastructure that you manage, you must follow the installation documentation to obtain the RHCOS images, generate Ignition config files, and use the Ignition config files to provision your machines. 8.1.1. Key RHCOS features The following list describes key features of the RHCOS operating system: Based on RHEL : The underlying operating system consists primarily of RHEL components. The same quality, security, and control measures that support RHEL also support RHCOS. For example, RHCOS software is in RPM packages, and each RHCOS system starts up with a RHEL kernel and a set of services that are managed by the systemd init system. Controlled immutability : Although it contains RHEL components, RHCOS is designed to be managed more tightly than a default RHEL installation. Management is performed remotely from the OpenShift Container Platform cluster. When you set up your RHCOS machines, you can modify only a few system settings. This controlled immutability allows OpenShift Container Platform to store the latest state of RHCOS systems in the cluster so it is always able to create additional machines and perform updates based on the latest RHCOS configurations. CRI-O container runtime : Although RHCOS contains features for running the OCI- and libcontainer-formatted containers that Docker requires, it incorporates the CRI-O container engine instead of the Docker container engine. By focusing on features needed by Kubernetes platforms, such as OpenShift Container Platform, CRI-O can offer specific compatibility with different Kubernetes versions. CRI-O also offers a smaller footprint and reduced attack surface than is possible with container engines that offer a larger feature set. At the moment, CRI-O is the only engine available within OpenShift Container Platform clusters. CRI-O can use either the runC or crun container runtime to start and manage containers. For information about how to enable crun, see the documentation for creating a ContainerRuntimeConfig CR. Set of container tools : For tasks such as building, copying, and otherwise managing containers, RHCOS replaces the Docker CLI tool with a compatible set of container tools. The podman CLI tool supports many container runtime features, such as running, starting, stopping, listing, and removing containers and container images. The skopeo CLI tool can copy, authenticate, and sign images. 
You can use the crictl CLI tool to work with containers and pods from the CRI-O container engine. While direct use of these tools in RHCOS is discouraged, you can use them for debugging purposes. rpm-ostree upgrades : RHCOS features transactional upgrades using the rpm-ostree system. Updates are delivered by means of container images and are part of the OpenShift Container Platform update process. When deployed, the container image is pulled, extracted, and written to disk, then the bootloader is modified to boot into the new version. The machine will reboot into the update in a rolling manner to ensure cluster capacity is minimally impacted. bootupd firmware and bootloader updater : Package managers and hybrid systems such as rpm-ostree do not update the firmware or the bootloader. With bootupd , RHCOS users have access to a cross-distribution, system-agnostic update tool that manages firmware and boot updates in UEFI and legacy BIOS boot modes that run on modern architectures, such as x86_64, ppc64le, and aarch64. For information about how to install bootupd , see the documentation for Updating the bootloader using bootupd . Updated through the Machine Config Operator : In OpenShift Container Platform, the Machine Config Operator handles operating system upgrades. Instead of upgrading individual packages, as is done with yum upgrades, rpm-ostree delivers upgrades of the OS as an atomic unit. The new OS deployment is staged during upgrades and goes into effect on the next reboot. If something goes wrong with the upgrade, a single rollback and reboot returns the system to the previous state. RHCOS upgrades in OpenShift Container Platform are performed during cluster updates. For RHCOS systems, the layout of the rpm-ostree file system has the following characteristics: /usr is where the operating system binaries and libraries are stored and is read-only. We do not support altering this. /etc , /boot , /var are writable on the system but only intended to be altered by the Machine Config Operator. /var/lib/containers is the graph storage location for storing container images. 8.1.2. Choosing how to configure RHCOS RHCOS is designed to deploy on an OpenShift Container Platform cluster with a minimal amount of user configuration. In its most basic form, this consists of: Starting with a provisioned infrastructure, such as on AWS, or provisioning the infrastructure yourself. Supplying a few pieces of information, such as credentials and cluster name, in an install-config.yaml file when running openshift-install . Because RHCOS systems in OpenShift Container Platform are designed to be fully managed from the OpenShift Container Platform cluster after that, directly changing an RHCOS machine is discouraged. Although limited direct access to RHCOS machines in a cluster can be accomplished for debugging purposes, you should not directly configure RHCOS systems. Instead, if you need to add or change features on your OpenShift Container Platform nodes, consider making changes in the following ways: Kubernetes workload objects, such as DaemonSet and Deployment : If you need to add services or other user-level features to your cluster, consider adding them as Kubernetes workload objects. Keeping those features outside of specific node configurations is the best way to reduce the risk of breaking the cluster on subsequent upgrades. Day-2 customizations : If possible, bring up a cluster without making any customizations to cluster nodes and make necessary node changes after the cluster is up.
Those changes are easier to track later and less likely to break updates. Creating machine configs or modifying Operator custom resources are ways of making these customizations. Day-1 customizations : For customizations that you must implement when the cluster first comes up, there are ways of modifying your cluster so changes are implemented on first boot. Day-1 customizations can be done through Ignition configs and manifest files during openshift-install or by adding boot options during ISO installs provisioned by the user. Here are examples of customizations you could do on day 1: Kernel arguments : If particular kernel features or tuning is needed on nodes when the cluster first boots. Disk encryption : If your security needs require that the root file system on the nodes are encrypted, such as with FIPS support. Kernel modules : If a particular hardware device, such as a network card or video card, does not have a usable module available by default in the Linux kernel. Chronyd : If you want to provide specific clock settings to your nodes, such as the location of time servers. To accomplish these tasks, you can augment the openshift-install process to include additional objects such as MachineConfig objects. Those procedures that result in creating machine configs can be passed to the Machine Config Operator after the cluster is up. Note The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.1.3. Choosing how to deploy RHCOS Differences between RHCOS installations for OpenShift Container Platform are based on whether you are deploying on an infrastructure provisioned by the installer or by the user: Installer-provisioned : Some cloud environments offer preconfigured infrastructures that allow you to bring up an OpenShift Container Platform cluster with minimal configuration. For these types of installations, you can supply Ignition configs that place content on each node so it is there when the cluster first boots. User-provisioned : If you are provisioning your own infrastructure, you have more flexibility in how you add content to a RHCOS node. For example, you could add kernel arguments when you boot the RHCOS ISO installer to install each system. However, in most cases where configuration is required on the operating system itself, it is best to provide that configuration through an Ignition config. The Ignition facility runs only when the RHCOS system is first set up. After that, Ignition configs can be supplied later using the machine config. 8.1.4. About Ignition Ignition is the utility that is used by RHCOS to manipulate disks during initial configuration. 
It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. On first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines. Whether you are installing your cluster or adding machines to it, Ignition always performs the initial configuration of the OpenShift Container Platform cluster machines. Most of the actual system setup happens on each machine itself. For each machine, Ignition takes the RHCOS image and boots the RHCOS kernel. Options on the kernel command line identify the type of deployment and the location of the Ignition-enabled initial RAM disk (initramfs). 8.1.4.1. How Ignition works To create machines by using Ignition, you need Ignition config files. The OpenShift Container Platform installation program creates the Ignition config files that you need to deploy your cluster. These files are based on the information that you provide to the installation program directly or through an install-config.yaml file. The way that Ignition configures machines is similar to how tools like cloud-init or Linux Anaconda kickstart configure systems, but with some important differences: Ignition runs from an initial RAM disk that is separate from the system you are installing to. Because of that, Ignition can repartition disks, set up file systems, and perform other changes to the machine's permanent file system. In contrast, cloud-init runs as part of a machine init system when the system boots, so making foundational changes to things like disk partitions cannot be done as easily. With cloud-init, it is also difficult to reconfigure the boot process while you are in the middle of the node boot process. Ignition is meant to initialize systems, not change existing systems. After a machine initializes and the kernel is running from the installed system, the Machine Config Operator from the OpenShift Container Platform cluster completes all future machine configuration. Instead of completing a defined set of actions, Ignition implements a declarative configuration. It checks that all partitions, files, services, and other items are in place before the new machine starts. It then makes the changes, like copying files to disk that are necessary for the new machine to meet the specified configuration. After Ignition finishes configuring a machine, the kernel keeps running but discards the initial RAM disk and pivots to the installed system on disk. All of the new system services and other features start without requiring a system reboot. Because Ignition confirms that all new machines meet the declared configuration, you cannot have a partially configured machine. If a machine setup fails, the initialization process does not finish, and Ignition does not start the new machine. Your cluster will never contain partially configured machines. If Ignition cannot complete, the machine is not added to the cluster. You must add a new machine instead. This behavior prevents the difficult case of debugging a machine when the results of a failed configuration task are not known until something that depended on it fails at a later date. If there is a problem with an Ignition config that causes the setup of a machine to fail, Ignition will not try to use the same config to set up another machine. For example, a failure could result from an Ignition config made up of a parent and child config that both want to create the same file. 
A failure in such a case would prevent that Ignition config from being used again to set up an other machines until the problem is resolved. If you have multiple Ignition config files, you get a union of that set of configs. Because Ignition is declarative, conflicts between the configs could cause Ignition to fail to set up the machine. The order of information in those files does not matter. Ignition will sort and implement each setting in ways that make the most sense. For example, if a file needs a directory several levels deep, if another file needs a directory along that path, the later file is created first. Ignition sorts and creates all files, directories, and links by depth. Because Ignition can start with a completely empty hard disk, it can do something cloud-init cannot do: set up systems on bare metal from scratch using features such as PXE boot. In the bare metal case, the Ignition config is injected into the boot partition so that Ignition can find it and configure the system correctly. 8.1.4.2. The Ignition sequence The Ignition process for an RHCOS machine in an OpenShift Container Platform cluster involves the following steps: The machine gets its Ignition config file. Control plane machines get their Ignition config files from the bootstrap machine, and worker machines get Ignition config files from a control plane machine. Ignition creates disk partitions, file systems, directories, and links on the machine. It supports RAID arrays but does not support LVM volumes. Ignition mounts the root of the permanent file system to the /sysroot directory in the initramfs and starts working in that /sysroot directory. Ignition configures all defined file systems and sets them up to mount appropriately at runtime. Ignition runs systemd temporary files to populate required files in the /var directory. Ignition runs the Ignition config files to set up users, systemd unit files, and other configuration files. Ignition unmounts all components in the permanent system that were mounted in the initramfs. Ignition starts up the init process of the new machine, which in turn starts up all other services on the machine that run during system boot. At the end of this process, the machine is ready to join the cluster and does not require a reboot. 8.2. Viewing Ignition configuration files To see the Ignition config file used to deploy the bootstrap machine, run the following command: USD openshift-install create ignition-configs --dir USDHOME/testconfig After you answer a few questions, the bootstrap.ign , master.ign , and worker.ign files appear in the directory you entered. To see the contents of the bootstrap.ign file, pipe it through the jq filter. Here's a snippet from that file: USD cat USDHOME/testconfig/bootstrap.ign | jq { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc...." ] } ] }, "storage": { "files": [ { "overwrite": false, "path": "/etc/motd", "user": { "name": "root" }, "append": [ { "source": "data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==" } ], "mode": 420 }, ... 
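If you only want to inspect one entry rather than page through the whole file, you can also filter the storage section by path. This is an optional convenience, not part of the documented procedure; the command below assumes the same installation directory and the /etc/motd entry shown in the snippet above:
jq '.storage.files[] | select(.path == "/etc/motd")' ~/testconfig/bootstrap.ign
The filtered output still contains the base64-encoded source string, which you can decode as shown next.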
To decode the contents of a file listed in the bootstrap.ign file, pipe the base64-encoded data string representing the contents of that file to the base64 -d command. Here's an example using the contents of the /etc/motd file added to the bootstrap machine from the output shown above: USD echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode Example output This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service Repeat those commands on the master.ign and worker.ign files to see the source of Ignition config files for each of those machine types. You should see a line like the following for the worker.ign , identifying how it gets its Ignition config from the bootstrap machine: "source": "https://api.myign.develcluster.example.com:22623/config/worker", Here are a few things you can learn from the bootstrap.ign file: Format: The format of the file is defined in the Ignition config spec . Files of the same format are used later by the MCO to merge changes into a machine's configuration. Contents: Because the bootstrap machine serves the Ignition configs for other machines, both master and worker machine Ignition config information is stored in the bootstrap.ign , along with the bootstrap machine's configuration. Size: The file is more than 1300 lines long, with path to various types of resources. The content of each file that will be copied to the machine is actually encoded into data URLs, which tends to make the content a bit clumsy to read. (Use the jq and base64 commands shown previously to make the content more readable.) Configuration: The different sections of the Ignition config file are generally meant to contain files that are just dropped into a machine's file system, rather than commands to modify existing files. For example, instead of having a section on NFS that configures that service, you would just add an NFS configuration file, which would then be started by the init process when the system comes up. users: A user named core is created, with your SSH key assigned to that user. This allows you to log in to the cluster with that user name and your credentials. storage: The storage section identifies files that are added to each machine. A few notable files include /root/.docker/config.json (which provides credentials your cluster needs to pull from container image registries) and a bunch of manifest files in /opt/openshift/manifests that are used to configure your cluster. systemd: The systemd section holds content used to create systemd unit files. Those files are used to start up services at boot time, as well as manage those services on running systems. Primitives: Ignition also exposes low-level primitives that other tools can build on. 8.3. Changing Ignition configs after installation Machine config pools manage a cluster of nodes and their corresponding machine configs. Machine configs contain configuration information for a cluster. 
To list all machine config pools that are known: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False To list all machine configs: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m The Machine Config Operator acts somewhat differently than Ignition when it comes to applying these machine configs. The machine configs are read in order (from 00* to 99*). Labels inside the machine configs identify the type of node each is for (master or worker). If the same file appears in multiple machine config files, the last one wins. So, for example, any file that appears in a 99* file would replace the same file that appeared in a 00* file. The input MachineConfig objects are unioned into a "rendered" MachineConfig object, which will be used as a target by the operator and is the value you can see in the machine config pool. To see what files are being managed from a machine config, look for "Path:" inside a particular MachineConfig object. For example: USD oc describe machineconfigs 01-worker-container-runtime | grep Path: Example output Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf Be sure to give the machine config file a later name (such as 10-worker-container-runtime). Keep in mind that the content of each file is in URL-style data. Then apply the new machine config to the cluster.
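To make the last point concrete, here is a minimal sketch of a custom machine config that drops a single file onto worker nodes. The name, label value, and file contents are hypothetical examples rather than output from a running cluster; the structure mirrors the rendered machine configs listed above (Ignition version 3.2.0, files addressed by path, URL-style data for the contents):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker    # applies to the worker pool
  name: 10-worker-custom-motd                         # later name, so it wins over the 00* files
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/motd
          mode: 420                                   # decimal for 0644
          overwrite: true
          contents:
            source: data:,Custom%20worker%20MOTD      # URL-style data, as noted above
Applying it with oc apply -f 10-worker-custom-motd.yaml causes the Machine Config Operator to render a new worker configuration and roll it out; you can watch the progress with oc get machineconfigpool worker.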
[ "openshift-install create ignition-configs --dir USDHOME/testconfig", "cat USDHOME/testconfig/bootstrap.ign | jq { \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc....\" ] } ] }, \"storage\": { \"files\": [ { \"overwrite\": false, \"path\": \"/etc/motd\", \"user\": { \"name\": \"root\" }, \"append\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==\" } ], \"mode\": 420 },", "echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode", "This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service", "\"source\": \"https://api.myign.develcluster.example.com:22623/config/worker\",", "USD oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m", "oc describe machineconfigs 01-worker-container-runtime | grep Path:", "Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/architecture/architecture-rhcos
Chapter 6. Installing a cluster on Alibaba Cloud into an existing VPC
Chapter 6. Installing a cluster on Alibaba Cloud into an existing VPC In OpenShift Container Platform version 4.14, you can install a cluster into an existing Alibaba Virtual Private Cloud (VPC) on Alibaba Cloud Services. The installation program provisions the required infrastructure, which can then be customized. To customize the VPC installation, modify the parameters in the 'install-config.yaml' file before you install the cluster. Note The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You registered your domain . If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials . 6.2. Using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into existing subnets in an existing Virtual Private Cloud (VPC) in the Alibaba Cloud Platform. By deploying OpenShift Container Platform into an existing Alibaba VPC, you can avoid limit constraints in new accounts and more easily adhere to your organization's operational constraints. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. You must configure networking using vSwitches. 6.2.1. Requirements for using your VPC The union of the VPC CIDR block and the machine network CIDR must be non-empty. The vSwitches must be within the machine network. The installation program does not create the following components: VPC vSwitches Route table NAT gateway Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 6.2.2. VPC validation To ensure that the vSwitches you provide are suitable, the installation program confirms the following data: All the vSwitches that you specify must exist. You have provided one or more vSwitches for control plane machines and compute machines. The vSwitches' CIDRs belong to the machine CIDR that you specified. 6.2.3. Division of permissions Some individuals can create different resources in your cloud than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs or vSwitches. 6.2.4. 
Isolation between clusters If you deploy OpenShift Container Platform into an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.5.1. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select alibabacloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Provide a descriptive name for your cluster. Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the install-config.yaml file to set the credentialsMode parameter to Manual : Example install-config.yaml configuration file with credentialsMode set to Manual apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 Add this line to set the credentialsMode to Manual . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Alibaba Cloud 6.5.2. Sample customized install-config.yaml file for Alibaba Cloud You can customize the installation configuration file ( install-config.yaml ) to specify more details about your cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{"auths": {"cloud.openshift.com": {"auth": ... }' 8 sshKey: | ssh-rsa AAAA... 9 1 Required. The installation program prompts you for a cluster name. 2 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 3 Optional. Specify parameters for machine pools that do not define their own platform configuration. 4 Required. The installation program prompts you for the region to deploy the cluster to. 5 Optional. Specify an existing resource group where the cluster should be installed. 8 Required. The installation program prompts you for the pull secret. 9 Optional. The installation program prompts you for the SSH key value that you use to access the machines in your cluster. 6 7 Optional. These are example vswitchID values. 6.5.3. Generating the required installation manifests You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the directory in which the installation program creates files. 6.5.4. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.5.5. Creating credentials for OpenShift Container Platform components with the ccoctl tool You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster. Added the AccessKeyID ( access_key_id ) and AccessKeySecret ( access_key_secret ) of that RAM user into the ~/.alibabacloud/credentials file on your local computer. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: Run the following command to use the tool: USD ccoctl alibabacloud create-ram-users \ --name <name> \ 1 --region=<alibaba_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> 4 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the Alibaba Cloud region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Specify the directory where the generated component credentials secrets will be placed. 
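For illustration, a fully filled-in invocation consistent with the example output shown below might look like the following. The name, region, and directory paths are hypothetical values chosen for this sketch, not required settings:

ccoctl alibabacloud create-ram-users \
    --name user1-alicloud \
    --region=ap-southeast-1 \
    --credentials-requests-dir=./credrequests \
    --output-dir=./user1-alicloud

The note and example output that follow apply to this form of the command as well.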
Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Example output 2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml ... Note A RAM user can have up to two AccessKeys at the same time. If you run ccoctl alibabacloud create-ram-users more than twice, the previously generated manifests secret becomes stale and you must reapply the newly generated secrets. Verify that the OpenShift Container Platform secrets are created: USD ls <path_to_ccoctl_output_dir>/manifests Example output openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies. Copy the generated credential files to the target manifests directory: USD cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/ where: <path_to_ccoctl_output_dir> Specifies the directory created by the ccoctl alibabacloud create-ram-users command. <path_to_installation_dir> Specifies the directory in which the installation program creates files. 6.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.8. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin /validating-an-installation.adoc 6.9. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 6.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console 6.11. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting .
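Before you move on to validating and customizing the cluster, you can optionally confirm from the CLI that the nodes and cluster Operators are healthy. These are standard oc commands shown here as a convenience, not an additional required step:

oc get nodes
oc get clusteroperators

All nodes should report a Ready status and all cluster Operators should report True in the AVAILABLE column before you proceed.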
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl alibabacloud create-ram-users --name <name> \\ 1 --region=<alibaba_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> 4", "2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved 
credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml", "ls <path_to_ccoctl_output_dir>/manifests", "openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml", "cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_alibaba/installing-alibaba-vpc
Chapter 5. Default resource requirements for Red Hat Advanced Cluster Security Cloud Service
Chapter 5. Default resource requirements for Red Hat Advanced Cluster Security Cloud Service 5.1. General requirements for RHACS Cloud Service Before you can install Red Hat Advanced Cluster Security Cloud Service, your system must meet several requirements. Warning You must not install RHACS Cloud Service on: Amazon Elastic File System (Amazon EFS). Use the Amazon Elastic Block Store (Amazon EBS) with the default gp2 volume type instead. Older CPUs that do not have the Streaming SIMD Extensions (SSE) 4.2 instruction set. For example, Intel processors older than Sandy Bridge and AMD processors older than Bulldozer . These processors were released in 2011. To install RHACS Cloud Service, you must have one of the following systems: OpenShift Container Platform version 4.12 or later, and cluster nodes with a supported operating system of Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) A supported managed Kubernetes platform, and cluster nodes with a supported operating system of Amazon Linux, CentOS, Container-Optimized OS from Google, Red Hat Enterprise Linux CoreOS (RHCOS), Debian, Red Hat Enterprise Linux (RHEL), or Ubuntu For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . The following minimum requirements and suggestions apply to cluster nodes. Architecture Supported architectures are amd64 , ppc64le , or s390x . Note Secured cluster services are supported on IBM Power ( ppc64le ), IBM Z ( s390x ), and IBM(R) LinuxONE ( s390x ) clusters. Processor 3 CPU cores are required. Memory 6 GiB of RAM is required. Note See the default memory and CPU requirements for each component and ensure that the node size can support them. Storage For RHACS Cloud Service, a persistent volume claim (PVC) is not required. However, a PVC is strongly recommended if you have secured clusters with Scanner V4 enabled. Use Solid-State Drives (SSDs) for best performance. However, you can use another storage type if you do not have SSDs available. Important You must not use Ceph FS storage with RHACS Cloud Service. Red Hat recommends using RBD block mode PVCs for RHACS Cloud Service. If you plan to install RHACS Cloud Service by using Helm charts, you must meet the following requirements: You must have Helm command-line interface (CLI) v3.2 or newer, if you are installing or configuring RHACS Cloud Service using Helm charts. Use the helm version command to verify the version of Helm you have installed. You must have access to the Red Hat Container Registry. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . 5.2. Secured cluster services Secured cluster services contain the following components: Sensor Admission controller Collector Scanner (optional) Scanner V4 (optional) If you use a web proxy or firewall, you must ensure that secured clusters and Central can communicate on HTTPS port 443. 5.2.1. Sensor Sensor monitors your Kubernetes and OpenShift Container Platform clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with the other Red Hat Advanced Cluster Security for Kubernetes components. CPU and memory requirements The following table lists the minimum CPU and memory values required to install and run sensor on secured clusters. Sensor CPU Memory Request 2 cores 4 GiB Limit 4 cores 8 GiB 5.2.2. 
Admission controller The Admission controller prevents users from creating workloads that violate policies you configure. CPU and memory requirements By default, the admission control service runs 3 replicas. The following table lists the request and limits for each replica. Admission controller CPU Memory Request 0.05 cores 100 MiB Limit 0.5 cores 500 MiB 5.2.3. Collector Collector monitors runtime activity on each node in your secured clusters as a DaemonSet. It connects to Sensor to report this information. The collector pod has three containers. The first container is collector, which monitors and reports the runtime activity on the node. The other two are compliance and node-inventory. Collection requirements To use the CORE_BPF collection method, the base kernel must support BTF, and the BTF file must be available to collector. In general, the kernel version must be later than 5.8 (4.18 for RHEL nodes) and the CONFIG_DEBUG_INFO_BTF configuration option must be set. Collector looks for the BTF file in the standard locations shown in the following list: Example 5.1. BTF file locations /sys/kernel/btf/vmlinux /boot/vmlinux-<kernel-version> /lib/modules/<kernel-version>/vmlinux-<kernel-version> /lib/modules/<kernel-version>/build/vmlinux /usr/lib/modules/<kernel-version>/kernel/vmlinux /usr/lib/debug/boot/vmlinux-<kernel-version> /usr/lib/debug/boot/vmlinux-<kernel-version>.debug /usr/lib/debug/lib/modules/<kernel-version>/vmlinux If any of these files exists, it is likely that the kernel has BTF support and CORE_BPF is configurable. CPU and memory requirements By default, the collector pod runs 3 containers. The following tables list the request and limits for each container and the total for each collector pod. Collector container Type CPU Memory Request 0.06 cores 320 MiB Limit 0.9 cores 1000 MiB Compliance container Type CPU Memory Request 0.01 cores 10 MiB Limit 1 core 2000 MiB Node-inventory container Type CPU Memory Request 0.01 cores 10 MiB Limit 1 core 500 MiB Total collector pod requirements Type CPU Memory Request 0.07 cores 340 MiB Limit 2.75 cores 3500 MiB 5.2.4. Scanner CPU and memory requirements The requirements in this table are based on the default of 3 replicas. StackRox Scanner CPU Memory Request 3 cores 4500 MiB Limit 6 cores 12 GiB The StackRox Scanner requires Scanner DB (PostgreSQL 15) to store data. The following table lists the minimum memory and storage values required to install and run Scanner DB. Scanner DB CPU Memory Request 0.2 cores 512 MiB Limit 2 cores 4 GiB 5.2.5. Scanner V4 Scanner V4 is optional. If Scanner V4 is installed on secured clusters, the following requirements apply. CPU, memory, and storage requirements Scanner V4 Indexer The requirements in this table are based on the default of 2 replicas. Scanner V4 Indexer CPU Memory Request 2 cores 3000 MiB Limit 4 cores 6 GiB Scanner V4 DB Scanner V4 requires Scanner V4 DB (PostgreSQL 15) to store data. The following table lists the minimum CPU, memory, and storage values required to install and run Scanner V4 DB. For Scanner V4 DB, a PVC is not required, but it is strongly recommended because it ensures optimal performance. Scanner V4 DB CPU Memory Storage Request 0.2 cores 2 GiB 10 GiB Limit 2 cores 4 GiB 10 GiB
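Regarding the CORE_BPF collection requirements described in the Collector section above, a quick informal check on a node can tell you whether the kernel exposes BTF information. The commands below are a convenience sketch rather than an official RHACS verification procedure, and the location of the kernel configuration file varies by distribution:

ls /sys/kernel/btf/vmlinux
grep CONFIG_DEBUG_INFO_BTF /boot/config-$(uname -r)

If the vmlinux BTF file exists and the configuration option is set to y, the kernel most likely supports the CORE_BPF collection method.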
[ "/sys/kernel/btf/vmlinux /boot/vmlinux-<kernel-version> /lib/modules/<kernel-version>/vmlinux-<kernel-version> /lib/modules/<kernel-version>/build/vmlinux /usr/lib/modules/<kernel-version>/kernel/vmlinux /usr/lib/debug/boot/vmlinux-<kernel-version> /usr/lib/debug/boot/vmlinux-<kernel-version>.debug /usr/lib/debug/lib/modules/<kernel-version>/vmlinux" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/rhacs_cloud_service/acscs-default-requirements
Chapter 1. Understanding authentication at runtime
Chapter 1. Understanding authentication at runtime When building images, you might need to define authentication in the following scenarios: Authenticating to a container registry Pulling source code from Git The authentication is done through the definition of secrets in which the required sensitive data is stored. 1.1. Build secret annotation You can add an annotation build.shipwright.io/referenced.secret: "true" to a build secret. Based on this annotation, the build controller takes a reconcile action when an event, such as create, update, or delete triggers for the build secret. The following example shows the usage of an annotation with a secret: apiVersion: v1 data: .dockerconfigjson: <pull_secret> 1 kind: Secret metadata: annotations: build.shipwright.io/referenced.secret: "true" 2 name: secret-docker type: kubernetes.io/dockerconfigjson 1 Base64-encoded pull secret. 2 The value of the build.shipwright.io/referenced.secret annotation is set to true . This annotation filters secrets which are not referenced in a build instance. For example, if a secret does not have this annotation, the build controller does not reconcile even if the event is triggered for the secret. Reconciling on triggering of events allows the build controller to re-trigger validations on the build configuration, helping you to understand if a dependency is missing. 1.2. Authentication to Git repositories You can define the following types of authentication for a Git repository: Basic authentication Secure Shell (SSH) authentication You can also configure Git secrets with both types of authentication in your Build CR. 1.2.1. Basic authentication With basic authentication, you must configure the user name and password of the Git repository. The following example shows the usage of basic authentication for Git: apiVersion: v1 kind: Secret metadata: name: secret-git-basic-auth annotations: build.shipwright.io/referenced.secret: "true" type: kubernetes.io/basic-auth 1 stringData: 2 username: <cleartext_username> password: <cleartext_password> 1 The type of the Kubernetes secret. 2 The field to store your user name and password in clear text. 1.2.2. SSH authentication With SSH authentication, you must configure the Tekton annotations to specify the hostname of the Git repository provider for use. For example, github.com for GitHub or gitlab.com for GitLab. The following example shows the usage of SSH authentication for Git: apiVersion: v1 kind: Secret metadata: name: secret-git-ssh-auth annotations: build.shipwright.io/referenced.secret: "true" type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 # Insert ssh private key, base64 encoded 1 The type of the Kubernetes secret. 2 Base64 encoding of the SSH key used to authenticate into Git. You can generate this key by using the base64 ~/.ssh/id_rsa.pub command, where ~/.ssh/id_rsa.pub denotes the default location of the key that is generally used to authenticate to Git. 1.2.3. Usage of Git secret After creating a secret in the relevant namespace, you can reference it in your Build custom resource (CR). You can configure a Git secret with both types of authentication. 
The following example shows the usage of a Git secret with SSH authentication type: apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: [email protected]:userjohn/newtaxi.git cloneSecret: secret-git-ssh-auth The following example shows the usage of a Git secret with basic authentication type: apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: https://gitlab.com/userjohn/newtaxi.git cloneSecret: secret-git-basic-auth 1.3. Authentication to container registries To push images to a private container registry, you must define a secret in the respective namespace and then reference it in your Build custom resource (CR). Procedure Run the following command to generate a secret: USD oc --namespace <namespace> create secret docker-registry <container_registry_secret_name> \ --docker-server=<registry_host> \ 1 --docker-username=<username> \ 2 --docker-password=<password> \ 3 --docker-email=<email_address> 1 The <registry_host> value denotes the URL in this format https://<registry_server>/<registry_host> . 2 The <username> value is the user ID. 3 The <password> value can be your container registry password or an access token. Run the following command to annotate the secret: USD oc --namespace <namespace> annotate secrets <container_registry_secret_name> build.shipwright.io/referenced.secret='true' Set the value of the spec.output.pushSecret field to the secret name in your Build CR: apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build # ... output: image: <path_to_image> pushSecret: <container_registry_secret_name> 1.4. Role-based access control The release deployment YAML file includes two cluster-wide roles for using Builds objects. The following roles are installed by default: shpwright-build-aggregate-view : Grants you read access to the Builds resources, such as BuildStrategy , ClusterBuildStrategy , Build , and BuildRun . This role is aggregated to the Kubernetes view role. shipwright-build-aggregate-edit : Grants you write access to the Builds resources that are configured at namespace level. The build resources include BuildStrategy , Build , and BuildRun . Read access is granted to all ClusterBuildStrategy resources. This role is aggregated to the Kubernetes edit and admin roles. Only cluster administrators have write access to the ClusterBuildStrategy resources. You can change this setting by creating a separate Kubernetes ClusterRole role with these permissions and binding the role to appropriate users.
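A minimal sketch of that approach is shown below. The role name and user name are hypothetical, and the resource name assumes the clusterbuildstrategies resource in the shipwright.io API group; adjust the verbs and subjects to match your own policy:

oc create clusterrole clusterbuildstrategy-editor \
    --verb=create,update,patch,delete \
    --resource=clusterbuildstrategies.shipwright.io

oc adm policy add-cluster-role-to-user clusterbuildstrategy-editor <username>

Users bound to this role can then manage ClusterBuildStrategy resources without holding full cluster administrator privileges.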
[ "apiVersion: v1 data: .dockerconfigjson: <pull_secret> 1 kind: Secret metadata: annotations: build.shipwright.io/referenced.secret: \"true\" 2 name: secret-docker type: kubernetes.io/dockerconfigjson", "apiVersion: v1 kind: Secret metadata: name: secret-git-basic-auth annotations: build.shipwright.io/referenced.secret: \"true\" type: kubernetes.io/basic-auth 1 stringData: 2 username: <cleartext_username> password: <cleartext_password>", "apiVersion: v1 kind: Secret metadata: name: secret-git-ssh-auth annotations: build.shipwright.io/referenced.secret: \"true\" type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 # Insert ssh private key, base64 encoded", "apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: [email protected]:userjohn/newtaxi.git cloneSecret: secret-git-ssh-auth", "apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: https://gitlab.com/userjohn/newtaxi.git cloneSecret: secret-git-basic-auth", "oc --namespace <namespace> create secret docker-registry <container_registry_secret_name> --docker-server=<registry_host> \\ 1 --docker-username=<username> \\ 2 --docker-password=<password> \\ 3 --docker-email=<email_address>", "oc --namespace <namespace> annotate secrets <container_registry_secret_name> build.shipwright.io/referenced.secret='true'", "apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build # output: image: <path_to_image> pushSecret: <container_registry_secret_name>" ]
https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.3/html/authentication/understanding-authentication-at-runtime
Appendix D. Ceph File System client configuration reference
Appendix D. Ceph File System client configuration reference This section lists configuration options for Ceph File System (CephFS) FUSE clients. Set them in the Ceph configuration file under the [client] section. client_acl_type Description Set the ACL type. Currently, only possible value is posix_acl to enable POSIX ACL, or an empty string. This option only takes effect when the fuse_default_permissions is set to false . Type String Default "" (no ACL enforcement) client_cache_mid Description Set the client cache midpoint. The midpoint splits the least recently used lists into a hot and warm list. Type Float Default 0.75 client_cache size Description Set the number of inodes that the client keeps in the metadata cache. Type Integer Default 16384 (16 MB) client_caps_release_delay Description Set the delay between capability releases in seconds. The delay sets how many seconds a client waits to release capabilities that it no longer needs in case the capabilities are needed for another user space operation. Type Integer Default 5 (seconds) client_debug_force_sync_read Description If set to true , clients read data directly from OSDs instead of using a local page cache. Type Boolean Default false client_dirsize_rbytes Description If set to true , use the recursive size of a directory (that is, total of all descendants). Type Boolean Default true client_max_inline_size Description Set the maximum size of inlined data stored in a file inode rather than in a separate data object in RADOS. This setting only applies if the inline_data flag is set on the MDS map. Type Integer Default 4096 client_metadata Description Comma-delimited strings for client metadata sent to each MDS, in addition to the automatically generated version, host name, and other metadata. Type String Default "" (no additional metadata) client_mount_gid Description Set the group ID of CephFS mount. Type Integer Default -1 client_mount_timeout Description Set the timeout for CephFS mount in seconds. Type Float Default 300.0 client_mount_uid Description Set the user ID of CephFS mount. Type Integer Default -1 client_mountpoint Description An alternative to the -r option of the ceph-fuse command. Type String Default / client_oc Description Enable object caching. Type Boolean Default true client_oc_max_dirty Description Set the maximum number of dirty bytes in the object cache. Type Integer Default 104857600 (100MB) client_oc_max_dirty_age Description Set the maximum age in seconds of dirty data in the object cache before writeback. Type Float Default 5.0 (seconds) client_oc_max_objects Description Set the maximum number of objects in the object cache. Type Integer Default 1000 client_oc_size Description Set how many bytes of data will the client cache. Type Integer Default 209715200 (200 MB) client_oc_target_dirty Description Set the target size of dirty data. Red Hat recommends keeping this number low. Type Integer Default 8388608 (8MB) client_permissions Description Check client permissions on all I/O operations. Type Boolean Default true client_quota_df Description Report root directory quota for the statfs operation. Type Boolean Default true client_readahead_max_bytes Description Set the maximum number of bytes that the kernel reads ahead for future read operations. Overridden by the client_readahead_max_periods setting. Type Integer Default 0 (unlimited) client_readahead_max_periods Description Set the number of file layout periods (object size * number of stripes) that the kernel reads ahead. 
Overrides the client_readahead_max_bytes setting. Type Integer Default 4 client_readahead_min Description Set the minimum number of bytes that the kernel reads ahead. Type Integer Default 131072 (128KB) client_snapdir Description Set the snapshot directory name. Type String Default ".snap" client_tick_interval Description Set the interval in seconds between capability renewal and other upkeep. Type Float Default 1.0 client_use_random_mds Description Choose random MDS for each request. Type Boolean Default false fuse_default_permissions Description When set to false , the ceph-fuse utility does its own permissions checking, instead of relying on the permissions enforcement in FUSE. Set to false together with the client acl type=posix_acl option to enable POSIX ACL. Type Boolean Default true Developer Options These options are internal. They are listed here only to complete the list of options. client_debug_getattr_caps Description Check if the reply from the MDS contains required capabilities. Type Boolean Default false client_debug_inject_tick_delay Description Add artificial delay between client ticks. Type Integer Default 0 client_inject_fixed_oldest_tid Description, Type Boolean Default false client_inject_release_failure Description, Type Boolean Default false client_trace Description The path to the trace file for all file operations. The output is designed to be used by the Ceph synthetic client. See the ceph-syn(8) manual page for details. Type String Default "" (disabled)
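As an illustration of the convention described at the beginning of this appendix, a client-side ceph.conf fragment that tunes a few of these options might look like the following. The values are arbitrary examples for the sketch, not recommended settings:

[client]
    client_cache_size = 32768
    client_oc_size = 314572800
    client_snapdir = .mysnap
    client_acl_type = posix_acl
    fuse_default_permissions = false

Note that, as described above, POSIX ACL enforcement only takes effect when client_acl_type = posix_acl is combined with fuse_default_permissions = false.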
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/ceph-file-system-client-configuration-reference_fs
8.2. Configuring NFS Client
8.2. Configuring NFS Client The mount command mounts NFS shares on the client side. Its format is as follows: This command uses the following variables: options A comma-delimited list of mount options; for more information on valid NFS mount options, see Section 8.4, "Common NFS Mount Options" . server The hostname, IP address, or fully qualified domain name of the server exporting the file system you wish to mount /remote/export The file system or directory being exported from the server , that is, the directory you wish to mount /local/directory The client location where /remote/export is mounted The NFS protocol version used in Red Hat Enterprise Linux 7 is identified by the mount options nfsvers or vers . By default, mount uses NFSv4 with mount -t nfs . If the server does not support NFSv4, the client automatically steps down to a version supported by the server. If the nfsvers / vers option is used to pass a particular version not supported by the server, the mount fails. The file system type nfs4 is also available for legacy reasons; this is equivalent to running mount -t nfs -o nfsvers=4 host : /remote/export /local/directory . For more information, see man mount . If an NFS share was mounted manually, the share will not be automatically mounted upon reboot. Red Hat Enterprise Linux offers two methods for mounting remote file systems automatically at boot time: the /etc/fstab file and the autofs service. For more information, see Section 8.2.1, "Mounting NFS File Systems Using /etc/fstab " and Section 8.3, " autofs " . 8.2.1. Mounting NFS File Systems Using /etc/fstab An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory on the local machine where the NFS share is to be mounted. You must be root to modify the /etc/fstab file. Example 8.1. Syntax Example The general syntax for the line in /etc/fstab is as follows: The mount point /pub must exist on the client machine before this command can be executed. After adding this line to /etc/fstab on the client system, use the command mount /pub , and the mount point /pub is mounted from the server. A valid /etc/fstab entry to mount an NFS export should contain the following information: The variables server , /remote/export , /local/directory , and options are the same ones used when manually mounting an NFS share. For more information, see Section 8.2, "Configuring NFS Client" . Note The mount point /local/directory must exist on the client before /etc/fstab is read. Otherwise, the mount fails. After editing /etc/fstab , regenerate mount units so that your system registers the new configuration: Additional Resources For more information about /etc/fstab , refer to man fstab .
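To make the syntax described above concrete, the following example mounts an export from a hypothetical server; the host name, export path, mount point, and options are placeholders for this sketch:

mount -t nfs -o nfsvers=4.1,rw server.example.com:/exports/pub /pub

The corresponding /etc/fstab entry would follow the same pattern shown in Example 8.1, with the chosen options in the fourth field.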
[ "mount -t nfs -o options server : /remote/export /local/directory", "server:/usr/local/pub /pub nfs defaults 0 0", "server : /remote/export /local/directory nfs options 0 0", "systemctl daemon-reload" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/nfs-clientconfig
Authorization APIs
Authorization APIs OpenShift Container Platform 4.13 Reference guide for authorization APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authorization_apis/index
Chapter 5. Running routes inside Red Hat Fuse Tooling
Chapter 5. Running routes inside Red Hat Fuse Tooling There are two ways to run your routes using the tooling: Section 5.1, "Running routes as a local Camel context" Section 5.2, "Running routes using Maven" 5.1. Running routes as a local Camel context Overview The simplest way to run an Apache Camel route is as a Local Camel Context . This method enables you to launch the route directly from the Project Explorer view's context menu. When you run a route from the context menu, the tooling automatically creates a runtime profile for you. You can also create a custom runtime profile for running your route. Your route runs as if it were invoked directly from the command line and uses Apache Camel's embedded Spring container. You can configure a number of the runtime parameters by editing the runtime profile. Procedure To run a route as a local Camel context: In the Project Explorer view, select a routing context file. Right-click it to open the context menu, and then select Run As Local Camel Context . Note Selecting Local Camel Context (without tests) directs the tooling to run the project without performing validation tests, which may be faster. Result The Console view displays the output generated from running the route. Related topics Section 5.3.1, "Editing a Local Camel Context runtime profile" 5.2. Running routes using Maven Overview If the project containing your route is a Maven project, you can use the m2e plug-in to run your route. Using this option, you can execute any Maven goals, before the route runs. Procedure To run a route using Maven: In the Project Explorer view, select the root of the project . Right-click it to open the context menu, and then select Run As Maven build . The first time you run the project using Maven, the Edit Configuration and launch editor opens, so you can create a Maven runtime profile. To create the runtime profile, on the Maven tab: Make sure the route directory of your Apache Camel project appears in the Base directory: field. For example, on Linux the root of your project is similar to ~/workspace/simple-router . In the Goals: field, enter camel:run . Important If you created your project using the Java DSL, enter exec:java in the Goals: field. Click Apply and then Run . Subsequent Maven runs use this profile, unless you modify it between runs. Results The Console view displays the output from the Maven run. Related topics Section 5.3.2, "Editing a Maven runtime profile" 5.3. Working with runtime profiles Red Hat Fuse Tooling stores information about the runtime environments for each project in runtime profiles . The runtime profiles keep track of such information as which Maven goals to call, the Java runtime environment to use, any system variables that need to be set, and so on. A project can have more than one runtime profile. 5.3.1. Editing a Local Camel Context runtime profile Overview A Local Camel Context runtime profile configures how Apache Camel is invoked to execute a route. A Local Camel Context runtime profile stores the name of the context file in which your routes are defined, the name of the main to invoke, the command line options passed into the JVM, the JRE to use, the classpath to use, any environment variables that need to be set, and a few other pieces of information. The runtime configuration editor for a Local Camel Context runtime profile contains the following tabs: Camel Context File - specifies the name of the new configuration and the full path of the routing context file that contains your routes. 
JMX - specifies JMX connection details, including the JMX URI and the user name and password (optional) to use to access it. Main - specifies the fully qualified name of the project's base directory, a few options for locating the base directory, any goals required to execute before running the route, and the version of the Maven runtime to use. JRE - specifies the JRE and command line arguments to use when starting the JVM. Refresh - specifies how Maven refreshes the project's resource files after a run terminates. Environment - specifies any environment variables that need to be set. Common - specifies how the profile is stored and the output displayed. The first time an Apache Camel route is run as a Local Camel Context , Red Hat Fuse Tooling creates for the routing context file a default runtime profile, which should not require editing. Accessing the Local Camel Context's runtime configuration editor In the Project Explorer view, select the Camel context file for which you want to edit or create a custom runtime profile. Right-click it to open the context menu, and then select Run As Run Configurations to open the Run Configurations dialog. In the context selection pane, select Local Camel Context , and then click at the top, left of the context selection pane. In the Name field, enter a new name for your runtime profile. Figure 5.1. Runtime configuration editor for Local Camel Context Setting the camel context file The Camel Context File tab has one field, Select Camel Context file... . Enter the full path to the routing context file that contains your route definitions. The Browse button accesses the Open Resource dialog, which facilitates locating the target routing context file. This dialog is preconfigured to search for files that contain Apache Camel routes. Changing the command line options By default the only command line option passed to the JVM is: If you are using a custom main class you may need to pass in different options. To do so, on the Main tab, click the Add button to enter a parameter's name and value. You can click the Add Parameter dialog's Variables... button to display a list of variables that you can select. To add or modify JVM-specific arguments, edit the VM arguments field on the JRE tab. Changing where output is sent By default, the output generated from running the route is sent to the Console view. But you can redirect it to a file instead. To redirect output to a file: Select the Common tab. In the Standard Input and Output pane, click the checkbox to the Output File: field, and then enter the path to the file where you want to send the output. The Workspace , File System , and Variables buttons facilitate building the path to the output file. Related topics Section 5.1, "Running routes as a local Camel context" 5.3.2. Editing a Maven runtime profile Overview A Maven runtime profile configures how Maven invokes Apache Camel. A Maven runtime profile stores the Maven goals to execute, any Maven profiles to use, the version of Maven to use, the JRE to use, the classpath to use, any environment variables that need to be set, and a few other pieces of information. Important The first time an Apache Camel route is run using Maven, you must create a default runtime profile for it. 
The runtime configuration editor for a Fuse runtime profile contains the following tabs: Main - specifies the name of the new configuration, the fully qualified name of the project's base directory, a few options for locating the base directory, any goals required to execute before running the route, and the version of the Maven runtime to use. JRE - specifies the JRE and command line arguments to use when starting the JVM. Refresh - specifies how Maven refreshes the project's resource files after a run terminates. Source - specifies the location of any additional sources that the project requires. Environment - specifies any environment variables that need to be set. Common - specifies how the profile is stored and the output displayed. Accessing the Maven runtime configuration editor In the Project Explorer view, select the root of the project for which you want to edit or create a custom runtime profile. Right-click it to open the context menu, and then select Run As Run Configurations to open the Run Configurations dialog. In the context selection pane, select Maven Build , and then click at the top, left of the context selection pane. Figure 5.2. Runtime configuration editor for Maven Changing the Maven goal The most commonly used goal when running a route is camel:run . It loads the routes into a Spring container running in its own JVM. The Apache Camel plug-in also supports a camel:embedded goal that loads the Spring container into the same JVM used by Maven. The advantage of this is that the routes should bootstrap faster. Projects based on Java DSL use the exec:java goal. If your POM contains other goals, you can change the Maven goal used by clicking the Configure... button to the Maven Runtime field on the Main tab. On the Installations dialog, you edit the Global settings for <selected_runtime> installation field. Changing the version of Maven By default, Red Hat Fuse Tooling for Eclipse uses m2e, which is embedded in Eclipse. If you want to use a different version of Maven or have a newer version installed on your development machine, you can select it from the Maven Runtime drop-down menu on the Main tab. Changing where the output is sent By default, the output from the route execution is sent to the Console view. But you can redirect it to a file instead. To redirect output to a file: Select the Common tab. Click the checkbox to the Output File: field, and then enter the path to the file where you want to send the output. The Workspace , File System , and Variables buttons facilitate building the path to the output file. Related topics Section 5.2, "Running routes using Maven"
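The runtime profiles described in this chapter wrap ordinary Maven invocations, so the same goals can also be run from a terminal when you want to check a route outside of Eclipse. The following commands are a minimal sketch; the project path ~/workspace/simple-router is a placeholder for your own project directory:

cd ~/workspace/simple-router

# Spring or Blueprint based projects: load the routes into an embedded container
mvn camel:run

# Java DSL based projects use the exec plug-in instead
mvn exec:java

Stopping the JVM, for example with Ctrl+C, terminates the route in the same way as terminating the launch from the Console view.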
[ "-fa context-file" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/RiderRunning
Chapter 118. KafkaUserAuthorizationSimple schema reference
Chapter 118. KafkaUserAuthorizationSimple schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserAuthorizationSimple type from other subtypes which may be added in the future. It must have the value simple for the type KafkaUserAuthorizationSimple . Property Property type Description type string Must be simple . acls AclRule array List of ACL rules which should be applied to this user.
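As a minimal sketch of how these properties are typically used, the following KafkaUser resource enables simple authorization with a single ACL rule; the user, cluster, and topic names are placeholders, and the exact fields available for each rule are described in the AclRule schema reference:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Read
          - Describe
        host: "*"

The type: simple value selects this KafkaUserAuthorizationSimple subtype, and each entry in acls is an AclRule applied to the user.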
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaUserAuthorizationSimple-reference
Chapter 46. Managing host groups using Ansible playbooks
Chapter 46. Managing host groups using Ansible playbooks To learn more about host groups in Identity Management (IdM) and using Ansible to perform operations involving host groups in Identity Management (IdM), see the following: Host groups in IdM Ensuring the presence of IdM host groups Ensuring the presence of hosts in IdM host groups Nesting IdM host groups Ensuring the presence of member managers in IdM host groups Ensuring the absence of hosts from IdM host groups Ensuring the absence of nested host groups from IdM host groups Ensuring the absence of member managers from IdM host groups 46.1. Host groups in IdM IdM host groups can be used to centralize control over important management tasks, particularly access control. Definition of host groups A host group is an entity that contains a set of IdM hosts with common access control rules and other characteristics. For example, you can define host groups based on company departments, physical locations, or access control requirements. A host group in IdM can include: IdM servers and clients Other IdM host groups Host groups created by default By default, the IdM server creates the host group ipaservers for all IdM server hosts. Direct and indirect group members Group attributes in IdM apply to both direct and indirect members: when host group B is a member of host group A, all members of host group B are considered indirect members of host group A. 46.2. Ensuring the presence of IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are created in IdM using the ipa hostgroup-add command. The result of adding a host group to IdM is the state of the host group being present in IdM. Because of the Ansible reliance on idempotence, to add a host group to IdM using Ansible, you must create a playbook in which you define the state of the host group as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. For example, to ensure the presence of a host group named databases , specify name: databases in the - ipahostgroup task. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-present.yml file. In the playbook, state: present signifies a request to add the host group to IdM unless it already exists there. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose presence in IdM you wanted to ensure: The databases host group exists in IdM. 46.3. 
Ensuring the presence of hosts in IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of hosts in host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file have been added to IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host with the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This playbook adds the db.idm.example.com host to the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself. Instead, only an attempt is made to add db.idm.example.com to databases . Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about a host group to see which hosts are present in it: The db.idm.example.com host is present as a member of the databases host group. 46.4. Nesting IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of nested host groups in Identity Management (IdM) host groups using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. 
To ensure that a nested host group A exists in a host group B : in the Ansible playbook, specify, among the - ipahostgroup variables, the name of the host group B using the name variable. Specify the name of the nested hostgroup A with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This Ansible playbook ensures the presence of the mysql-server and oracle-server host groups in the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself to IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group in which nested host groups are present: The mysql-server and oracle-server host groups exist in the databases host group. 46.5. Ensuring the presence of member managers in IdM host groups using Ansible playbooks The following procedure describes ensuring the presence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the host or host group you are adding as member managers and the name of the host group you want them to manage. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify if the group_name group contains example_member and project_admins as member managers by using the ipa group-show command: Log into ipaserver as administrator: Display information about testhostgroup : Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system. 46.6. Ensuring the absence of hosts from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of hosts from host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks .
Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host and host group information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host whose absence from the host group you want to ensure using the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook ensures the absence of the db.idm.example.com host from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to remove the databases group itself. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group and the hosts it contains: The db.idm.example.com host does not exist in the databases host group. 46.7. Ensuring the absence of nested host groups from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of nested host groups from outer host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. Specify, among the - ipahostgroup variables, the name of the outer host group using the name variable. Specify the name of the nested hostgroup with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook makes sure that the mysql-server and oracle-server host groups are absent from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to ensure the databases group itself is deleted from IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group from which nested host groups should be absent: The output confirms that the mysql-server and oracle-server nested host groups are absent from the outer databases host group. 46.8. Ensuring the absence of IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are removed from IdM using the ipa hostgroup-del command. 
The result of removing a host group from IdM is the state of the host group being absent from IdM. Because of the Ansible reliance on idempotence, to remove a host group from IdM using Ansible, you must create a playbook in which you define the state of the host group as absent: state: absent . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-absent.yml file. This playbook ensures the absence of the databases host group from IdM. The state: absent means a request to delete the host group from IdM unless it is already deleted. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose absence you ensured: The databases host group does not exist in IdM. 46.9. Ensuring the absence of member managers from IdM host groups using Ansible playbooks The following procedure describes ensuring the absence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or user group you are removing as member managers and the name of the host group they are managing. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify if the group_name group does not contain example_member or project_admins as member managers by using the ipa group-show command: Log into ipaserver as administrator: Display information about testhostgroup : Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system.
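All of the playbooks referenced in this chapter are run the same way. As an end-to-end sketch, assuming the inventory file, vault, and playbook are stored under the ~/MyPlaybooks/ directory described in the prerequisites, you can validate a playbook before applying it:

# Check the playbook syntax without contacting the IdM server
ansible-playbook --syntax-check -i ~/MyPlaybooks/inventory.file ~/MyPlaybooks/ensure-hostgroup-is-present.yml

# Dry run: report what would change without applying it (requires check mode support in the module)
ansible-playbook --check --vault-password-file=password_file -i ~/MyPlaybooks/inventory.file ~/MyPlaybooks/ensure-hostgroup-is-present.yml

# Apply the change
ansible-playbook --vault-password-file=password_file -i ~/MyPlaybooks/inventory.file ~/MyPlaybooks/ensure-hostgroup-is-present.yml

Because the ipahostgroup module is idempotent, rerunning the final command reports no changes once the host group is already in the requested state.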
[ "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: present", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-present.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are present in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com Member host-groups: mysql-server, oracle-server", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user example_member is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member - name: Ensure member manager group project_admins is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_group: project_admins", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-host-groups.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2 Membership managed by groups: project_admins Membership managed by users: example_member", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is absent - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i 
path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases Member host-groups: mysql-server, oracle-server", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are absent in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - Ensure host-group databases is absent ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-absent.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases ipa: ERROR: databases: host group not found", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager host and host group members are absent for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member membermanager_group: project_admins action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-host-groups-are-absent.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-host-groups-using-ansible-playbooks_managing-users-groups-hosts
Chapter 5. Kernel parameters
Chapter 5. Kernel parameters You can modify the kernel behaviour with kernel parameters. Parameter Description BridgeNfCallArpTables Configures sysctl net.bridge.bridge-nf-call-arptables key. The default value is 1 . BridgeNfCallIp6Tables Configures sysctl net.bridge.bridge-nf-call-ip6tables key. The default value is 1 . BridgeNfCallIpTables Configures sysctl net.bridge.bridge-nf-call-iptables key. The default value is 1 . ExtraKernelModules Hash of extra kernel modules to load. ExtraKernelPackages List of extra kernel related packages to install. ExtraSysctlSettings Hash of extra sysctl settings to apply. FsAioMaxNumber The kernel allocates aio memory on demand, and this number limits the number of parallel aio requests; the only drawback of a larger limit is that a malicious guest could issue parallel requests to cause the kernel to set aside memory. Set this number at least as large as 128 * (number of virtual disks on the host) Libvirt uses a default of 1M requests to allow 8k disks, with at most 64M of kernel memory if all disks hit an aio request at the same time. The default value is 0 . InotifyIntancesMax Configures sysctl fs.inotify.max_user_instances key. The default value is 1024 . KernelDisableIPv6 Configures sysctl net.ipv6.{default/all}.disable_ipv6 keys. The default value is 0 . KernelIpForward Configures net.ipv4.ip_forward key. The default value is 1 . KernelIpNonLocalBind Configures net.ipv{4,6}.ip_nonlocal_bind key. The default value is 1 . KernelPidMax Configures sysctl kernel.pid_max key. The default value is 1048576 . NeighbourGcThreshold1 Configures sysctl net.ipv4.neigh.default.gc_thresh1 value. This is the minimum number of entries to keep in the ARP cache. The garbage collector will not run if there are fewer than this number of entries in the cache. The default value is 1024 . NeighbourGcThreshold2 Configures sysctl net.ipv4.neigh.default.gc_thresh2 value. This is the soft maximum number of entries to keep in the ARP cache. The garbage collector will allow the number of entries to exceed this for 5 seconds before collection will be performed. The default value is 2048 . NeighbourGcThreshold3 Configures sysctl net.ipv4.neigh.default.gc_thresh3 value. This is the hard maximum number of entries to keep in the ARP cache. The garbage collector will always run if there are more than this number of entries in the cache. The default value is 4096 .
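These parameters are normally set in a custom environment file that you include when deploying the overcloud. The following sketch assumes a file named /home/stack/templates/kernel-tuning.yaml; the values are illustrative only, not recommendations, and you should check the tripleo-heat-templates in your release for the exact hash format expected by ExtraSysctlSettings and ExtraKernelModules:

parameter_defaults:
  KernelPidMax: 1048576
  FsAioMaxNumber: 1048576
  ExtraSysctlSettings:
    net.core.somaxconn:
      value: 4096
  ExtraKernelModules:
    nf_conntrack_proto_sctp: {}

Include the file in the deployment command, for example:

openstack overcloud deploy --templates -e /home/stack/templates/kernel-tuning.yaml [other options]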
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/overcloud_parameters/ref_kernel-parameters_overcloud_parameters
Chapter 7. Using image streams with Kubernetes resources
Chapter 7. Using image streams with Kubernetes resources Image streams, being OpenShift Container Platform native resources, work out of the box with all the rest of native resources available in OpenShift Container Platform, such as builds or deployments. It is also possible to make them work with native Kubernetes resources, such as jobs, replication controllers, replica sets or Kubernetes deployments. 7.1. Enabling image streams with Kubernetes resources When using image streams with Kubernetes resources, you can only reference image streams that reside in the same project as the resource. The image stream reference must consist of a single segment value, for example ruby:2.5 , where ruby is the name of an image stream that has a tag named 2.5 and resides in the same project as the resource making the reference. Note This feature can not be used in the default namespace, nor in any openshift- or kube- namespace. There are two ways to enable image streams with Kubernetes resources: Enabling image stream resolution on a specific resource. This allows only this resource to use the image stream name in the image field. Enabling image stream resolution on an image stream. This allows all resources pointing to this image stream to use it in the image field. Procedure You can use oc set image-lookup to enable image stream resolution on a specific resource or image stream resolution on an image stream. To allow all resources to reference the image stream named mysql , enter the following command: USD oc set image-lookup mysql This sets the Imagestream.spec.lookupPolicy.local field to true. Imagestream with image lookup enabled apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true When enabled, the behavior is enabled for all tags within the image stream. Then you can query the image streams and see if the option is set: USD oc set image-lookup imagestream --list You can enable image lookup on a specific resource. To allow the Kubernetes deployment named mysql to use image streams, run the following command: USD oc set image-lookup deploy/mysql This sets the alpha.image.policy.openshift.io/resolve-names annotation on the deployment. Deployment with image lookup enabled apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql You can disable image lookup. To disable image lookup, pass --enabled=false : USD oc set image-lookup deploy/mysql --enabled=false
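The same lookup behavior applies to the other Kubernetes resource types mentioned at the start of this chapter. As a sketch, once image lookup is enabled on the mysql image stream, a Kubernetes Job in the same project can reference it by its single-segment name; the Job name and command below are placeholders:

apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-version-check
  namespace: myproject
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: mysql
        image: mysql:latest
        command: ["mysql", "--version"]

Because lookupPolicy.local is set to true on the image stream, the mysql:latest reference resolves to the image stream tag in myproject instead of being pulled from an external registry.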
[ "oc set image-lookup mysql", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true", "oc set image-lookup imagestream --list", "oc set image-lookup deploy/mysql", "apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql", "oc set image-lookup deploy/mysql --enabled=false" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/images/using-imagestreams-with-kube-resources
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_load_balancing_as_a_service/proc_providing-feedback-on-red-hat-documentation
4.327. tomcat6
4.327. tomcat6 4.327.1. RHSA-2012:0475 - Moderate: tomcat6 security update Updated tomcat6 packages that fix two security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Apache Tomcat is a servlet container for the Java Servlet and JavaServer Pages (JSP) technologies. Security Fixes CVE-2011-4858 It was found that the Java hashCode() method implementation was susceptible to predictable hash collisions. A remote attacker could use this flaw to cause Tomcat to use an excessive amount of CPU time by sending an HTTP request with a large number of parameters whose names map to the same hash value. This update introduces a limit on the number of parameters processed per request to mitigate this issue. The default limit is 512 for parameters and 128 for headers. These defaults can be changed by setting the org.apache.tomcat.util.http.Parameters.MAX_COUNT and org.apache.tomcat.util.http.MimeHeaders.MAX_COUNT system properties. CVE-2012-0022 It was found that Tomcat did not handle large numbers of parameters and large parameter values efficiently. A remote attacker could make Tomcat use an excessive amount of CPU time by sending an HTTP request containing a large number of parameters or large parameter values. This update introduces limits on the number of parameters and headers processed per request to address this issue. Refer to the CVE-2011-4858 description for information about the org.apache.tomcat.util.http.Parameters.MAX_COUNT and org.apache.tomcat.util.http.MimeHeaders.MAX_COUNT system properties. Red Hat would like to thank oCERT for reporting CVE-2011-4858. oCERT acknowledges Julian Walde and Alexander Klink as the original reporters of CVE-2011-4858. Users of Tomcat should upgrade to these updated packages, which correct these issues. Tomcat must be restarted for this update to take effect.
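If the default limits are too low for an application that legitimately sends a large number of parameters or headers, the system properties described above can be raised through the JVM options passed to Tomcat. The following is a sketch only; it assumes the /etc/tomcat6/tomcat6.conf file read by the tomcat6 init script on Red Hat Enterprise Linux 6, and the values shown are examples, not recommendations:

# /etc/tomcat6/tomcat6.conf
JAVA_OPTS="${JAVA_OPTS} -Dorg.apache.tomcat.util.http.Parameters.MAX_COUNT=2048 -Dorg.apache.tomcat.util.http.MimeHeaders.MAX_COUNT=256"

Restart Tomcat afterwards, for example with service tomcat6 restart, so that the new limits take effect.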
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/tomcat6
Chapter 1. 3scale API management
Chapter 1. 3scale API management If you deployed the Kafka Bridge on OpenShift Container Platform, you can use it with 3scale. With a plain deployment of the Kafka Bridge, there is no provision for authentication or authorization, and no support for a TLS encrypted connection to external clients. 3scale API Management can secure the Kafka Bridge with TLS, and provide authentication and authorization. Integration with 3scale also means that additional features like metrics, rate limiting, and billing are available. With 3scale, you can use different types of authentication for requests from external clients wishing to access Streams for Apache Kafka. 3scale supports the following types of authentication: Standard API Keys Single randomized strings or hashes acting as an identifier and a secret token. Application Identifier and Key pairs Immutable identifier and mutable secret key strings. OpenID Connect Protocol for delegated authentication. 1.1. Kafka Bridge service discovery 3scale is integrated using service discovery, which requires that 3scale is deployed to the same OpenShift cluster as Streams for Apache Kafka and the Kafka Bridge. Your Streams for Apache Kafka Cluster Operator deployment must have the following environment variables set: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_LABELS STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_ANNOTATIONS When the Kafka Bridge is deployed, the service that exposes the REST interface of the Kafka Bridge uses the annotations and labels for discovery by 3scale. A discovery.3scale.net=true label is used by 3scale to find a service. Annotations provide information about the service. You can check your configuration in the OpenShift console by navigating to Services for the Kafka Bridge instance. Under Annotations you will see the endpoint to the OpenAPI specification for the Kafka Bridge. 1.2. 3scale APIcast gateway policies 3scale is used in conjunction with 3scale APIcast, an API gateway deployed with 3scale that provides a single point of entry for the Kafka Bridge. APIcast policies provide a mechanism to customize how the gateway operates. 3scale provides a set of standard policies for gateway configuration. You can also create your own policies. For more information on APIcast policies, see the Red Hat 3scale documentation . APIcast policies for the Kafka Bridge A sample policy configuration for 3scale integration with the Kafka Bridge is provided with the policies_config.json file, which defines: Anonymous access Header modification Routing URL rewriting Gateway policies are enabled or disabled through this file. You can use this sample as a starting point for defining your own policies. Anonymous access The anonymous access policy exposes a service without authentication, providing default credentials (for anonymous access) when a HTTP client does not provide them. The policy is not mandatory and can be disabled or removed if authentication is always needed. Header modification The header modification policy allows existing HTTP headers to be modified, or new headers added to requests or responses passing through the gateway. For 3scale integration, the policy adds headers to every request passing through the gateway from a HTTP client to the Kafka Bridge. When the Kafka Bridge receives a request for creating a new consumer, it returns a JSON payload containing a base_uri field with the URI that the consumer must use for all the subsequent requests. 
For example: { "instance_id": "consumer-1", "base_uri":"http://my-bridge:8080/consumers/my-group/instances/consumer1" } When using APIcast, clients send all subsequent requests to the gateway and not to the Kafka Bridge directly. So the URI requires the gateway hostname, not the address of the Kafka Bridge behind the gateway. Using header modification policies, headers are added to requests from the HTTP client so that the Kafka Bridge uses the gateway hostname. For example, by applying a Forwarded: host=my-gateway:80;proto=http header, the Kafka Bridge delivers the following to the consumer. { "instance_id": "consumer-1", "base_uri":"http://my-gateway:80/consumers/my-group/instances/consumer1" } An X-Forwarded-Path header carries the original path contained in a request from the client to the gateway. This header is strictly related to the routing policy applied when a gateway supports more than one Kafka Bridge instance. Routing A routing policy is applied when there is more than one Kafka Bridge instance. Requests must be sent to the same Kafka Bridge instance where the consumer was initially created, so a request must specify a route for the gateway to forward a request to the appropriate Kafka Bridge instance. A routing policy names each bridge instance, and routing is performed using the name. You specify the name in the KafkaBridge custom resource when you deploy the Kafka Bridge. For example, each request (using X-Forwarded-Path ) from a consumer to: http://my-gateway:80/my-bridge-1/consumers/my-group/instances/consumer1 is forwarded to: http://my-bridge-1-bridge-service:8080/consumers/my-group/instances/consumer1 The URL rewriting policy removes the bridge name, as it is not used when forwarding the request from the gateway to the Kafka Bridge. URL rewriting The URL rewriting policy ensures that a request to a specific Kafka Bridge instance from a client does not contain the bridge name when the gateway forwards the request to the Kafka Bridge. The bridge name is not used in the endpoints exposed by the bridge. 1.3. 3scale APIcast for TLS validation You can set up APIcast for TLS validation, which requires a self-managed deployment of APIcast using a template. The apicast service is exposed as a route. You can also apply a TLS policy to the Kafka Bridge API. 1.4. Using an existing 3scale deployment If you already have 3scale deployed to OpenShift and you wish to use it with the Kafka Bridge, ensure that you have the setup described in Deploying 3scale for the Kafka Bridge .
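To see these policies working together, consider a client creating a consumer through the gateway rather than against the Kafka Bridge directly. The following curl call is a sketch: it assumes the routing policy exposes a bridge instance under the my-bridge-1 path prefix and that the service uses standard API key authentication with a user_key parameter, which depends on how the 3scale application plan is configured:

curl -X POST "http://my-gateway:80/my-bridge-1/consumers/my-group?user_key=<api_key>" \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"name": "consumer1", "format": "json"}'

With the header modification, routing, and URL rewriting policies in place, the gateway strips the my-bridge-1 prefix before forwarding the request to the matching Kafka Bridge instance, and the base_uri in the JSON response points back at the gateway so that subsequent requests also pass through it.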
[ "{ \"instance_id\": \"consumer-1\", \"base_uri\":\"http://my-bridge:8080/consumers/my-group/instances/consumer1\" }", "{ \"instance_id\": \"consumer-1\", \"base_uri\":\"http://my-gateway:80/consumers/my-group/instances/consumer1\" }" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_3scale_api_management_with_the_streams_for_apache_kafka_bridge/con-kafka-bridge-authentication-3scale
Chapter 107. ConnectorPlugin schema reference
Chapter 107. ConnectorPlugin schema reference Used in: KafkaConnectStatus , KafkaMirrorMaker2Status Property Property type Description class string The class of the connector plugin. type string The type of the connector plugin. The available types are sink and source . version string The version of the connector plugin.
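In practice, these properties appear as a list in the status of a deployed KafkaConnect or KafkaMirrorMaker2 resource. The following is a sketch of what that status fragment might look like; the plugin classes and versions depend on the plugins available in your Connect image:

oc get kafkaconnect my-connect-cluster -o yaml

status:
  connectorPlugins:
  - class: org.apache.kafka.connect.file.FileStreamSinkConnector
    type: sink
    version: 3.9.0
  - class: org.apache.kafka.connect.file.FileStreamSourceConnector
    type: source
    version: 3.9.0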
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-ConnectorPlugin-reference
Chapter 1. Running Builds
Chapter 1. Running Builds After installing Builds, you can create a buildah or source-to-image build for use. You can also delete custom resources that are not required for a build. 1.1. Creating a buildah build You can create a buildah build and push the created image to the target registry. Prerequisites You have installed the Builds for Red Hat OpenShift Operator on the OpenShift Container Platform cluster. You have created a ShipwrightBuild resource. You have installed the oc CLI. Optional: You have installed the shp CLI . Procedure Create a Build resource and apply it to the OpenShift Container Platform cluster by using one of the CLIs: Example: Using oc CLI USD oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: 1 type: Git git: url: https://github.com/shipwright-io/sample-go contextDir: docker-build strategy: 2 name: buildah kind: ClusterBuildStrategy paramValues: 3 - name: dockerfile value: Dockerfile output: 4 image: image-registry.openshift-image-registry.svc:5000/buildah-example/sample-go-app EOF 1 The location where the source code is placed. 2 The build strategy that you use to build the container. 3 The parameter defined in the build strategy. To set the value of the dockerfile strategy parameter, specify the Dockerfile location required to build the output image. 4 The location where the built image is pushed. In this procedural example, the built image is pushed to the OpenShift Container Platform cluster internal registry. buildah-example is the name of the current project. Ensure that the specified project exists to allow the image push. Example: Using shp CLI USD shp build create buildah-golang-build \ --source-url="https://github.com/redhat-openshift-builds/samples" --source-context-dir="buildah-build" \ 1 --strategy-name="buildah" \ 2 --dockerfile="Dockerfile" \ 3 --output-image="image-registry.openshift-image-registry.svc:5000/buildah-example/go-app" 4 1 The location where the source code is placed. 2 The build strategy that you use to build the container. 3 The parameter defined in the build strategy. To set the value of the dockerfile strategy parameter, specify the Dockerfile location required to build the output image. 4 The location where the built image is pushed. In this procedural example, the built image is pushed to the OpenShift Container Platform cluster internal registry. buildah-example is the name of the current project. Ensure that the specified project exists to allow the image push. Check if the Build resource is created by using one of the CLIs: Example: Using oc CLI USD oc get builds.shipwright.io buildah-golang-build Example: Using shp CLI USD shp build list Create a BuildRun resource and apply it to the OpenShift Container Platform cluster by using one of the CLIs: Example: Using oc CLI USD oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-golang-buildrun spec: build: name: buildah-golang-build 1 EOF 1 The spec.build.name field denotes the respective build to run, which is expected to be available in the same namespace. Example: Using shp CLI USD shp build run buildah-golang-build --follow 1 1 Optional: By using the --follow flag, you can view the build logs in the output result. 
Check if the BuildRun resource is created by running one of the following commands: Example: Using oc CLI USD oc get buildrun buildah-golang-buildrun Example: Using shp CLI USD shp buildrun list The BuildRun resource creates a TaskRun resource, which then creates the pods to execute build strategy steps. Verification After all the containers complete their tasks, verify the following: Check whether the pod shows the STATUS field as Completed : USD oc get pods -w Example output NAME READY STATUS RESTARTS AGE buildah-golang-buildrun-dtrg2-pod 2/2 Running 0 4s buildah-golang-buildrun-dtrg2-pod 1/2 NotReady 0 7s buildah-golang-buildrun-dtrg2-pod 0/2 Completed 0 55s Check whether the respective TaskRun resource shows the SUCCEEDED field as True : USD oc get tr Example output NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-golang-buildrun-dtrg2 True Succeeded 11m 8m51s Check whether the respective BuildRun resource shows the SUCCEEDED field as True : USD oc get br Example output NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-golang-buildrun True Succeeded 13m 11m During verification, if a build run fails, you can check the status.failureDetails field in your BuildRun resource to identify the exact point where the failure happened in the pod or container. Note The pod might switch to a NotReady state because one of the containers has completed its task. This is an expected behavior. Validate whether the image has been pushed to the registry that is specified in the build.spec.output.image field. You can try to pull the image by running the following command from a node that can access the internal registry: USD podman pull image-registry.openshift-image-registry.svc:5000/<project>/<image> 1 1 The project name and image name used when creating the Build resource. For example, you can use buildah-example as the project name and sample-go-app as the image name. 1.2. Creating a source-to-image build You can create a source-to-image build and push the created image to a custom Quay repository. Prerequisites You have installed the Builds for Red Hat OpenShift Operator on the OpenShift Container Platform cluster. You have created a ShipwrightBuild resource. You have installed the oc CLI. Optional: You have installed the shp CLI . Procedure Create a Build resource and apply it to the OpenShift Container Platform cluster by using one of the CLIs: Example: Using oc CLI USD oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: s2i-nodejs-build spec: source: 1 type: Git type: Git git: url: https://github.com/redhat-openshift-builds/samples contextDir: s2i-build/nodejs strategy: 2 name: source-to-image kind: ClusterBuildStrategy paramValues: 3 - name: builder-image value: quay.io/centos7/nodejs-12-centos7:master output: image: quay.io/<repo>/s2i-nodejs-example 4 pushSecret: registry-credential 5 EOF 1 The location where the source code is placed. 2 The build strategy that you use to build the container. 3 The parameter defined in the build strategy. To set the value of the builder-image strategy parameter, specify the builder image location required to build the output image. 4 The location where the built image is pushed. You can push the built image to a custom Quay.io repository. Replace repo with a valid Quay.io organization or your Quay user name. 5 The secret name that stores the credentials for pushing container images. To generate a secret of the type docker-registry for authentication, see "Authentication to container registries". 
Example: Using shp CLI USD shp build create s2i-nodejs-build \ --source-url="https://github.com/redhat-openshift-builds/samples" --source-context-dir="s2i-build/nodejs" \ 1 --strategy-name="source-to-image" \ 2 --builder-image="quay.io/centos7/nodejs-12-centos7" \ 3 --output-image="quay.io/<repo>/s2i-nodejs-example" \ 4 --output-credentials-secret="registry-credential" 5 1 The location where the source code is placed. 2 The build strategy that you use to build the container. 3 The parameter defined in the build strategy. To set the value of the builder-image strategy parameter, specify the builder image location required to build the output image. 4 The location where the built image is pushed. You can push the built image to a custom Quay.io repository. Replace repo with a valid Quay.io organization or your Quay user name. 5 The secret name that stores the credentials for pushing container images. To generate a secret of the type docker-registry for authentication, see "Authentication to container registries". Check if the Build resource is created by using one of the CLIs: Example: Using oc CLI USD oc get builds.shipwright.io s2i-nodejs-build Example: Using shp CLI USD shp build list Create a BuildRun resource and apply it to the OpenShift Container Platform cluster by using one of the CLIs: Example: Using oc CLI USD oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: s2i-nodejs-buildrun spec: build: name: s2i-nodejs-build 1 EOF 1 The spec.build.name field denotes the respective build to run, which is expected to be available in the same namespace. Example: Using shp CLI USD shp build run s2i-nodejs-build --follow 1 1 Optional: By using the --follow flag, you can view the build logs in the output result. Check if the BuildRun resource is created by running one of the following commands: Example: Using oc CLI USD oc get buildrun s2i-nodejs-buildrun Example: Using shp CLI USD shp buildrun list The BuildRun resource creates a TaskRun resource, which then creates the pods to execute build strategy steps. Verification After all the containers complete their tasks, verify the following: Check whether the pod shows the STATUS field as Completed : USD oc get pods -w Example output NAME READY STATUS RESTARTS AGE s2i-nodejs-buildrun-phxxm-pod 2/2 Running 0 10s s2i-nodejs-buildrun-phxxm-pod 1/2 NotReady 0 14s s2i-nodejs-buildrun-phxxm-pod 0/2 Completed 0 2m Check whether the respective TaskRun resource shows the SUCCEEDED field as True : USD oc get tr Example output NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME s2i-nodejs-buildrun-phxxm True Succeeded 2m39s 13s Check whether the respective BuildRun resource shows the SUCCEEDED field as True : USD oc get br Example output NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME s2i-nodejs-buildrun True Succeeded 2m41s 15s During verification, if a build run fails, you can check the status.failureDetails field in your BuildRun resource to identify the exact point where the failure happened in the pod or container. Note The pod might switch to a NotReady state because one of the containers has completed its task. This is an expected behavior. Validate whether the image has been pushed to the registry that is specified in the build.spec.output.image field. You can try to pull the image by running the following command after logging in to the registry: USD podman pull quay.io/<repo>/<image> 1 1 The repository name and image name used when creating the Build resource. For example, you can use s2i-nodejs-example as the image name. 
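When a build run fails, the status.failureDetails field mentioned in the verification steps above can be inspected directly instead of reading the whole resource. For example, a sketch using JSONPath queries with the build run names from this chapter:

oc get buildrun buildah-golang-buildrun -o jsonpath='{.status.failureDetails}'

oc get buildrun s2i-nodejs-buildrun -o jsonpath='{.status.conditions[?(@.type=="Succeeded")].message}'

The first command prints the details of where the failure happened in the pod or container, and the second prints the message of the Succeeded condition, which usually summarizes the reason.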
Additional resources Authentication to container registries 1.3. Viewing logs You can view the logs of a build run to identify any runtime errors and to resolve them. Prerequisites You have installed the oc CLI. Optional: You have installed the shp CLI. Procedure View logs of a build run by using one of the CLIs: Using oc CLI USD oc logs <buildrun_resource_name> Using shp CLI USD shp buildrun logs <buildrun_resource_name> 1.4. Deleting a resource You can delete a Build , BuildRun , or BuildStrategy resource if it is not required in your project. Prerequisites You have installed the oc CLI. Optional: You have installed the shp CLI. Procedure Delete a Build resource by using one of the CLIs: Using oc CLI USD oc delete builds.shipwright.io <build_resource_name> Using shp CLI USD shp build delete <build_resource_name> Delete a BuildRun resource by using one of the CLIs: Using oc CLI USD oc delete buildrun <buildrun_resource_name> Using shp CLI USD shp buildrun delete <buildrun_resource_name> Delete a BuildStrategy resource by running the following command: Using oc CLI USD oc delete buildstrategies <buildstartegy_resource_name> 1.5. Additional resources Authentication to container registries Creating a ShipwrightBuild resource by using the web console
[ "oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: 1 type: Git git: url: https://github.com/shipwright-io/sample-go contextDir: docker-build strategy: 2 name: buildah kind: ClusterBuildStrategy paramValues: 3 - name: dockerfile value: Dockerfile output: 4 image: image-registry.openshift-image-registry.svc:5000/buildah-example/sample-go-app EOF", "shp build create buildah-golang-build --source-url=\"https://github.com/redhat-openshift-builds/samples\" --source-context-dir=\"buildah-build\" \\ 1 --strategy-name=\"buildah\" \\ 2 --dockerfile=\"Dockerfile\" \\ 3 --output-image=\"image-registry.openshift-image-registry.svc:5000/buildah-example/go-app\" 4", "oc get builds.shipwright.io buildah-golang-build", "shp build list", "oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-golang-buildrun spec: build: name: buildah-golang-build 1 EOF", "shp build run buildah-golang-build --follow 1", "oc get buildrun buildah-golang-buildrun", "shp buildrun list", "oc get pods -w", "NAME READY STATUS RESTARTS AGE buildah-golang-buildrun-dtrg2-pod 2/2 Running 0 4s buildah-golang-buildrun-dtrg2-pod 1/2 NotReady 0 7s buildah-golang-buildrun-dtrg2-pod 0/2 Completed 0 55s", "oc get tr", "NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-golang-buildrun-dtrg2 True Succeeded 11m 8m51s", "oc get br", "NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-golang-buildrun True Succeeded 13m 11m", "podman pull image-registry.openshift-image-registry.svc:5000/<project>/<image> 1", "oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: s2i-nodejs-build spec: source: 1 type: Git type: Git git: url: https://github.com/redhat-openshift-builds/samples contextDir: s2i-build/nodejs strategy: 2 name: source-to-image kind: ClusterBuildStrategy paramValues: 3 - name: builder-image value: quay.io/centos7/nodejs-12-centos7:master output: image: quay.io/<repo>/s2i-nodejs-example 4 pushSecret: registry-credential 5 EOF", "shp build create s2i-nodejs-build --source-url=\"https://github.com/redhat-openshift-builds/samples\" --source-context-dir=\"s2i-build/nodejs\" \\ 1 --strategy-name=\"source-to-image\" \\ 2 --builder-image=\"quay.io/centos7/nodejs-12-centos7\" \\ 3 --output-image=\"quay.io/<repo>/s2i-nodejs-example\" \\ 4 --output-credentials-secret=\"registry-credential\" 5", "oc get builds.shipwright.io s2i-nodejs-build", "shp build list", "oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: s2i-nodejs-buildrun spec: build: name: s2i-nodejs-build 1 EOF", "shp build run s2i-nodejs-build --follow 1", "oc get buildrun s2i-nodejs-buildrun", "shp buildrun list", "oc get pods -w", "NAME READY STATUS RESTARTS AGE s2i-nodejs-buildrun-phxxm-pod 2/2 Running 0 10s s2i-nodejs-buildrun-phxxm-pod 1/2 NotReady 0 14s s2i-nodejs-buildrun-phxxm-pod 0/2 Completed 0 2m", "oc get tr", "NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME s2i-nodejs-buildrun-phxxm True Succeeded 2m39s 13s", "oc get br", "NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME s2i-nodejs-buildrun True Succeeded 2m41s 15s", "podman pull quay.io/<repo>/<image> 1", "oc logs <buildrun_resource_name>", "shp buildrun logs <buildrun_resource_name>", "oc delete builds.shipwright.io <build_resource_name>", "shp build delete <build_resource_name>", "oc delete buildrun <buildrun_resource_name>", "shp buildrun delete <buildrun_resource_name>", "oc delete buildstrategies <buildstartegy_resource_name>" ]
https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1/html/work_with_builds/running-builds
Chapter 11. Managing container storage interface (CSI) component placements
Chapter 11. Managing container storage interface (CSI) component placements Each cluster consists of a number of dedicated nodes such as infra and storage nodes. However, an infra node with a custom taint will not be able to use OpenShift Data Foundation Persistent Volume Claims (PVCs) on the node. So, if you want to use such nodes, you can set tolerations to bring up csi-plugins on the nodes. For more information, see https://access.redhat.com/solutions/4827161 . Procedure Edit the configmap to add the toleration for the custom taint. Remember to save before exiting the editor. Display the configmap to check the added toleration. Example output of the added toleration for the taint, nodetype=infra:NoSchedule : Restart the rook-ceph-operator if the csi-cephfsplugin- * and csi-rbdplugin- * pods fail to come up on their own on the infra nodes. Example : Verification step Verify that the csi-cephfsplugin- * and csi-rbdplugin- * pods are running on the infra nodes.
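For the verification step, a command along the following lines can be used to see which nodes the CSI plugin pods landed on. It assumes the default openshift-storage namespace and the default pod naming used in this chapter; adjust it for your environment:

oc get pods -n openshift-storage -o wide | grep -E 'csi-cephfsplugin|csi-rbdplugin'

The NODE column of the output should now include the infra nodes that carry the custom taint.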
[ "oc edit configmap rook-ceph-operator-config -n openshift-storage", "oc get configmap rook-ceph-operator-config -n openshift-storage -o yaml", "apiVersion: v1 data: [...] CSI_PLUGIN_TOLERATIONS: | - key: nodetype operator: Equal value: infra effect: NoSchedule - key: node.ocs.openshift.io/storage operator: Equal value: \"true\" effect: NoSchedule [...] kind: ConfigMap metadata: [...]", "oc delete -n openshift-storage pod <name of the rook_ceph_operator pod>", "oc delete -n openshift-storage pod rook-ceph-operator-5446f9b95b-jrn2j pod \"rook-ceph-operator-5446f9b95b-jrn2j\" deleted" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/managing-container-storage-interface-component-placements_rhodf
12.2. Translators
12.2. Translators A translator acts as the bridge between JBoss Data Virtualization and an external system, which is most commonly accessed through a JCA resource adapter. Translators indicate what SQL constructs are supported and what import metadata can be read from particular datasources. A translator is typically paired with a particular JCA resource adapter. A JCA resource adapter is not needed in instances where features such as pooling, environment dependent configuration management, or advanced security handling are not needed. Note See Red Hat JBoss Data Virtualization Development Guide: Server Development for more information on developing custom translators and JCA resource adapters. See the Red Hat JBoss Data Virtualization Administration and Configuration Guide and the examples in EAP_HOME /docs/teiid/datasources for more information about configuring resource adapters.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/translators1
5.260. python-repoze-who
5.260. python-repoze-who 5.260.1. RHEA-2012:0982 - python-repoze-who enhancement update An updated python-repoze-who package that adds one enhancement is now available for Red Hat Enterprise Linux 6. The python-repoze-who package provides an identification and authentication framework for arbitrary Python WSGI applications and acts as WSGI middleware. The python-repoze-who package has been upgraded to upstream version 1.0.18, which adds support for enforcing timeouts for cookies issued by the auth_tkt plugin via the "timeout" and "reissue_time" config parameters. (BZ# 639075 ) All users of python-repoze-who are advised to upgrade to this updated package, which adds this enhancement.
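As an illustration only (this configuration is not part of the erratum text), the new cookie-expiry parameters are typically supplied to the auth_tkt plugin in a who.ini configuration file; the secret and the exact values below are assumptions:

[plugin:auth_tkt]
use = repoze.who.plugins.auth_tkt:make_plugin
secret = <your_secret>
timeout = 3600
reissue_time = 360

With a sketch like this, a cookie older than one hour is rejected, and a cookie older than six minutes is reissued on the next request.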
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/python-repoze-who
2.2. A Three-Tier keepalived Load Balancer Configuration
2.2. A Three-Tier keepalived Load Balancer Configuration Figure 2.2, "A Three-Tier Load Balancer Configuration" shows a typical three-tier Keepalived Load Balancer topology. In this example, the active LVS router routes the requests from the Internet to the pool of real servers. Each of the real servers then accesses a shared data source over the network. Figure 2.2. A Three-Tier Load Balancer Configuration This configuration is ideal for busy FTP servers, where accessible data is stored on a central, highly available server and accessed by each real server by means of an exported NFS directory or Samba share. This topology is also recommended for websites that access a central, highly available database for transactions. Additionally, using an active-active configuration with the Load Balancer, administrators can configure one high-availability cluster to serve both of these roles simultaneously. The third tier in the above example does not have to use the Load Balancer, but failing to use a highly available solution would introduce a critical single point of failure.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-lvs-cm-VSA
Chapter 347. Thrift DataFormat
Chapter 347. Thrift DataFormat Available as of Camel version 2.20 Camel provides a Data Format to serialize between Java and the Apache Thrift . The project's site, https://thrift.apache.org/ , details why you may wish to use this format. Apache Thrift is language-neutral and platform-neutral, so messages produced by your Camel routes may be consumed by other language implementations. Apache Thrift Implementation 347.1. Thrift Options The Thrift dataformat supports 3 options, which are listed below. Name Default Java Type Description instanceClass String Name of class to use when unmarshalling contentTypeFormat binary String Defines a content type format in which thrift message will be serialized/deserialized from(to) the Java bean. The format can either be native or json for either native binary thrift, json or simple json fields representation. The default value is binary. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 347.2. Spring Boot Auto-Configuration The component supports 7 options, which are listed below. Name Description Default Type camel.component.thrift.enabled Whether to enable auto configuration of the thrift component. This is enabled by default. Boolean camel.component.thrift.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.thrift.use-global-ssl-context-parameters Determine if the thrift component is using global SSL context parameters false Boolean camel.dataformat.thrift.content-type-format Defines a content type format in which thrift message will be serialized/deserialized from(to) the Java bean. The format can either be native or json for either native binary thrift, json or simple json fields representation. The default value is binary. binary String camel.dataformat.thrift.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.thrift.enabled Whether to enable auto configuration of the thrift data format. This is enabled by default. Boolean camel.dataformat.thrift.instance-class Name of class to use when unmarshalling String 347.3. Content type format It is possible to parse a JSON message, convert it to the Thrift format, and unparse it back using the native util converter. To use this option, set the contentTypeFormat value to 'json' or call thrift with a second parameter. If the default instance is not specified, the native binary Thrift format is always used. The simple JSON format is write-only (marshal) and produces a simple output format suitable for parsing by scripting languages. The sample code is shown below: from("direct:marshal") .unmarshal() .thrift("org.apache.camel.dataformat.thrift.generated.Work", "json") .to("mock:reverse"); 347.4. Thrift overview This is a quick overview of how to use Thrift. For more detail, see the complete tutorial 347.5. Defining the thrift format The first step is to define the format for the body of your exchange.
This is defined in a .thrift file as follows: tutorial.thrift namespace java org.apache.camel.dataformat.thrift.generated enum Operation { ADD = 1, SUBTRACT = 2, MULTIPLY = 3, DIVIDE = 4 } struct Work { 1: i32 num1 = 0, 2: i32 num2, 3: Operation op, 4: optional string comment, } 347.6. Generating Java classes Apache Thrift provides a compiler which will generate the Java classes for the format we defined in our .thrift file. You can also run the compiler manually for any additional supported languages you require. thrift -r --gen java -out ../java/ ./tutorial-dataformat.thrift This will generate a separate Java class for each type defined in the .thrift file, i.e. struct or enum. The generated classes implement org.apache.thrift.TBase which is required by the serialization mechanism. For this reason, it is important that only these classes are used in the body of your exchanges. Camel will throw an exception on route creation if you attempt to tell the Data Format to use a class that does not implement org.apache.thrift.TBase. 347.7. Java DSL You can create the ThriftDataFormat instance and pass it to the Camel DataFormat marshal and unmarshal API like this. ThriftDataFormat format = new ThriftDataFormat(new Work()); from("direct:in").marshal(format); from("direct:back").unmarshal(format).to("mock:reverse"); Or use the DSL thrift(), passing the unmarshal default instance or default instance class name, like this. // You don't need to specify the default instance for the thrift marshaling from("direct:marshal").marshal().thrift(); from("direct:unmarshalA").unmarshal() .thrift("org.apache.camel.dataformat.thrift.generated.Work") .to("mock:reverse"); from("direct:unmarshalB").unmarshal().thrift(new Work()).to("mock:reverse"); 347.8. Spring DSL The following example shows how to use Thrift to unmarshal using Spring, configuring the thrift data type: <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <unmarshal> <thrift instanceClass="org.apache.camel.dataformat.thrift.generated.Work" /> </unmarshal> <to uri="mock:result"/> </route> </camelContext> 347.9. Dependencies To use Thrift in your Camel routes, you need to add a dependency on camel-thrift which implements this data format. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-thrift</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>
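Where the Spring Boot auto-configuration options from the table above are used, they are typically set in application.properties. The following sketch is illustrative only; it assumes the Work class generated above and simply mirrors the option names listed in this chapter:

camel.dataformat.thrift.instance-class=org.apache.camel.dataformat.thrift.generated.Work
camel.dataformat.thrift.content-type-format=json
camel.dataformat.thrift.content-type-header=false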
[ "from(\"direct:marshal\") .unmarshal() .thrift(\"org.apache.camel.dataformat.thrift.generated.Work\", \"json\") .to(\"mock:reverse\");", "namespace java org.apache.camel.dataformat.thrift.generated enum Operation { ADD = 1, SUBTRACT = 2, MULTIPLY = 3, DIVIDE = 4 } struct Work { 1: i32 num1 = 0, 2: i32 num2, 3: Operation op, 4: optional string comment, }", "ThriftDataFormat format = new ThriftDataFormat(new Work()); from(\"direct:in\").marshal(format); from(\"direct:back\").unmarshal(format).to(\"mock:reverse\");", "// You don't need to specify the default instance for the thrift marshaling from(\"direct:marshal\").marshal().thrift(); from(\"direct:unmarshalA\").unmarshal() .thrift(\"org.apache.camel.dataformat.thrift.generated.Work\") .to(\"mock:reverse\"); from(\"direct:unmarshalB\").unmarshal().thrift(new Work()).to(\"mock:reverse\");", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <unmarshal> <thrift instanceClass=\"org.apache.camel.dataformat.thrift.generated.Work\" /> </unmarshal> <to uri=\"mock:result\"/> </route> </camelContext>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-thrift</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/thrift-dataformat
probe::ioblock_trace.end
probe::ioblock_trace.end Name probe::ioblock_trace.end - Fires whenever a block I/O transfer is complete. Synopsis Values None Description name - name of the probe point q - request queue on which this bio was queued. devname - block device name ino - i-node number of the mapped file bytes_done - number of bytes transferred sector - beginning sector for the entire bio flags - see below BIO_UPTODATE 0 ok after I/O completion BIO_RW_BLOCK 1 RW_AHEAD set, and read/write would block BIO_EOF 2 out-of-bounds error BIO_SEG_VALID 3 nr_hw_seg valid BIO_CLONED 4 doesn't own data BIO_BOUNCED 5 bio is a bounce bio BIO_USER_MAPPED 6 contains user pages BIO_EOPNOTSUPP 7 not supported rw - binary trace for read/write request vcnt - bio vector count which represents the number of array elements (page, offset, length) which make up this I/O request idx - offset into the bio vector array phys_segments - number of segments in this bio after physical address coalescing is performed. size - total size in bytes bdev - target block device bdev_contains - points to the device object which contains the partition (when bio structure represents a partition) p_start_sect - points to the start sector of the partition structure of the device Context The process signals the transfer is done.
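As an illustration (not part of the original reference entry), a minimal SystemTap script using this probe and the variables documented above might look like the following; run it with stap as root:

probe ioblock_trace.end {
  printf("%s: %s finished %d bytes at sector %d\n",
         name, devname, bytes_done, sector)
}

This prints one line per completed block I/O transfer; filter on devname if you only care about a specific device.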
[ "ioblock_trace.end" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ioblock-trace-end
Chapter 14. Porting containers to systemd using Podman
Chapter 14. Porting containers to systemd using Podman Podman (Pod Manager) is a simple, daemonless, fully featured container engine. Podman provides a Docker-CLI comparable command line that makes the transition from other container engines easier and enables the management of pods, containers, and images. Originally, Podman was not designed to provide an entire Linux system or manage services, such as start-up order, dependency checking, and failed service recovery. systemd was responsible for a complete system initialization. Due to Red Hat integrating containers with systemd , you can manage OCI and Docker-formatted containers built by Podman in the same way as other services and features are managed in a Linux system. You can use the systemd initialization service to work with pods and containers. With systemd unit files, you can: Set up a container or pod to start as a systemd service. Define the order in which the containerized service runs and check for dependencies (for example, making sure another service is running, a file is available or a resource is mounted). Control the state of the systemd system using the systemctl command. You can generate portable descriptions of containers and pods by using systemd unit files. 14.1. Auto-generating a systemd unit file using Quadlets With Quadlet, you describe how to run a container in a format that is very similar to regular systemd unit files. The container descriptions focus on the relevant container details and hide technical details of running containers under systemd . Create the <CTRNAME> .container unit file in one of the following directories: For root users: /usr/share/containers/systemd/ or /etc/containers/systemd/ For rootless users: USDHOME/.config/containers/systemd/ , USDXDG_CONFIG_HOME/containers/systemd/, /etc/containers/systemd/users/USD(UID) , or /etc/containers/systemd/users/ Note Quadlet is available beginning with Podman v4.6. Prerequisites The container-tools meta-package is installed. Procedure Create the mysleep.container unit file: In the [Container] section you must specify: Image - the container image you want to run Exec - the command you want to run inside the container This enables you to use all other fields specified in a systemd unit file. Create the mysleep.service based on the mysleep.container file: Optional: Check the status of the mysleep.service : Start the mysleep.service : Verification Check the status of the mysleep.service : List all containers: Note that the name of the created container consists of the following elements: a systemd- prefix a name of the systemd unit, that is systemd-mysleep This naming helps to distinguish common containers from containers running in systemd units. It also helps to determine which unit a container runs in. If you want to change the name of the container, use the ContainerName field in the [Container] section. Additional resources Make systemd better for Podman with Quadlet Quadlet upstream documentation 14.2. Enabling systemd services When enabling the service, you have different options. Procedure Enable the service: To enable a service at system start, no matter if user is logged in or not, enter: You have to copy the systemd unit files to the /etc/systemd/system directory. To start a service at user login and stop it at user logout, enter: You have to copy the systemd unit files to the USDHOME/.config/systemd/user directory.
To enable users to start a service at system start and persist over logouts, enter: Additional resources systemctl and loginctl man pages on your system Enabling a system service to start at boot 14.3. Auto-starting containers using systemd You can control the state of the systemd system and service manager using the systemctl command. You can enable, start, stop the service as a non-root user. To install the service as a root user, omit the --user option. Prerequisites The container-tools meta-package is installed. Procedure Reload systemd manager configuration: Enable the service container.service and start it at boot time: Start the service immediately: Check the status of the service: You can check if the service is enabled using the systemctl is-enabled container.service command. Verification List containers that are running or have exited: Note To stop container.service , enter: Additional resources systemctl man page on your system Running containers with Podman and shareable systemd services Enabling a system service to start at boot 14.4. Advantages of using Quadlets over the podman generate systemd command You can use the Quadlets tool, which describes how to run a container in a format similar to regular systemd unit files. Note Quadlet is available beginning with Podman v4.6. Quadlets have many advantages over generating unit files using the podman generate systemd command, such as: Easy to maintain : The container descriptions focus on the relevant container details and hide technical details of running containers under systemd . Automatically updated : Quadlets do not require manually regenerating unit files after an update. If a newer version of Podman is released, your service is automatically updated when the systemctl daemon-reload command is executed, for example, at boot time. Simplified workflow : Thanks to the simplified syntax, you can create Quadlet files from scratch and deploy them anywhere. Support standard systemd options : Quadlet extends the existing systemd-unit syntax with new tables, for example, a table to configure a container. Note Quadlet supports a subset of Kubernetes YAML capabilities. For more information, see the support matrix of supported YAML fields . You can generate the YAML files by using one of the following tools: Podman: podman generate kube command OpenShift: oc generate command with the --dry-run option Kubernetes: kubectl create command with the --dry-run option Quadlet supports these unit file types: Container units : Used to manage containers by running the podman run command. File extension: .container Section name: [Container] Required fields: Image describing the container image the service runs Kube units : Used to manage containers defined in Kubernetes YAML files by running the podman kube play command. File extension: .kube Section name: [Kube] Required fields: Yaml defining the path to the Kubernetes YAML file Network units : Used to create Podman networks that may be referenced in .container or .kube files. File extension: .network Section name: [Network] Required fields: None Volume units : Used to create Podman volumes that may be referenced in .container files. File extension: .volume Section name: [Volume] Required fields: None Additional resources Quadlet upstream documentation 14.5. Generating a systemd unit file using Podman Podman allows systemd to control and manage container processes. You can generate a systemd unit file for the existing containers and pods using the podman generate systemd command.
It is recommended to use podman generate systemd because the generated units files change frequently (via updates to Podman) and the podman generate systemd ensures that you get the latest version of unit files. Note Starting with Podman v4.6, you can use the Quadlets that describe how to run a container in a format similar to regular systemd unit files and hides the complexity of running containers under systemd . Prerequisites The container-tools meta-package is installed. Procedure Create a container (for example myubi ): Use the container name or ID to generate the systemd unit file and direct it into the ~/.config/systemd/user/container-myubi.service file: Verification Display the content of generated systemd unit file: The Restart=on-failure line sets the restart policy and instructs systemd to restart when the service cannot be started or stopped cleanly, or when the process exits non-zero. The ExecStart line describes how we start the container. The ExecStop line describes how we stop and remove the container. Additional resources Running containers with Podman and shareable systemd services 14.6. Automatically generating a systemd unit file using Podman By default, Podman generates a unit file for existing containers or pods. You can generate more portable systemd unit files using the podman generate systemd --new . The --new flag instructs Podman to generate unit files that create, start and remove containers. Note Starting with Podman v4.6, you can use the Quadlets that describe how to run a container in a format similar to regular systemd unit files and hides the complexity of running containers under systemd . Prerequisites The container-tools meta-package is installed. Procedure Pull the image you want to use on your system. For example, to pull the httpd-24 image: Optional: List all images available on your system: Create the httpd container: Optional: Verify the container has been created: Generate a systemd unit file for the httpd container: Display the content of the generated container-httpd.service systemd unit file: Note Unit files generated using the --new option do not expect containers and pods to exist. Therefore, they perform the podman run command when starting the service (see the ExecStart line) instead of the podman start command. For example, see section Generating a systemd unit file using Podman . The podman run command uses the following command-line options: The --conmon-pidfile option points to a path to store the process ID for the conmon process running on the host. The conmon process terminates with the same exit status as the container, which allows systemd to report the correct service status and restart the container if needed. The --cidfile option points to the path that stores the container ID. The %t is the path to the run time directory root, for example /run/user/USDUserID . The %n is the full name of the service. Copy unit files to /etc/systemd/system for installing them as a root user: Enable and start the container-httpd.service : Verification Check the status of the container-httpd.service : Additional resources Improved Systemd Integration with Podman 2.0 Enabling a system service to start at boot 14.7. Automatically starting pods using systemd You can start multiple containers as systemd services. Note that the systemctl command should only be used on the pod and you should not start or stop containers individually via systemctl , as they are managed by the pod service along with the internal infra-container. 
Note Starting with Podman v4.6, you can use the Quadlets that describe how to run a container in a format similar to regular systemd unit files and hides the complexity of running containers under systemd . Prerequisites The container-tools meta-package is installed. Procedure Create an empty pod, for example named systemd-pod : Optional: List all pods: Create two containers in the empty pod. For example, to create container0 and container1 in systemd-pod : Optional: List all pods and containers associated with them: Generate the systemd unit file for the new pod: Note that three systemd unit files are generated, one for the systemd-pod pod and two for the containers container0 and container1 . Display pod-systemd-pod.service unit file: The Requires line in the [Unit] section defines dependencies on container-container0.service and container-container1.service unit files. Both unit files will be activated. The ExecStart and ExecStop lines in the [Service] section start and stop the infra-container, respectively. Display container-container0.service unit file: The BindsTo line line in the [Unit] section defines the dependency on the pod-systemd-pod.service unit file The ExecStart and ExecStop lines in the [Service] section start and stop the container0 respectively. Display container-container1.service unit file: Copy all the generated files to USDHOME/.config/systemd/user for installing as a non-root user: Enable the service and start at user login: Note that the service stops at user logout. Verification Check if the service is enabled: Additional resources podman-create , podman-generate-systemd , and systemctl man pages on your system Running containers with Podman and shareable systemd services Enabling a system service to start at boot 14.8. Automatically updating containers using Podman The podman auto-update command allows you to automatically update containers according to their auto-update policy. The podman auto-update command updates services when the container image is updated on the registry. To use auto-updates, containers must be created with the --label "io.containers.autoupdate=image" label and run in a systemd unit generated by podman generate systemd --new command. Podman searches for running containers with the "io.containers.autoupdate" label set to "image" and communicates to the container registry. If the image has changed, Podman restarts the corresponding systemd unit to stop the old container and create a new one with the new image. As a result, the container, its environment, and all dependencies, are restarted. Note Starting with Podman v4.6, you can use the Quadlets that describe how to run a container in a format similar to regular systemd unit files and hides the complexity of running containers under systemd . Prerequisites The container-tools meta-package is installed. Procedure Start a myubi container based on the registry.access.redhat.com/ubi9/ubi-init image: Optional: List containers that are running or have exited: Generate a systemd unit file for the myubi container: Copy unit files to /usr/lib/systemd/system for installing it as a root user: Reload systemd manager configuration: Start and check the status of a container: Auto-update the container: Additional resources Improved Systemd Integration with Podman 2.0 Running containers with Podman and shareable systemd services Enabling a system service to start at boot 14.9. 
Automatically updating containers using systemd As mentioned in section Automatically updating containers using Podman , you can update the container using the podman auto-update command. It integrates into custom scripts and can be invoked when needed. Another way to auto update the containers is to use the pre-installed podman-auto-update.timer and podman-auto-update.service systemd service. The podman-auto-update.timer can be configured to trigger auto updates at a specific date or time. The podman-auto-update.service can further be started by the systemctl command or be used as a dependency by other systemd services. As a result, auto updates based on time and events can be triggered in various ways to meet individual needs and use cases. Note Starting with Podman v4.6, you can use the Quadlets that describe how to run a container in a format similar to regular systemd unit files and hides the complexity of running containers under systemd . Prerequisites The container-tools meta-package is installed. Procedure Display the podman-auto-update.service unit file: Display the podman-auto-update.timer unit file: In this example, the podman auto-update command is launched daily at midnight. Enable the podman-auto-update.timer service at system start: Start the systemd service: Optional: List all timers: You can see that podman-auto-update.timer activates the podman-auto-update.service . Additional resources Improved Systemd Integration with Podman 2.0 Running containers with Podman and shareable systemd services Enabling a system service to start at boot
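To complement the .container example earlier in this chapter, the following sketch shows a Quadlet .volume unit together with a container unit that mounts it. The unit names, image, and mount path are illustrative assumptions, not values taken from this guide; place both files in one of the Quadlet directories listed earlier and reload systemd:

mydata.volume:
[Volume]
# No required fields; an empty [Volume] section is enough to create a named volume.

myapp.container:
[Unit]
Description=Container with a Quadlet-managed volume

[Container]
Image=registry.access.redhat.com/ubi9-minimal:latest
Exec=sleep 1000
Volume=mydata.volume:/data

[Install]
WantedBy=default.target

After running systemctl --user daemon-reload, starting myapp.service creates the volume and mounts it at /data inside the container.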
[ "cat USDHOME/.config/containers/systemd/mysleep.container [Unit] Description=The sleep container After=local-fs.target [Container] Image=registry.access.redhat.com/ubi9-minimal:latest Exec=sleep 1000 Start by default on boot WantedBy=multi-user.target default.target", "systemctl --user daemon-reload", "systemctl --user status mysleep.service ○ mysleep.service - The sleep container Loaded: loaded (/home/ username /.config/containers/systemd/mysleep.container; generated) Active: inactive (dead)", "systemctl --user start mysleep.service", "systemctl --user status mysleep.service ● mysleep.service - The sleep container Loaded: loaded (/home/ username /.config/containers/systemd/mysleep.container; generated) Active: active (running) since Thu 2023-02-09 18:07:23 EST; 2s ago Main PID: 265651 (conmon) Tasks: 3 (limit: 76815) Memory: 1.6M CPU: 94ms CGroup:", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 421c8293fc1b registry.access.redhat.com/ubi9-minimal:latest sleep 1000 30 seconds ago Up 10 seconds ago systemd-mysleep", "systemctl enable <service>", "systemctl --user enable <service>", "loginctl enable-linger <username>", "systemctl --user daemon-reload", "systemctl --user enable container.service", "systemctl --user start container.service", "systemctl --user status container.service ● container.service - Podman container.service Loaded: loaded (/home/user/.config/systemd/user/container.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2020-09-16 11:56:57 CEST; 8s ago Docs: man:podman-generate-systemd(1) Process: 80602 ExecStart=/usr/bin/podman run --conmon-pidfile //run/user/1000/container.service-pid --cidfile //run/user/1000/container.service-cid -d ubi9-minimal:> Process: 80601 ExecStartPre=/usr/bin/rm -f //run/user/1000/container.service-pid //run/user/1000/container.service-cid (code=exited, status=0/SUCCESS) Main PID: 80617 (conmon) CGroup: /user.slice/user-1000.slice/[email protected]/container.service ├─ 2870 /usr/bin/podman ├─80612 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-> ├─80614 /usr/bin/fuse-overlayfs -o lowerdir=/home/user/.local/share/containers/storage/overlay/l/YJSPGXM2OCDZPLMLXJOW3NRF6Q:/home/user/.local/share/contain> ├─80617 /usr/bin/conmon --api-version 1 -c cbc75d6031508dfd3d78a74a03e4ace1732b51223e72a2ce4aa3bfe10a78e4fa -u cbc75d6031508dfd3d78a74a03e4ace1732b51223e72> └─cbc75d6031508dfd3d78a74a03e4ace1732b51223e72a2ce4aa3bfe10a78e4fa └─80626 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1d", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f20988d59920 registry.access.redhat.com/ubi9-minimal:latest top 12 seconds ago Up 11 seconds ago funny_zhukovsky", "systemctl --user stop container.service", "podman create --name myubi registry.access.redhat.com/ubi9:latest sleep infinity 0280afe98bb75a5c5e713b28de4b7c5cb49f156f1cce4a208f13fee2f75cb453", "podman generate systemd --name myubi > ~/.config/systemd/user/container-myubi.service", "cat ~/.config/systemd/user/container-myubi.service container-myubi.service autogenerated by Podman 3.3.1 Wed Sep 8 20:34:46 CEST 2021 [Unit] Description=Podman container-myubi.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=/run/user/1000/containers [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start myubi 
ExecStop=/usr/bin/podman stop -t 10 myubi ExecStopPost=/usr/bin/podman stop -t 10 myubi PIDFile=/run/user/1000/containers/overlay-containers/9683103f58a32192c84801f0be93446cb33c1ee7d9cdda225b78049d7c5deea4/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target", "podman pull registry.access.redhat.com/ubi9/httpd-24", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi9/httpd-24 latest 8594be0a0b57 2 weeks ago 462 MB", "podman create --name httpd -p 8080:8080 registry.access.redhat.com/ubi9/httpd-24 cdb9f981cf143021b1679599d860026b13a77187f75e46cc0eac85293710a4b1", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES cdb9f981cf14 registry.access.redhat.com/ubi9/httpd-24:latest /usr/bin/run-http... 5 minutes ago Created 0.0.0.0:8080->8080/tcp httpd", "podman generate systemd --new --files --name httpd /root/container-httpd.service", "cat /root/container-httpd.service container-httpd.service autogenerated by Podman 3.3.1 Wed Sep 8 20:41:44 CEST 2021 [Unit] Description=Podman container-httpd.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=%t/containers [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStartPre=/bin/rm -f %t/%n.ctr-id ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm -d --replace --name httpd -p 8080:8080 registry.access.redhat.com/ubi9/httpd-24 ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id Type=notify NotifyAccess=all [Install] WantedBy=multi-user.target default.target", "cp -Z container-httpd.service /etc/systemd/system", "systemctl daemon-reload systemctl enable --now container-httpd.service Created symlink /etc/systemd/system/multi-user.target.wants/container-httpd.service /etc/systemd/system/container-httpd.service. 
Created symlink /etc/systemd/system/default.target.wants/container-httpd.service /etc/systemd/system/container-httpd.service.", "systemctl status container-httpd.service ● container-httpd.service - Podman container-httpd.service Loaded: loaded (/etc/systemd/system/container-httpd.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-08-24 09:53:40 EDT; 1min 5s ago Docs: man:podman-generate-systemd(1) Process: 493317 ExecStart=/usr/bin/podman run --conmon-pidfile /run/container-httpd.pid --cidfile /run/container-httpd.ctr-id --cgroups=no-conmon -d --repla> Process: 493315 ExecStartPre=/bin/rm -f /run/container-httpd.pid /run/container-httpd.ctr-id (code=exited, status=0/SUCCESS) Main PID: 493435 (conmon)", "podman pod create --name systemd-pod 11d4646ba41b1fffa51c108cbdf97cfab3213f7bd9b3e1ca52fe81b90fed5577", "podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID 11d4646ba41b systemd-pod Created 40 seconds ago 1 8a428b257111 11d4646ba41b1fffa51c108cbdf97cfab3213f7bd9b3e1ca52fe81b90fed5577", "podman create --pod systemd-pod --name container0 registry.access.redhat.com/ubi 9 top podman create --pod systemd-pod --name container1 registry.access.redhat.com/ubi 9 top", "podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 24666f47d9b2 registry.access.redhat.com/ubi9:latest top 3 minutes ago Created container0 3130f724e229 systemd-pod 56eb1bf0cdfe k8s.gcr.io/pause:3.2 4 minutes ago Created 3130f724e229-infra 3130f724e229 systemd-pod 62118d170e43 registry.access.redhat.com/ubi9:latest top 3 seconds ago Created container1 3130f724e229 systemd-pod", "podman generate systemd --files --name systemd-pod /home/user1/pod-systemd-pod.service /home/user1/container-container0.service /home/user1/container-container1.service", "cat pod-systemd-pod.service pod-systemd-pod.service autogenerated by Podman 3.3.1 Wed Sep 8 20:49:17 CEST 2021 [Unit] Description=Podman pod-systemd-pod.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor= Requires=container-container0.service container-container1.service Before=container-container0.service container-container1.service [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start bcb128965b8e-infra ExecStop=/usr/bin/podman stop -t 10 bcb128965b8e-infra ExecStopPost=/usr/bin/podman stop -t 10 bcb128965b8e-infra PIDFile=/run/user/1000/containers/overlay-containers/1dfdcf20e35043939ea3f80f002c65c00d560e47223685dbc3230e26fe001b29/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target", "cat container-container0.service container-container0.service autogenerated by Podman 3.3.1 Wed Sep 8 20:49:17 CEST 2021 [Unit] Description=Podman container-container0.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=/run/user/1000/containers BindsTo=pod-systemd-pod.service After=pod-systemd-pod.service [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start container0 ExecStop=/usr/bin/podman stop -t 10 container0 ExecStopPost=/usr/bin/podman stop -t 10 container0 PIDFile=/run/user/1000/containers/overlay-containers/4bccd7c8616ae5909b05317df4066fa90a64a067375af5996fdef9152f6d51f5/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target", "cat container-container1.service", "cp 
pod-systemd-pod.service container-container0.service container-container1.service USDHOME/.config/systemd/user", "systemctl enable --user pod-systemd-pod.service Created symlink /home/user1/.config/systemd/user/multi-user.target.wants/pod-systemd-pod.service /home/user1/.config/systemd/user/pod-systemd-pod.service. Created symlink /home/user1/.config/systemd/user/default.target.wants/pod-systemd-pod.service /home/user1/.config/systemd/user/pod-systemd-pod.service.", "systemctl is-enabled pod-systemd-pod.service enabled", "podman run --label \"io.containers.autoupdate=image\" --name myubi -dt registry.access.redhat.com/ubi9/ubi-init top bc219740a210455fa27deacc96d50a9e20516492f1417507c13ce1533dbdcd9d", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 76465a5e2933 registry.access.redhat.com/9/ubi-init:latest top 24 seconds ago Up 23 seconds ago myubi", "podman generate systemd --new --files --name myubi /root/container-myubi.service", "cp -Z ~/container-myubi.service /usr/lib/systemd/system", "systemctl daemon-reload", "systemctl start container-myubi.service systemctl status container-myubi.service", "podman auto-update", "cat /usr/lib/systemd/system/podman-auto-update.service [Unit] Description=Podman auto-update service Documentation=man:podman-auto-update(1) Wants=network.target After=network-online.target [Service] Type=oneshot ExecStart=/usr/bin/podman auto-update [Install] WantedBy=multi-user.target default.target", "cat /usr/lib/systemd/system/podman-auto-update.timer [Unit] Description=Podman auto-update timer [Timer] OnCalendar=daily Persistent=true [Install] WantedBy=timers.target", "systemctl enable podman-auto-update.timer", "systemctl start podman-auto-update.timer", "systemctl list-timers --all NEXT LEFT LAST PASSED UNIT ACTIVATES Wed 2020-12-09 00:00:00 CET 9h left n/a n/a podman-auto-update.timer podman-auto-update.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/assembly_porting-containers-to-systemd-using-podman_building-running-and-managing-containers
Chapter 8. Direct Migration Requirements
Chapter 8. Direct Migration Requirements Direct Migration is available with Migration Toolkit for Containers (MTC) 1.4.0 or later. There are two parts of the Direct Migration: Direct Volume Migration Direct Image Migration Direct Migration enables the migration of persistent volumes and internal images directly from the source cluster to the destination cluster without an intermediary replication repository (object storage). 8.1. Prerequisites Expose the internal registries for both clusters (source and destination) involved in the migration for external traffic. Ensure the remote source and destination clusters can communicate using OpenShift Container Platform routes on port 443. Configure the exposed registry route in the source and destination MTC clusters; do this by specifying the spec.exposedRegistryPath field or from the MTC UI. Note If the destination cluster is the same as the host cluster (where a migration controller exists), there is no need to configure the exposed registry route for that particular MTC cluster. The spec.exposedRegistryPath is required only for Direct Image Migration and not Direct Volume Migration. Ensure the two spec flags in MigPlan custom resource (CR) indirectImageMigration and indirectVolumeMigration are set to false for Direct Migration to be performed. The default value for these flags is false . The Direct Migration feature of MTC uses the Rsync utility. 8.2. Rsync configuration for direct volume migration Direct Volume Migration (DVM) in MTC uses Rsync to synchronize files between the source and the target persistent volumes (PVs), using a direct connection between the two PVs. Rsync is a command-line tool that allows you to transfer files and directories to local and remote destinations. The rsync command used by DVM is optimized for clusters functioning as expected. The MigrationController CR exposes the following variables to configure rsync_options in Direct Volume Migration: Variable Type Default value Description rsync_opt_bwlimit int Not set When set to a positive integer, --bwlimit=<int> option is added to Rsync command. rsync_opt_archive bool true Sets the --archive option in the Rsync command. rsync_opt_partial bool true Sets the --partial option in the Rsync command. rsync_opt_delete bool true Sets the --delete option in the Rsync command. rsync_opt_hardlinks bool true Sets the --hard-links option is the Rsync command. rsync_opt_info string COPY2 DEL2 REMOVE2 SKIP2 FLIST2 PROGRESS2 STATS2 Enables detailed logging in Rsync Pod. rsync_opt_extras string Empty Reserved for any other arbitrary options. Setting the options set through the variables above are global for all migrations. The configuration will take effect for all future migrations as soon as the Operator successfully reconciles the MigrationController CR. Any ongoing migration can use the updated settings depending on which step it currently is in. Therefore, it is recommended that the settings be applied before running a migration. The users can always update the settings as needed. Use the rsync_opt_extras variable with caution. Any options passed using this variable are appended to the rsync command, with addition. Ensure you add white spaces when specifying more than one option. Any error in specifying options can lead to a failed migration. However, you can update MigrationController CR as many times as you require for future migrations. Customizing the rsync_opt_info flag can adversely affect the progress reporting capabilities in MTC. 
However, removing progress reporting can have a performance advantage. This option should only be used when the performance of Rsync operation is observed to be unacceptable. Note The default configuration used by DVM is tested in various environments. It is acceptable for most production use cases provided the clusters are healthy and performing well. These configuration variables should be used in case the default settings do not work and the Rsync operation fails. 8.2.1. Resource limit configurations for Rsync pods The MigrationController CR exposes following variables to configure resource usage requirements and limits on Rsync: Variable Type Default Description source_rsync_pod_cpu_limits string 1 Source rsync pod's CPU limit source_rsync_pod_memory_limits string 1Gi Source rsync pod's memory limit source_rsync_pod_cpu_requests string 400m Source rsync pod's cpu requests source_rsync_pod_memory_requests string 1Gi Source rsync pod's memory requests target_rsync_pod_cpu_limits string 1 Target rsync pod's cpu limit target_rsync_pod_cpu_requests string 400m Target rsync pod's cpu requests target_rsync_pod_memory_limits string 1Gi Target rsync pod's memory limit target_rsync_pod_memory_requests string 1Gi Target rsync pod's memory requests 8.2.1.1. Supplemental group configuration for Rsync pods If Persistent Volume Claims (PVC) are using a shared storage, the access to storage can be configured by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Variable Type Default Description src_supplemental_groups string Not Set Comma separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not Set Comma separated list of supplemental groups for target Rsync Pods For example, the MigrationController CR can be updated to set the values: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 8.2.1.2. Rsync retry configuration With Migration Toolkit for Containers (MTC) 1.4.3 and later, a new ability of retrying a failed Rsync operation is introduced. By default, the migration controller retries Rsync until all of the data is successfully transferred from the source to the target volume or a specified number of retries is met. The default retry limit is set to 20 . For larger volumes, a limit of 20 retries may not be sufficient. You can increase the retry limit by using the following variable in the MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40 In this example, the retry limit is increased to 40 . 8.2.1.3. Running Rsync as either root or non-root OpenShift Container Platform environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. 
8.2.1.3.1. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges before migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 8.2.1.3.2. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 8.2.1.3.3. Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 8.2.2. MigCluster Configuration For every MigCluster resource created in Migration Toolkit for Containers (MTC), a ConfigMap named migration-cluster-config is created in the Migration Operator's namespace on the cluster which MigCluster resource represents. The migration-cluster-config allows you to configure MigCluster specific values. The Migration Operator manages the migration-cluster-config . 
You can configure every value in the ConfigMap using the variables exposed in the MigrationController CR: Variable Type Required Description migration_stage_image_fqin string No Image to use for Stage Pods (applicable only to IndirectVolumeMigration) migration_registry_image_fqin string No Image to use for Migration Registry rsync_endpoint_type string No Type of endpoint for data transfer ( Route , ClusterIP , NodePort ) rsync_transfer_image_fqin string No Image to use for Rsync Pods (applicable only to DirectVolumeMigration) migration_rsync_privileged bool No Whether to run Rsync Pods as privileged or not migration_rsync_super_privileged bool No Whether to run Rsync Pods as super privileged containers ( spc_t SELinux context) or not cluster_subdomain string No Cluster's subdomain migration_registry_readiness_timeout int No Readiness timeout (in seconds) for Migration Registry Deployment migration_registry_liveness_timeout int No Liveness timeout (in seconds) for Migration Registry Deployment exposed_registry_validation_path string No Subpath to validate exposed registry in a MigCluster (for example /v2) 8.3. Direct migration known issues 8.3.1. Applying the Skip SELinux relabel workaround with spc_t automatically on workloads running on OpenShift Container Platform When attempting to migrate a namespace with Migration Toolkit for Containers (MTC) and a substantial volume associated with it, the rsync-server may become frozen without any further information to troubleshoot the issue. 8.3.1.1. Diagnosing the need for the Skip SELinux relabel workaround Search for an error of Unable to attach or mount volumes for pod... timed out waiting for the condition in the kubelet logs from the node where the rsync-server for the Direct Volume Migration (DVM) runs. Example kubelet log kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29 8.3.1.2. Resolving using the Skip SELinux relabel workaround To resolve this issue, set the migration_rsync_super_privileged parameter to true in both the source and destination MigClusters using the MigrationController custom resource (CR). 
Example MigrationController CR apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: "" cluster_name: host mig_namespace_limit: "10" mig_pod_limit: "100" mig_pv_limit: "100" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3 1 The value of the migration_rsync_super_privileged parameter indicates whether or not to run Rsync Pods as super privileged containers ( spc_t selinux context ). Valid settings are true or false .
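Bringing together the Rsync tuning variables described earlier in this chapter, a combined MigrationController configuration might look like the following sketch. The bandwidth limit, retry limit, and resource values are illustrative assumptions only; the variable names come from the tables above:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  rsync_opt_bwlimit: 10240
  rsync_backoff_limit: 40
  source_rsync_pod_cpu_limits: "1"
  source_rsync_pod_memory_limits: 2Gi
  target_rsync_pod_cpu_limits: "1"
  target_rsync_pod_memory_limits: 2Gi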
[ "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3", "kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_containers/1.8/html/migration_toolkit_for_containers/mtc-direct-migration-requirements
Chapter 3. OpenShift Data Foundation operators
Chapter 3. OpenShift Data Foundation operators Red Hat OpenShift Data Foundation is composed of the following three Operator Lifecycle Manager (OLM) operator bundles, deploying four operators which codify administrative tasks and custom resources so that task and resource characteristics can be easily automated: OpenShift Data Foundation odf-operator OpenShift Container Storage ocs-operator rook-ceph-operator Multicloud Object Gateway mcg-operator Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state or approaching that state, with minimal administrator intervention. 3.1. OpenShift Data Foundation operator The odf-operator can be described as a "meta" operator for OpenShift Data Foundation, that is, an operator meant to influence other operators. The odf-operator has the following primary functions: Enforces the configuration and versioning of the other operators that comprise OpenShift Data Foundation. It does this by using two primary mechanisms: operator dependencies and Subscription management. The odf-operator bundle specifies dependencies on other OLM operators to make sure they are always installed at specific versions. The operator itself manages the Subscriptions for all other operators to make sure the desired versions of those operators are available for installation by the OLM. Provides the OpenShift Data Foundation external plugin for the OpenShift Console. Provides an API to integrate storage solutions with the OpenShift Console. 3.1.1. Components The odf-operator has a dependency on the ocs-operator package. It also manages the Subscription of the mcg-operator . In addition, the odf-operator bundle defines a second Deployment for the OpenShift Data Foundation external plugin for the OpenShift Console. This defines an nginx -based Pod that serves the necessary files to register and integrate OpenShift Data Foundation dashboards directly into the OpenShift Container Platform Console. 3.1.2. Design diagram This diagram illustrates how odf-operator is integrated with the OpenShift Container Platform. Figure 3.1. OpenShift Data Foundation Operator 3.1.3. Responsibilities The odf-operator defines the following CRD: StorageSystem The StorageSystem CRD represents an underlying storage system that provides data storage and services for OpenShift Container Platform. It triggers the operator to ensure the existence of a Subscription for a given Kind of storage system. 3.1.4. Resources The odf-operator creates the following CRs in response to the spec of a given StorageSystem. Operator Lifecycle Manager Resources Creates a Subscription for the operator which defines and reconciles the given StorageSystem's Kind. 3.1.5. Limitation The odf-operator does not provide any data storage or services itself. It exists as an integration and management layer for other storage systems. 3.1.6. High availability As with most of the other operators, high availability is not a primary requirement for the odf-operator Pod. In general, there are no operations that require or benefit from process distribution. OpenShift Container Platform quickly spins up a replacement Pod whenever the current Pod becomes unavailable or is deleted. 3.1.7. Relevant config files The odf-operator comes with a ConfigMap of variables that can be used to modify the behavior of the operator. 3.1.8. 
Relevant log files To get an understanding of the OpenShift Data Foundation and troubleshoot issues, you can look at the following: Operator Pod logs StorageSystem status Underlying storage system CRD statuses Operator Pod logs Each operator provides standard Pod logs that include information about reconciliation and errors encountered. These logs often have information about successful reconciliation which can be filtered out and ignored. StorageSystem status and events The StorageSystem CR stores the reconciliation details in the status of the CR and has associated events. The spec of the StorageSystem contains the name, namespace, and Kind of the actual storage system's CRD, which the administrator can use to find further information on the status of the storage system. 3.1.9. Lifecycle The odf-operator is required to be present as long as the OpenShift Data Foundation bundle remains installed. This is managed as part of OLM's reconciliation of the OpenShift Data Foundation CSV. At least one instance of the pod should be in Ready state. The operator operands such as CRDs should not affect the lifecycle of the operator. The creation and deletion of StorageSystems is an operation outside the operator's control and must be initiated by the administrator or automated with the appropriate application programming interface (API) calls. 3.2. OpenShift Container Storage operator The ocs-operator can be described as a "meta" operator for OpenShift Data Foundation, that is, an operator meant to influence other operators and serves as a configuration gateway for the features provided by the other operators. It does not directly manage the other operators. The ocs-operator has the following primary functions: Creates Custom Resources (CRs) that trigger the other operators to reconcile against them. Abstracts the Ceph and Multicloud Object Gateway configurations and limits them to known best practices that are validated and supported by Red Hat. Creates and reconciles the resources required to deploy containerized Ceph and NooBaa according to the support policies. 3.2.1. Components The ocs-operator does not have any dependent components. However, the operator has a dependency on the existence of all the custom resource definitions (CRDs) from other operators, which are defined in the ClusterServiceVersion (CSV). 3.2.2. Design diagram This diagram illustrates how OpenShift Container Storage is integrated with the OpenShift Container Platform. Figure 3.2. OpenShift Container Storage Operator 3.2.3. Responsibilities The two ocs-operator CRDs are: OCSInitialization StorageCluster OCSInitialization is a singleton CRD used for encapsulating operations that apply at the operator level. The operator takes care of ensuring that one instance always exists. The CR triggers the following: Performs initialization tasks required for OpenShift Container Storage. If needed, these tasks can be triggered to run again by deleting the OCSInitialization CRD. Ensures that the required Security Context Constraints (SCCs) for OpenShift Container Storage are present. Manages the deployment of the Ceph toolbox Pod, used for performing advanced troubleshooting and recovery operations. The StorageCluster CRD represents the system that provides the full functionality of OpenShift Container Storage. It triggers the operator to ensure the generation and reconciliation of Rook-Ceph and NooBaa CRDs. The ocs-operator algorithmically generates the CephCluster and NooBaa CRDs based on the configuration in the StorageCluster spec. 
The operator also creates additional CRs, such as CephBlockPools , Routes , and so on. These resources are required for enabling different features of OpenShift Container Storage. Currently, only one StorageCluster CR per OpenShift Container Platform cluster is supported. 3.2.4. Resources The ocs-operator creates the following CRs in response to the spec of the CRDs it defines. The configuration of some of these resources can be overridden, allowing for changes to the generated spec or not creating them altogether. General resources Events Creates various events when required in response to reconciliation. Persistent Volumes (PVs) PVs are not created directly by the operator. However, the operator keeps track of all the PVs created by the Ceph CSI drivers and ensures that the PVs have appropriate annotations for the supported features. Quickstarts Deploys various Quickstart CRs for the OpenShift Container Platform Console. Rook-Ceph resources CephBlockPool Define the default Ceph block pools. CephFilesystem Define the default CephFS filesystem. Route Define the route for the Ceph object store. StorageClass Define the default Storage classes (for example, for CephBlockPool and CephFilesystem ). VolumeSnapshotClass Define the default volume snapshot classes for the corresponding storage classes. Multicloud Object Gateway resources NooBaa Define the default Multicloud Object Gateway system. Monitoring resources Metrics Exporter Service Metrics Exporter Service Monitor PrometheusRules 3.2.5. Limitation The ocs-operator neither deploys nor reconciles the other Pods of OpenShift Data Foundation. The ocs-operator CSV defines the top-level components such as operator Deployments, and the Operator Lifecycle Manager (OLM) reconciles the specified component. 3.2.6. High availability As with most of the other operators, high availability is not a primary requirement for the ocs-operator Pod. In general, there are no operations that require or benefit from process distribution. OpenShift Container Platform quickly spins up a replacement Pod whenever the current Pod becomes unavailable or is deleted. 3.2.7. Relevant config files The ocs-operator configuration is entirely specified by the CSV and is not modifiable without a custom build of the CSV. 3.2.8. Relevant log files To get an understanding of OpenShift Container Storage and troubleshoot issues, you can look at the following: Operator Pod logs StorageCluster status and events OCSInitialization status Operator Pod logs Each operator provides standard Pod logs that include information about reconciliation and errors encountered. These logs often have information about successful reconciliation which can be filtered out and ignored. StorageCluster status and events The StorageCluster CR stores the reconciliation details in the status of the CR and has associated events. Status contains a section of the expected container images. It shows the container images that it expects to be present in the pods from other operators and the images that it currently detects. This helps to determine whether the OpenShift Container Storage upgrade is complete. OCSInitialization status This status shows whether the initialization tasks are completed successfully. 3.2.9. Lifecycle The ocs-operator is required to be present as long as the OpenShift Container Storage bundle remains installed. This is managed as part of OLM's reconciliation of the OpenShift Container Storage CSV. At least one instance of the pod should be in Ready state. 
The operator operands such as CRDs should not affect the lifecycle of the operator. An OCSInitialization CR should always exist. The operator creates one if it does not exist. The creation and deletion of StorageClusters is an operation outside the operator's control and must be initiated by the administrator or automated with the appropriate API calls. 3.3. Rook-Ceph operator Rook-Ceph operator is the Rook operator for Ceph in the OpenShift Data Foundation. Rook enables Ceph storage systems to run on the OpenShift Container Platform. The Rook-Ceph operator is a simple container that automatically bootstraps the storage clusters and monitors the storage daemons to ensure the storage clusters are healthy. 3.3.1. Components The Rook-Ceph operator manages a number of components as part of the OpenShift Data Foundation deployment. Ceph-CSI Driver The operator creates and updates the CSI driver, including a provisioner for each of the two drivers, RADOS block device (RBD) and Ceph filesystem (CephFS) and a volume plugin daemonset for each of the two drivers. Ceph daemons Mons The monitors (mons) provide the core metadata store for Ceph. OSDs The object storage daemons (OSDs) store the data on underlying devices. Mgr The manager (mgr) collects metrics and provides other internal functions for Ceph. RGW The RADOS Gateway (RGW) provides the S3 endpoint to the object store. MDS The metadata server (MDS) provides CephFS shared volumes. 3.3.2. Design diagram The following image illustrates how Ceph Rook integrates with OpenShift Container Platform. Figure 3.3. Rook-Ceph Operator With Ceph running in the OpenShift Container Platform cluster, OpenShift Container Platform applications can mount block devices and filesystems managed by Rook-Ceph, or can use the S3/Swift API for object storage. 3.3.3. Responsibilities The Rook-Ceph operator is a container that bootstraps and monitors the storage cluster. It performs the following functions: Automates the configuration of storage components Starts, monitors, and manages the Ceph monitor pods and Ceph OSD daemons to provide the RADOS storage cluster Initializes the pods and other artifacts to run the services to manage: CRDs for pools Object stores (S3/Swift) Filesystems Monitors the Ceph mons and OSDs to ensure that the storage remains available and healthy Deploys and manages Ceph mons placement while adjusting the mon configuration based on cluster size Watches the desired state changes requested by the API service and applies the changes Initializes the Ceph-CSI drivers that are needed for consuming the storage Automatically configures the Ceph-CSI driver to mount the storage to pods Rook-Ceph Operator architecture The Rook-Ceph operator image includes all required tools to manage the cluster. There is no change to the data path. However, the operator does not expose all Ceph configurations. Many of the Ceph features like placement groups and crush maps are hidden from the users and are provided with a better user experience in terms of physical resources, pools, volumes, filesystems, and buckets. 3.3.4. Resources Rook-Ceph operator adds owner references to all the resources it creates in the openshift-storage namespace. When the cluster is uninstalled, the owner references ensure that the resources are all cleaned up. This includes OpenShift Container Platform resources such as configmaps , secrets , services , deployments , daemonsets , and so on. 
The Rook-Ceph operator watches CRs to configure the settings determined by OpenShift Data Foundation, which includes CephCluster , CephObjectStore , CephFilesystem , and CephBlockPool . 3.3.5. Lifecycle Rook-Ceph operator manages the lifecycle of the following pods in the Ceph cluster: Rook operator A single pod that owns the reconcile of the cluster. RBD CSI Driver Two provisioner pods, managed by a single deployment. One plugin pod per node, managed by a daemonset . CephFS CSI Driver Two provisioner pods, managed by a single deployment. One plugin pod per node, managed by a daemonset . Monitors (mons) Three mon pods, each with its own deployment. Stretch clusters Contain five mon pods, one in the arbiter zone and two in each of the other two data zones. Manager (mgr) There is a single mgr pod for the cluster. Stretch clusters There are two mgr pods (starting with OpenShift Data Foundation 4.8), one in each of the two non-arbiter zones. Object storage daemons (OSDs) At least three OSDs are created initially in the cluster. More OSDs are added when the cluster is expanded. Metadata server (MDS) The CephFS metadata server has a single pod. RADOS gateway (RGW) The Ceph RGW daemon has a single pod. 3.4. MCG operator The Multicloud Object Gateway (MCG) operator is an operator for OpenShift Data Foundation along with the OpenShift Data Foundation operator and the Rook-Ceph operator. The MCG operator is available upstream as a standalone operator. The MCG operator performs the following primary functions: Controls and reconciles the Multicloud Object Gateway (MCG) component within OpenShift Data Foundation. Manages new user resources such as object bucket claims, bucket classes, and backing stores. Creates the default out-of-the-box resources. A few configurations and information are passed to the MCG operator through the OpenShift Data Foundation operator. 3.4.1. Components The MCG operator does not have sub-components. However, it consists of a reconcile loop for the different resources that are controlled by it. The MCG operator has a command-line interface (CLI) and is available as a part of OpenShift Data Foundation. It enables the creation, deletion, and querying of various resources. This CLI adds a layer of input sanitation and status validation before the configurations are applied unlike applying a YAML file directly. 3.4.2. Responsibilities and resources The MCG operator reconciles and is responsible for the custom resource definitions (CRDs) and OpenShift Container Platform entities. Backing store Namespace store Bucket class Object bucket claims (OBCs) NooBaa, pod stateful sets CRD Prometheus Rules and Service Monitoring Horizontal pod autoscaler (HPA) Backing store A resource that the customer has connected to the MCG component. This resource provides MCG the ability to save the data of the provisioned buckets on top of it. A default backing store is created as part of the deployment depending on the platform that the OpenShift Container Platform is running on. For example, when OpenShift Container Platform or OpenShift Data Foundation is deployed on Amazon Web Services (AWS), it results in a default backing store which is an AWS::S3 bucket. Similarly, for Microsoft Azure, the default backing store is a blob container and so on. The default backing stores are created using CRDs for the cloud credential operator, which comes with OpenShift Container Platform. There is no limit on the amount of the backing stores that can be added to MCG. 
The backing stores are used in the bucket class CRD to define the different policies of the bucket. Refer to the documentation of the specific OpenShift Data Foundation version to identify the types of services or resources supported as backing stores. Namespace store Resources that are used in namespace buckets. No default is created during deployment. Bucketclass A default or initial policy for a newly provisioned bucket. The following policies are set in a bucketclass: Placement policy Indicates the backing stores to be attached to the bucket and used to write the data of the bucket. This policy is used for data buckets and for cache policies to indicate the local cache placement. There are two modes of placement policy: Spread. Stripes the data across the defined backing stores Mirror. Creates a full replica on each backing store Namespace policy A policy for the namespace buckets that defines the resources that are being used for aggregation and the resource used for the write target. Cache Policy This is a policy for the bucket and sets the hub (the source of truth) and the time to live (TTL) for the cache items. A default bucket class is created during deployment and it is set with a placement policy that uses the default backing store. There is no limit to the number of bucket classes that can be added. Refer to the documentation of the specific OpenShift Data Foundation version to identify the types of policies that are supported. Object bucket claims (OBCs) CRDs that enable provisioning of S3 buckets. With MCG, OBCs receive an optional bucket class to note the initial configuration of the bucket. If a bucket class is not provided, the default bucket class is used. NooBaa, pod stateful sets CRD An internal CRD that controls the different pods of the NooBaa deployment such as the DB pod, the core pod, and the endpoints. This CRD must not be changed as it is internal. This operator reconciles the following entities: DB pod SCC Role Binding and Service Account to allow single sign-on (SSO) between OpenShift Container Platform and NooBaa user interfaces Route for S3 access Certificates that are taken and signed by the OpenShift Container Platform and are set on the S3 route Prometheus rules and service monitoring These CRDs set up scraping points for Prometheus and alert rules that are supported by MCG. Horizontal pod autoscaler (HPA) It is integrated with the MCG endpoints. The endpoint pods scale up and down according to CPU pressure (amount of S3 traffic). 3.4.3. High availability As an operator, the only high availability provided is that the OpenShift Container Platform reschedules a failed pod. 3.4.4. Relevant log files To troubleshoot issues with the NooBaa operator, you can look at the following: Operator pod logs, which are also available through the must-gather. Different CRDs or entities and their statuses that are available through the must-gather. 3.4.5. Lifecycle The MCG operator runs and reconciles after OpenShift Data Foundation is deployed and until it is uninstalled.
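For routine troubleshooting of the operators described in this chapter, a few read-only commands are usually sufficient; the following is an illustrative sketch rather than an official procedure, and the pod name is a placeholder:
$ oc get storagesystem,storagecluster -n openshift-storage
$ oc get pods -n openshift-storage | grep operator
$ oc logs <operator_pod_name> -n openshift-storage
The status sections of the StorageSystem and StorageCluster CRs, together with the operator Pod logs, contain most of the information referred to in the relevant log files sections above.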
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_operators
Preface
Preface This comprehensive guide provides users with the knowledge and tools needed to make the most of our robust and feature-rich container registry service, Quay.io.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/about_quay_io/pr01
Chapter 1. Interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL
Chapter 1. Interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL If you have multiple versions of Red Hat build of OpenJDK installed on RHEL, you can interactively select the default Red Hat build of OpenJDK version to use system-wide. Note If you do not have root privileges, you can select a Red Hat build of OpenJDK version by configuring the JAVA_HOME environment variable. Prerequisites You must have root privileges on the system. Multiple versions of Red Hat build of OpenJDK were installed using the yum package manager. Procedure View the Red Hat build of OpenJDK versions installed on the system. $ yum list installed "java*" A list of installed Java packages appears. Display the Red Hat build of OpenJDK versions that can be used for a specific java command and select the one to use: The current system-wide Red Hat build of OpenJDK version is marked with an asterisk. The current Red Hat build of OpenJDK version for the specified java command is marked with a plus sign. Press Enter to keep the current selection, or enter the Selection number of the Red Hat build of OpenJDK version you want to select, followed by the Enter key. The selected version becomes the default Red Hat build of OpenJDK version for the system. Verify that the chosen binary is selected. Note This procedure configures the java command. The javac command can be set up in a similar way, but it operates independently. If you have Red Hat build of OpenJDK installed, alternatives provides more possible selections. In particular, the javac master alternative switches many binaries provided by the -devel sub-package. Even if you have Red Hat build of OpenJDK installed, java (and other JRE masters) and javac (and other Red Hat build of OpenJDK masters) still operate separately, so you can have different selections for JRE and JDK. The alternatives --config java command affects the jre and its associated slaves. If you want to change Red Hat build of OpenJDK, use the javac alternatives command. The --config javac utility configures the SDK and related slaves. To see all possible masters, use alternatives --list and check all of the java , javac , jre , and sdk masters.
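As a brief sketch of the javac selection mentioned in the note above (output omitted, and the grep pattern is only an illustration), the flow mirrors the java case:
$ sudo alternatives --config javac
$ alternatives --list | grep -E '^(java|javac|jre|sdk)'
$ javac -version
The first command switches the SDK master and its slaves, the second helps you review the java, javac, jre, and sdk masters, and the third confirms which compiler is currently selected.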
[ "Installed Packages java-1.8.0-openjdk.x86_64 1:1.8.0.242.b08-0.el8_1 @rhel-8-appstream-rpms java-1.8.0-openjdk-headless.x86_64 1:1.8.0.242.b08-0.el8_1 @rhel-8-appstream-rpms java-11-openjdk.x86_64 1:11.0.9.10-0.el8_1 @rhel-8-appstream-rpms java-11-openjdk-headless.x86_64 1:11.0.9.10-0.el8_1 @rhel-8-appstream-rpms javapackages-filesystem.noarch 5.3.0-1.module+el8+2447+6f56d9a6 @rhel-8-appstream-rpms", "sudo alternatives --config java There are 2 programs which provide 'java'. Selection Command ----------------------------------------------- * 1 java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.232.b09-0.el8_0.x86_64/jre/bin/java) + 2 java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.9.10-0.el8_0.x86_64/bin/java) Enter to keep the current selection[+], or type selection number: 1", "java -version openjdk version \"11.0.9\" 2020-10-15 LTS OpenJDK Runtime Environment 18.9 (build 11.0.9+10-LTS) OpenJDK 64-Bit Server VM 18.9 (build 11.0.9+10-LTS, mixed mode, sharing)" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/configuring_red_hat_build_of_openjdk_11_on_rhel/interactively-selecting-systemwide-openjdk-version-on-rhel8
probe::signal.flush
probe::signal.flush Name probe::signal.flush - Flushing all pending signals for a task Synopsis signal.flush Values:
task: The task handler of the process performing the flush
pid_name: The name of the process associated with the task performing the flush
name: Name of the probe point
sig_pid: The PID of the process associated with the task performing the flush
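A minimal console sketch of using this probe point (the printf format is illustrative; -x restricts the probe to a single process ID):
$ stap -e 'probe signal.flush { if (pid() == target()) printf("%s: %s flushed pending signals for PID %d\n", name, pid_name, sig_pid) }' -x <pid>
Each time the target task flushes its pending signals, one line is printed from the values listed above.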
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-flush
Chapter 7. Supported integration products
Chapter 7. Supported integration products AMQ Streams 1.8 supports integration with the following Red Hat products. Red Hat Single Sign-On 7.4 and later Provides OAuth 2.0 authentication and OAuth 2.0 authorization. For information on the functionality these products can introduce to your AMQ Streams deployment, refer to the AMQ Streams 1.8 documentation. Additional resources Red Hat Single Sign-On Supported Configurations
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_rhel/supported-config-str
Red Hat Quay architecture
Red Hat Quay architecture Red Hat Quay 3.12 Red Hat Quay Architecture Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_architecture/index
Chapter 20. Debugging a Running Application
Chapter 20. Debugging a Running Application This chapter introduces techniques for debugging an application that can be executed as many times as needed, on a machine directly accessible to the developer. 20.1. Enabling Debugging with Debugging Information To debug applications and libraries, debugging information is required. The following sections describe how to obtain this information. 20.1.1. Debugging Information While debugging any executable code, two kinds of information allow the tools, and by extension the programmer, to comprehend the binary code: The source code text A description of how the source code text relates to the binary code This is referred to as debugging information. Red Hat Enterprise Linux uses the ELF format for executable binaries, shared libraries, and debuginfo files. Within these ELF files, the DWARF format is used to hold the debug information. DWARF symbols are read by the readelf -w file command. Caution STABS is occasionally used with UNIX. STABS is an older, less capable format. Its use is discouraged by Red Hat. GCC and GDB support STABS production and consumption on a best-effort basis only. Some other tools such as Valgrind and elfutils do not support STABS at all. Additional Resources The DWARF Debugging Standard 20.1.2. Enabling Debugging of C and C++ Applications with GCC Because debugging information is large, it is not included in executable files by default. To enable debugging of your C and C++ applications, you must explicitly instruct the compiler to create this information. Enabling the Creation of Debugging Information with GCC To enable the creation of debugging information with GCC when compiling and linking code, use the -g option: Optimizations performed by the compiler and linker can result in executable code that is hard to relate to the original source code: variables may be optimized out, loops unrolled, operations merged into the surrounding ones, and so on. This affects debugging negatively. For an improved debugging experience, consider setting the optimization with the -Og option. However, changing the optimization level changes the executable code and may change the actual behaviour so as to remove some bugs. The -fcompare-debug GCC option tests code compiled by GCC with debug information and without debug information. The test passes if the resulting two binary files are identical. This test ensures that executable code is not affected by any debugging options, which further ensures that there are no hidden bugs in the debug code. Note that using the -fcompare-debug option significantly increases compilation time. See the GCC manual page for details about this option. Additional Resources Section 20.1, "Enabling Debugging with Debugging Information" Using the GNU Compiler Collection (GCC) - 3.10 Options for Debugging Your Program Debugging with GDB - 18.3 Debugging Information in Separate Files The GCC manual page: 20.1.3. Debuginfo Packages Debuginfo packages contain debugging information and debug source code for programs and libraries. Prerequisites Understanding of debugging information Debuginfo Packages For applications and libraries installed in packages from the Red Hat Enterprise Linux repositories, you can obtain the debugging information and debug source code as separate debuginfo packages available through another channel. The debuginfo packages contain .debug files, which contain DWARF debuginfo and the source files used for compiling the binary packages. 
Debuginfo package contents are installed to the /usr/lib/debug directory. A debuginfo package provides debugging information valid only for a binary package with the same name, version, release, and architecture: Binary package: packagename - version - release . architecture .rpm Debuginfo package: packagename -debuginfo- version - release . architecture .rpm 20.1.4. Getting debuginfo Packages for an Application or Library using GDB The GNU Debugger (GDB) automatically recognizes missing debug information and resolves the package name. Prerequisites The application or library you want to debug is installed on the system GDB is installed on the system The debuginfo-install tool is installed on the system Procedure Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run. Exit GDB without proceeding further: type q and Enter . Run the command suggested by GDB to install the needed debuginfo packages: Installing a debuginfo package for an application or library installs debuginfo packages for all dependencies, too. In case GDB is not able to suggest the debuginfo package, follow the procedure in Section 20.1.5, "Getting debuginfo Packages for an Application or Library Manually" . Additional Resources Red Hat Developer Toolset User Guide - Installing Debugging Information Red Hat Knowledgebase solution - How can I download or install debuginfo packages for RHEL systems? 20.1.5. Getting debuginfo Packages for an Application or Library Manually To manually choose which debuginfo packages to install, locate the executable file and find the package that installs it. Note The use of GDB to determine the packages for installation is preferable. Use this manual procedure only if GDB is not able to suggest the package to install. Prerequisites The application or library must be installed on the system The debuginfo-install tool must be available on the system Procedure Find the executable file of the application or library. Use the which command to find the application file. Use the locate command to find the library file. If the original reasons for debugging included error messages, pick the result where the library has the same additional numbers in its file name. If in doubt, try following the rest of the procedure with the result where the library file name includes no additional numbers. Note The locate command is provided by the mlocate package. To install it and enable its use: Using the file path, search for a package which provides that file. The output provides a list of packages in the format name - version . distribution . architecture . In this step, only the package name is important, because the version shown in yum output may not be the actual installed version. Important If this step does not produce any results, it is not possible to determine which package provided the binary file and this procedure fails. Use the rpm low-level package management tool to find what package version is installed on the system. Use the package name as an argument: The output provides details for the installed package in the format name - version . distribution . architecture . Install the debuginfo packages using the debuginfo-install utility. In the command, use the package name and other details you determined during the previous steps: Installing a debuginfo package for an application or library installs debuginfo packages for all dependencies, too. 
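To make the manual flow concrete, here is an illustrative console session reusing the zlib example from this procedure; the exact file paths, package names, and versions on your system will differ:
$ locate libz | grep so
$ yum provides /usr/lib64/libz.so.1.2.7
$ rpm -q zlib
$ debuginfo-install zlib-1.2.7-17.el7.x86_64
The yum provides step identifies the owning package name, rpm -q reports the exact installed version, and debuginfo-install then installs the matching debuginfo packages, including those of dependencies.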
Additional Resources Red Hat Developer Toolset User Guide - Installing Debugging Information Knowledgebase article - How can I download or install debuginfo packages for RHEL systems? 20.2. Inspecting the Application's Internal State with GDB To find why an application does not work properly, control its execution and examine its internal state with a debugger. This section describes how to use the GNU Debugger (GDB) for this task. 20.2.1. GNU Debugger (GDB) A debugger is a tool that enables control of code execution and inspection of the state of the code. This capability is used to investigate what is happening in a program and why. Red Hat Enterprise Linux contains the GNU debugger (GDB) which offers this functionality through a command line user interface. For a graphical frontend to GDB, install the Eclipse integrated development environment. See Using Eclipse . GDB Capabilities A single GDB session can debug: multithreaded and forking programs multiple programs at once programs on remote machines or in containers with the gdbserver utility connected over a TCP/IP network connection Debugging Requirements To debug any executable code, GDB requires the respective debugging information: For programs developed by you, you can create the debugging information while building the code. For system programs installed from packages, their respective debuginfo packages must be installed. 20.2.2. Attaching GDB to a Process In order to examine a process, GDB must be attached to the process. Prerequisites GDB must be installed on the system Starting a Program with GDB When the program is not running as a process, start it with GDB: Replace program with a file name or path to the program. GDB starts execution of the program. You can set up breakpoints and the gdb environment before beginning the execution of the process with the run command. Attaching GDB to an Already Running Process To attach GDB to a program already running as a process: Find the process id ( pid ) with the ps command: Replace program with a file name or path to the program. Attach GDB to this process: Replace program with a file name or path to the program, replace pid with an actual process id number from the ps output. Attaching an Already Running GDB to an Already Running Process To attach an already running GDB to an already running program: Use the shell GDB command to run the ps command and find the program's process id ( pid ): Replace program with a file name or path to the program. Use the attach command to attach GDB to the program: Replace pid by an actual process id number from the ps output. Note In some cases, GDB might not be able to find the respective executable file. Use the file command to specify the path: Additional Resources Debugging with GDB - 2.1 Invoking GDB Debugging with GDB - 4.7 Debugging an Already-running Process 20.2.3. Stepping through Program Code with GDB Once the GDB debugger is attached to a program, you can use a number of commands to control the execution of the program. Prerequisites GDB must be installed on the system You must have the required debugging information available: The program is compiled and built with debugging information, or The relevant debuginfo packages are installed GDB is attached to the program that is to be debugged GDB Commands to Step Through the Code r (run) Start the execution of the program. If run is executed with arguments, those arguments are passed on to the executable as if the program was started normally. 
Users normally issue this command after setting breakpoints. start Start the execution of the program and stop at the beginning of the main function. If start is executed with arguments, those arguments are passed on to the executable as if the program was started normally. c (continue) Continue the execution of the program from the current state. The execution of the program will continue until one of the following becomes true: A breakpoint is reached A specified condition is satisfied A signal is received by the program An error occurs The program terminates n (next) Another commonly known name of this command is step over . Continue the execution of the program from the current state, until the next line of code in the current source file is reached. The execution of the program will continue until one of the following becomes true: A breakpoint is reached A specified condition is satisfied A signal is received by the program An error occurs The program terminates s (step) Another commonly known name of this command is step into . The step command halts execution at each sequential line of code in the current source file. However, if the execution is currently stopped at a source line containing a function call , GDB stops the execution after entering the function call (rather than executing it). until location Continue the execution until the code location specified by the location option is reached. fini (finish) Resume the execution of the program and halt when the execution returns from a function. The execution of the program will continue until one of the following becomes true: A breakpoint is reached A specified condition is satisfied A signal is received by the program An error occurs The program terminates q (quit) Terminate the execution and exit GDB. Additional Resources Section 20.2.5, "Using GDB Breakpoints to Stop Execution at Defined Code Locations" Debugging with GDB - 4.2 Starting your Program Debugging with GDB - 5.2 Continuing and Stepping 20.2.4. Showing Program Internal Values with GDB Displaying the values of the internal variables of a program is important for understanding what the program is doing. GDB offers multiple commands that you can use to inspect the internal variables. This section describes the most useful of these commands. Prerequisites Understanding of the GDB debugger GDB Commands to Display the Internal State of a Program p (print) Displays the value of the argument given. Usually, the argument is the name of a variable of any complexity, from a simple single value to a structure. An argument can also be an expression valid in the current language, including the use of program variables and library functions, or functions defined in the program being tested. It is possible to extend GDB with pretty-printer Python or Guile scripts for customized display of data structures (such as classes, structs) using the print command. bt (backtrace) Display the chain of function calls used to reach the current execution point, or the chain of functions used up until execution was terminated by a signal. This is useful for investigating serious bugs (such as segmentation faults) with elusive causes. Adding the full option to the backtrace command displays local variables, too. It is possible to extend GDB with frame filter Python scripts for customized display of data displayed using the bt and info frame commands. The term frame refers to the data associated with a single function call. info The info command is a generic command to provide information about various items. 
It takes an option specifying the item. The info args command displays arguments of the function call of the currently selected frame. The info locals command displays local variables in the currently selected frame. For a list of the possible items, run the command help info in a GDB session: l (list) Show the line in the source code where the program stopped. This command is available only when the program execution is stopped. While not strictly a command to show the internal state, list helps the user understand what changes to the internal state will happen in the next step of the program's execution. Additional Resources Red Hat Developer blog entry - The GDB Python API Debugging with GDB - 10.9 Pretty Printing 20.2.5. Using GDB Breakpoints to Stop Execution at Defined Code Locations In many cases, it is advantageous to let the program execute until a certain line of code is reached. Prerequisites Understanding of GDB Using Breakpoints in GDB Breakpoints are markers that tell GDB to stop the execution of a program. Breakpoints are most commonly associated with source code lines: Placing a breakpoint requires specifying the source file and line number. To place a breakpoint : Specify the name of the source code file and the line in that file: When file is not present, the name of the source file at the current point of execution is used: Alternatively, use a function name to place the breakpoint: A program might encounter an error after a certain number of iterations of a task. To specify an additional condition to halt execution: Replace condition with a condition in the C or C++ language. The meaning of file and line is the same as above. To inspect the status of all breakpoints and watchpoints: To remove a breakpoint by using its number as displayed in the output of info br : To remove a breakpoint at a given location: 20.2.6. Using GDB Watchpoints to Stop Execution on Data Access and Changes In many cases, it is advantageous to let the program execute until certain data changes or is accessed. This section lists the most common watchpoints. Prerequisites Understanding of GDB Using Watchpoints in GDB Watchpoints are markers that tell GDB to stop the execution of a program. Watchpoints are associated with data: Placing a watchpoint requires specifying an expression describing a variable, multiple variables, or a memory address. To place a watchpoint for data change (write): Replace expression with an expression that describes what you want to watch. For variables, expression is equal to the name of the variable. To place a watchpoint for data access (read): To place a watchpoint for any data access (both read and write): To inspect the status of all watchpoints and breakpoints: To remove a watchpoint: Replace the num option with the number reported by the info br command. 20.2.7. Debugging Forking or Threaded Programs with GDB Some programs use forking or threads to achieve parallel code execution. Debugging multiple simultaneous execution paths requires special considerations. Prerequisites Understanding of the GDB debugger Understanding of the concepts of process forking and threads Debugging Forked Programs with GDB Forking is a situation in which a program ( parent ) creates an independent copy of itself ( child ). Use the following settings and commands to affect the reaction of GDB to an occurring fork: The follow-fork-mode setting controls whether GDB follows the parent or the child after the fork. set follow-fork-mode parent After forking, debug the parent process. 
This is the default. set follow-fork-mode child After forking, debug the child process. show follow-fork-mode Displays the current setting of the follow-fork-mode . The set detach-on-fork setting controls whether the GDB keeps control of the other (not followed) process or leaves it to run. set detach-on-fork on The process which is not followed (depending on the value of the follow-fork-mode ) is detached and runs independently. This is the default. set detach-on-fork off GDB keeps control of both processes. The process which is followed (depending on the value of follow-fork-mode ) is debugged as usual, while the other is suspended. show detach-on-fork Displays the current setting of detach-on-fork . Debugging Threaded Programs with GDB GDB has the ability to debug individual threads, and to manipulate and examine them independently. To make GDB stop only the thread that is examined, use the commands set non-stop on and set target-async on . You can add these commands to the .gdbinit file. After that functionality is turned on, GDB is ready to conduct thread debugging. GDB uses the concept of current thread . By default, commands apply to the current thread only. info threads Displays a list of threads with their id and gid numbers, indicating the current thread. thread id Sets the thread with the specified id as the current thread. thread apply ids command Applies the command command to all threads listed by ids . The ids option is a space-separated list of thread ids. The special value all applies the command to all threads. break location thread id if condition Sets a breakpoint at a certain location with a certain condition only for the thread number id . watch expression thread id Sets a watchpoint defined by expression only for the thread number id . command& Executes the command command and returns immediately to the GDB prompt (gdb) , continuing code execution in the background. interrupt Halts execution in the background. Additional Resources Debugging with GDB - 4.10 Debugging Programs with Multiple Threads Debugging with GDB - 4.11 Debugging Forks 20.3. Recording Application Interactions The executable code of applications interacts with the code of the operating system and shared libraries. Recording an activity log of these interactions can provide enough insight into the application's behavior without debugging the actual application code. Alternatively, analyzing an application's interactions can help to pinpoint the conditions in which a bug manifests. 20.3.1. Useful Tools for Recording Application Interactions Red Hat Enterprise Linux offers multiple tools for analyzing an application's interactions. strace The strace tool enables the tracing of (and tampering with) interactions between an application and the Linux kernel: system calls, signal deliveries, and changes of process state. The strace output is detailed and explains the calls well, because strace interprets parameters and results with knowledge of the underlying kernel code. Numbers are turned into the respective constant names, bitwise combined flags expanded to flag lists, pointers to character arrays dereferenced to provide the actual string, and more. Support for more recent kernel features may be lacking, however. The use of strace does not require any particular setup except for setting up the log filter. Tracing the application code with strace may result in significant slowdown of the application's execution. As a result, strace is not suitable for many production deployments. 
As an alternative, consider using SystemTap in such cases. You can limit the list of traced system calls and signals to reduce the amount of captured data. strace captures only kernel-userspace interactions and does not trace library calls, for example. Consider using ltrace for tracing library calls. ltrace The ltrace tool enables logging of an application's user space calls into shared objects (dynamic libraries). ltrace enables tracing calls to any library. You can filter the traced calls to reduce the amount of captured data. The use of ltrace does not require any particular setup except for setting up the log filter. ltrace is lightweight and fast, offering an alternative to strace : it is possible to trace the respective interfaces in libraries such as glibc with ltrace instead of tracing kernel functions with strace . Note however that ltrace may be less precise at syscall tracing. ltrace is able to decode parameters only for a limited set of library calls: the calls whose prototypes are defined in the relevant configuration files. As part of the ltrace package, prototypes for some libacl , libc , and libm calls and system calls are provided. The ltrace output mostly contains only raw numbers and pointers. The interpretation of ltrace output usually requires consulting the actual interface declarations of the libraries present in the output. SystemTap SystemTap is an instrumentation platform for probing running processes and kernel activity on the Linux system. SystemTap uses its own scripting language for programming custom event handlers. Compared to using strace and ltrace , scripting the logging means more work in the initial setup phase. However, the scripting capabilities extend SystemTap's usefulness beyond just producing logs. SystemTap works by creating and inserting a kernel module. The use of SystemTap is efficient and does not create a significant slowdown of the system or application execution on its own. SystemTap comes with a set of usage examples. GDB The GNU Debugger is primarily meant for debugging, not logging. However, some of its features make it useful even in the scenario where an application's interaction is the primary activity of interest. With GDB, it is possible to conveniently combine the capture of an interaction event with immediate debugging of the subsequent execution path. GDB is best suited for analyzing responses to infrequent or singular events, after the initial identification of problematic situations by other tools. Using GDB in any scenario with frequent events becomes inefficient or even impossible. Additional Resources Red Hat Enterprise Linux SystemTap Beginners Guide Red Hat Developer Toolset User Guide 20.3.2. Monitoring an Application's System Calls with strace The strace tool enables tracing of (and optional tampering with) interactions between an application and the Linux kernel: system calls, signal deliveries, and changes of process state. Prerequisites strace is installed on the system To install strace , run as root: Procedure Note that the tracing specification syntax of strace offers regular expressions and syscall classes to help with the identification of system calls. Run or attach to the process you wish to monitor. If the program you want to monitor is not running, start strace and specify the program : Options used in the example above are not mandatory. Use when needed: The -f option is an acronym for "follow forks". This option traces children created by the fork, vfork, and clone system calls. 
The -v or -e abbrev=none option disables abbreviation of output, omitting various structure fields. The -tt option is a variant of the -t option that prefixes each line with an absolute timestamp. With the -tt option, the time printed includes microseconds. The -T option prints the amount of time spent in each system call at the end of the line. The -yy option is a variant of the -y option that enables printing of paths associated with file descriptor numbers. The -yy option prints not only paths, but also protocol-specific information associated with socket file descriptors and block or character device numbers associated with device file descriptors. The -s option controls the maximum string size to be printed. Note that filenames are not considered strings and are always printed in full. -e trace controls the set of system calls to trace. Replace call with a comma-separated list of system calls to be displayed. If call is left out, strace displays all system calls. Shorthands for some groups of system calls are provided in the strace(1) manual page. If the program is already running, find its process ID ( pid ) and attach strace to it: If you do not wish to trace any forked processes or threads, do not use the -f option. strace displays the system calls made by the application and their details. In most cases, an application and its libraries make a large number of calls and strace output appears immediately if no filter for system calls is set. strace exits when all the traced processes exit. To terminate the monitoring before the traced program exits, press Ctrl+C . If strace started the program, it will send the terminating signal (SIGINT, in this case) to the program being started. Note, however, that the program, in turn, may ignore that signal. If you attached strace to an already running program, the program terminates together with strace . Analyze the list of system calls done by the application. Problems with resource access or availability are present in the log as calls returning errors. Values passed to the system calls and patterns of call sequences provide insight into the causes of the application's behaviour. If the application crashes, the important information is probably at the end of the log. The output contains a lot of extra information. However, you can construct a more precise filter and repeat the procedure. Notes It is advantageous to both see the output and save it to a file. To do this, run the tee command: To see separate output that corresponds to different processes, run: Output for a process with the process ID ( pid ) will be stored in your_log_file.pid . Additional Resources The strace(1) manual page. Knowledgebase article - How do I use strace to trace system calls made by a command? Red Hat Developer Toolset User Guide - strace 20.3.3. Monitoring the Application's Library Function Calls with ltrace The ltrace tool enables monitoring of the calls done by an application to functions available in libraries (shared objects). Prerequisites ltrace is installed on the system Procedure Identify the libraries and functions of interest, if possible. If the program you want to monitor is not running, start ltrace and specify program : Use the options -e and -l to filter the output: Supply the function names to be displayed as function . The -e function option can be used multiple times. If left out, ltrace will display calls to all functions. Instead of specifying functions, you can specify whole libraries with the -l library option.
This option behaves similarly to the -e function option. See the ltrace(1) manual page for more information. If the program is already running, find its process ID ( pid ) and attach ltrace to it: If you do not wish to trace any forked processes or threads, leave out the -f option. ltrace displays the library calls made by the application. In most cases, an application will make a large number of calls and ltrace output will appear immediately if no filter is set. ltrace exits when the program exits. To terminate the monitoring before the traced program exits, press Ctrl+C . If ltrace started the program, the program terminates together with ltrace . If you attached ltrace to an already running program, the program terminates together with ltrace . Analyze the list of library calls done by the application. If the application crashes, the important information is probably at the end of the log. The output contains a lot of unnecessary information. However, you can construct a more precise filter and repeat the procedure. Note It is advantageous to both see the output and save it to a file. Use the tee command to achieve this: Additional Resources The ltrace(1) manual page Red Hat Developer Toolset User Guide - ltrace 20.3.4. Monitoring the Application's System Calls with SystemTap The SystemTap tool enables registering custom event handlers for kernel events. In comparison with strace , it is harder to use, but SystemTap is more efficient and enables more complicated processing logic. Prerequisites SystemTap is installed on the system Procedure Create a file my_script.stp with the contents: Find the process ID ( pid ) of the process you wish to monitor: Run SystemTap with the script: The value of pid is the process ID. The script is compiled to a kernel module which is then loaded. This introduces a slight delay between entering the command and getting the output. When the process performs a system call, the call name and its parameters are printed to the terminal. The script exits when the process terminates, or when you press Ctrl+C . Additional Resources SystemTap Beginners Guide SystemTap Tapset Reference A larger SystemTap script, which approximates strace functionality, is available as /usr/share/systemtap/examples/process/strace.stp . To run the script: # stap --example strace.stp -x pid or # stap --example strace.stp -c "cmd args ... " 20.3.5. Using GDB to Intercept Application System Calls GDB enables stopping the execution in various kinds of situations arising during the execution of a program. To stop the execution when the program performs a system call, use a GDB catchpoint . Prerequisites Understanding of GDB breakpoints GDB is attached to the program Stopping Program Execution on System Calls with GDB Set the catchpoint: The command catch syscall sets a special type of breakpoint that halts execution when a system call is performed by the program. The syscall-name option specifies the name of the call. You can specify multiple catchpoints for various system calls. Leaving out the syscall-name option causes GDB to stop on any system call. If the program has not started execution, start it: If the program execution is only halted, resume it: GDB halts the execution after any specified system call is performed by the program. Additional Resources Section 20.2.4, "Showing Program Internal Values with GDB" Section 20.2.3, "Stepping through Program Code with GDB" Debugging with GDB - 5.1.3 Setting Catchpoints 20.3.6. 
Using GDB to Intercept the Handling of Signals by Applications GDB enables stopping the execution in various kinds of situations arising during the execution of a program. To stop the execution when the program receives a signal from the operating system, use a GDB catchpoint . Prerequisites Understanding of GDB breakpoints GDB is attached to the program Stopping the Program Execution on Receiving a Signal with GDB Set the catchpoint: The command catch signal sets a special type of breakpoint that halts execution when a signal is received by the program. The signal-type option specifies the type of the signal. Use the special value 'all' to catch all the signals. If the program has not started execution, start it: If the program execution is only halted, resume it: GDB halts the execution after the program receives any specified signal. Additional Resources Section 20.2.4, "Showing Program Internal Values with GDB" Section 20.2.3, "Stepping through Program Code with GDB" Debugging With GDB - 5.3.1 Setting Catchpoints
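The following is a minimal, illustrative GDB session that combines both kinds of catchpoints; the system call name (openat) and the signal name (SIGSEGV) are example values chosen only for illustration, and the program under debug is assumed to already be loaded in GDB:
(gdb) catch syscall openat
(gdb) catch signal SIGSEGV
(gdb) r
(gdb) c
With these catchpoints set, GDB stops whenever the program issues the openat system call or receives a SIGSEGV signal, so you can inspect the program state before resuming execution with c.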
[ "gcc ... -g", "man gcc", "gdb -q /bin/ls Reading symbols from /usr/bin/ls...Reading symbols from /usr/bin/ls...(no debugging symbols found)...done. (no debugging symbols found)...done. Missing separate debuginfos, use: debuginfo-install coreutils-8.22-21.el7.x86_64 (gdb)", "(gdb) q", "debuginfo-install coreutils-8.22-21.el7.x86_64", "which nautilus /usr/bin/nautilus", "locate libz | grep so /usr/lib64/libz.so /usr/lib64/libz.so.1 /usr/lib64/libz.so.1.2.7", "yum install mlocate updatedb", "yum provides /usr/lib64/libz.so.1.2.7 Loaded plugins: product-id, search-disabled-repos, subscription-manager zlib-1.2.7-17.el7.x86_64 : The compression and decompression library Repo : @anaconda/7.4 Matched from: Filename : /usr/lib64/libz.so.1.2.7", "rpm -q zlib zlib-1.2.7-17.el7.x86_64", "debuginfo-install zlib-1.2.7-17.el7.x86_64", "gdb program", "ps -C program -o pid h pid", "gdb program -p pid", "(gdb) shell ps -C program -o pid h pid", "(gdb) attach pid", "(gdb) file path/to/program", "(gdb) help info", "(gdb) br file:line", "(gdb) br line", "(gdb) br function_name", "(gdb) br file:line if condition", "(gdb) info br", "(gdb) delete number", "(gdb) clear file:line", "(gdb) watch expression", "(gdb) rwatch expression", "(gdb) awatch expression", "(gdb) info br", "(gdb) delete num", "yum install strace", "strace -fvttTyy -s 256 -e trace= call program", "ps -C program (...) strace -fvttTyy -s 256 -e trace= call -p pid", "strace ...-o |tee your_log_file.log >&2", "strace ... -ff -o your_log_file", "ltrace -f -l library -e function program", "ps -C program (...) ltrace ... -p pid", "ltrace ... |& tee your_log_file.log", "probe begin { printf(\"waiting for syscalls of process %d \\n\", target()) } probe syscall.* { if (pid() == target()) printf(\"%s(%s)\\n\", name, argstr) } probe process.end { if (pid() == target()) exit() }", "ps -aux", "stap my_script.stp -x pid", "(gdb) catch syscall syscall-name", "(gdb) r", "(gdb) c", "(gdb) catch signal signal-type", "(gdb) r", "(gdb) c" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/debugging-running-application
Chapter 98. KafkaTopicStatus schema reference
Chapter 98. KafkaTopicStatus schema reference Used in: KafkaTopic Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer topicName Topic name. string
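For illustration, the status of a reconciled KafkaTopic resource might look similar to the following excerpt; the topic name, timestamp, and generation value are placeholders, not values from this reference:
status:
  conditions:
  - type: Ready
    status: "True"
    lastTransitionTime: "2023-11-06T12:00:00Z"
  observedGeneration: 1
  topicName: my-topic
Assuming the oc client and the topic name used here, you could retrieve this status with a command such as oc get kafkatopic my-topic -o yaml.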
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkatopicstatus-reference
26.8. Installing a CA Into an Existing IdM Domain
26.8. Installing a CA Into an Existing IdM Domain If an IdM domain was installed without a Certificate Authority (CA), you can install the CA services subsequently. Depending on your environment, you can install the IdM Certificate Server CA or use an external CA. Note For details on the supported CA configurations, see Section 2.3.2, "Determining What CA Configuration to Use" . Installing an IdM Certificate Server Use the following command to install the IdM Certificate Server CA: Run the ipa-certupdate utility on all servers and clients to update them with the information about the new certificate from LDAP. You must run ipa-certupdate on every server and client separately. Important Always run ipa-certupdate after manually installing a certificate. If you do not, the certificate will not be distributed to the other machines. Installing an External CA The subsequent installation of an external CA consists of multiple steps: Start the installation: After this step, a message is displayed stating that a certificate signing request (CSR) was saved. Submit the CSR to the external CA and copy the issued certificate to the IdM server. Continue the installation by passing the certificates and the full path to the external CA files to ipa-ca-install : Run the ipa-certupdate utility on all servers and clients to update them with the information about the new certificate from LDAP. You must run ipa-certupdate on every server and client separately. Important Always run ipa-certupdate after manually installing a certificate. If you do not, the certificate will not be distributed to the other machines. The CA installation does not replace the existing service certificates for the LDAP and web server with ones issued by the newly installed CA. For details on how to replace the certificates, see Section 26.9, "Replacing the Web Server's and LDAP Server's Certificate" .
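As an illustration of the post-installation step, running the update utility on each server and client could look like this; the command takes no arguments in the simplest case and is run as root on every machine separately:
# ipa-certupdate
If the command completes without errors, the machine has received the new CA certificate information from LDAP.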
[ "[root@ipa-server ~] ipa-ca-install", "[root@ipa-server ~] ipa-ca-install --external-ca", "ipa-ca-install --external-cert-file=/root/ master .crt --external-cert-file=/root/ca.crt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/ca-less-to-ca
25.11.2. Related Books
25.11.2. Related Books Apache: The Definitive Guide , 3rd edition, by Ben Laurie and Peter Laurie, O'Reilly & Associates, Inc.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Additional_Resources-Related_Books
E.9. Model Extension Definition Editor
E.9. Model Extension Definition Editor The MED Editor is an editor with multiple tabs and is used to create and edit user-defined MEDs ( *.mxd files) in the workspace. The MED Editor has three sub-editors (Overview, Properties, and Source), which share a common header section. Here are the MED sub-editor tabs: Overview Sub-Editor - this editor is where the general MED information is managed. This information includes the namespace prefix, namespace URI, extended model class, and the description. The Overview sub-editor looks like this: Figure E.25. Overview Tab Properties Sub-Editor - this editor is where the MED extension properties are managed. Each extension property must be associated with a model object type. The Properties sub-editor is divided into two sections (Extended Model Objects and Extension Properties) and looks like this: Figure E.26. Properties Tab Source - this tab is a read-only XML source viewer to view the details of your MED. This source viewer is NOT editable. The GUI components on the Overview and Properties sub-editors will be decorated with an error icon when the data in that GUI component has a validation error. Hovering over an error decoration displays a tooltip with the specific error message. These error messages correspond to the messages shown in the common header section. Here is an example of the error decoration: Figure E.27. Text Field With Error The MED sub-editors share a header section. The header is composed of the following: Status Image - an image indicating the most severe validation message (error, warning, or info). If there are no validation messages, the model extension image is shown. Title - the title of the sub-editor being shown. Menu - a drop-down menu containing actions for (1) adding to and updating the MED in the registry, and (2) showing the Model Extension Registry View. Validation Message - this area will display an OK message or an error summary message. When a summary message is shown, the tooltip for that message will enumerate all the messages. Toolbar - contains the same actions as the drop-down menu. Below is an example of the shared header section, which includes an error message tooltip. Figure E.28. Shared Header Example
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/model_extension_definition_editor
Chapter 9. Kafka clients
Chapter 9. Kafka clients The kafka-clients JAR file contains the Kafka Producer and Consumer APIs together with the Kafka AdminClient API. The Producer API allows applications to send data to a Kafka broker. The Consumer API allows applications to consume data from a Kafka broker. The AdminClient API provides functionality for managing Kafka clusters, including topics, brokers, and other components. 9.1. Adding Kafka clients as a dependency to your Maven project This procedure shows you how to add the AMQ Streams Java clients as a dependency to your Maven project. Prerequisites A Maven project with an existing pom.xml . Procedure Add the Red Hat Maven repository to the <repositories> section of your pom.xml file. Add the clients to the <dependencies> section of your pom.xml file. Build your Maven project.
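For example, assuming a standard Maven installation and the pom.xml shown below, you might build the project and confirm that the client dependency resolves with commands such as the following; the grep filter is only a convenience for illustration:
mvn clean package
mvn dependency:tree | grep kafka-clients
The exact version shown in the dependency tree depends on the AMQ Streams release you are using.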
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <!-- ... --> <repositories> <repository> <id>redhat-maven</id> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> <!-- ... --> </project>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <!-- ... --> <dependencies> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-clients</artifactId> <version>3.1.0.redhat-00004</version> </dependency> </dependencies> <!-- ... --> </project>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/assembly-kafka-clients-str
Chapter 4. Working with SELinux
Chapter 4. Working with SELinux The following sections give a brief overview of the main SELinux packages in Red Hat Enterprise Linux; installing and updating packages; which log files are used; the main SELinux configuration file; enabling and disabling SELinux; SELinux modes; configuring Booleans; temporarily and persistently changing file and directory labels; overriding file system labels with the mount command; mounting NFS volumes; and how to preserve SELinux contexts when copying and archiving files and directories. 4.1. SELinux Packages In a full installation of Red Hat Enterprise Linux, the SELinux packages are installed by default unless they are manually excluded during installation. If performing a minimal installation in text mode, the policycoreutils-python and policycoreutils-gui packages are not installed by default. Also, by default, SELinux runs in enforcing mode and the SELinux targeted policy is used. The following SELinux packages are installed on your system by default: policycoreutils provides utilities such as restorecon , secon , setfiles , semodule , load_policy , and setsebool , for operating and managing SELinux. selinux-policy provides a basic directory structure, the selinux-policy.conf file, and RPM macros. selinux-policy-targeted provides the SELinux targeted policy. libselinux provides an API for SELinux applications. libselinux-utils provides the avcstat , getenforce , getsebool , matchpathcon , selinuxconlist , selinuxdefcon , selinuxenabled , and setenforce utilities. libselinux-python provides Python bindings for developing SELinux applications. The following packages are not installed by default but can be optionally installed by running the yum install <package-name> command: selinux-policy-devel provides utilities for creating a custom SELinux policy and policy modules. selinux-policy-doc provides manual pages that describe how to configure SELinux together with various services. selinux-policy-mls provides the MLS (Multi-Level Security) SELinux policy. setroubleshoot-server translates denial messages, produced when access is denied by SELinux, into detailed descriptions that can be viewed with the sealert utility, also provided in this package. setools-console provides the Tresys Technology SETools distribution, a number of utilities and libraries for analyzing and querying policy, audit log monitoring and reporting, and file context management. The setools package is a meta-package for SETools. The setools-gui package provides the apol and seaudit utilities. The setools-console package provides the sechecker , sediff , seinfo , sesearch , and findcon command-line utilities. Note that the setools and setools-gui packages are available only when the Red Hat Network Optional channel is enabled. For further information, see Scope of Coverage Details . mcstrans translates levels, such as s0-s0:c0.c1023 , to a form that is easier to read, such as SystemLow-SystemHigh . policycoreutils-python provides utilities such as semanage , audit2allow , audit2why , and chcat , for operating and managing SELinux. policycoreutils-gui provides system-config-selinux , a graphical utility for managing SELinux.
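For example, to add some of the optional packages and confirm the current SELinux mode, you could run commands similar to the following; the package selection here is illustrative only:
# yum install setroubleshoot-server setools-console
# getenforce
Enforcing
The getenforce utility, provided by the libselinux-utils package, prints Enforcing, Permissive, or Disabled depending on the current mode.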
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-working_with_selinux
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_in_external_mode/making-open-source-more-inclusive
Getting Started with Fuse on Apache Karaf
Getting Started with Fuse on Apache Karaf Red Hat Fuse 7.13 Get started quickly with Red Hat Fuse on Karaf Red Hat Fuse Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/getting_started_with_fuse_on_apache_karaf/index
Chapter 1. Backup and restore
Chapter 1. Backup and restore 1.1. Control plane backup and restore operations As a cluster administrator, you might need to stop an OpenShift Container Platform cluster for a period and restart it later. Some reasons for restarting a cluster are that you need to perform maintenance on a cluster or want to reduce resource costs. In OpenShift Container Platform, you can perform a graceful shutdown of a cluster so that you can easily restart the cluster later. You must back up etcd data before shutting down a cluster; etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. An etcd backup plays a crucial role in disaster recovery. In OpenShift Container Platform, you can also replace an unhealthy etcd member . When you want to get your cluster running again, restart the cluster gracefully . Note A cluster's certificates expire one year after the installation date. You can shut down a cluster and expect it to restart gracefully while the certificates are still valid. Although the cluster automatically retrieves the expired control plane certificates, you must still approve the certificate signing requests (CSRs) . You might run into several situations where OpenShift Container Platform does not work as expected, such as: You have a cluster that is not functional after the restart because of unexpected conditions, such as node failure or network connectivity issues. You have deleted something critical in the cluster by mistake. You have lost the majority of your control plane hosts, leading to etcd quorum loss. You can always recover from a disaster situation by restoring your cluster to its previous state by using the saved etcd snapshots. Additional resources Quorum protection with machine lifecycle hooks 1.2. Application backup and restore operations As a cluster administrator, you can back up and restore applications running on OpenShift Container Platform by using the OpenShift API for Data Protection (OADP). OADP backs up and restores Kubernetes resources and internal images, at the granularity of a namespace, by using the version of Velero that is appropriate for the version of OADP you install, according to the table in Downloading the Velero CLI tool . OADP backs up and restores persistent volumes (PVs) by using snapshots or Restic. For details, see OADP features . 1.2.1. OADP requirements OADP has the following requirements: You must be logged in as a user with a cluster-admin role. You must have object storage for storing backups, such as one of the following storage types: OpenShift Data Foundation Amazon Web Services Microsoft Azure Google Cloud Platform S3-compatible object storage Note If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1.x. OADP 1.0.x does not support CSI backup on OCP 4.11 and later. OADP 1.0.x includes Velero 1.7.x and expects the API group snapshot.storage.k8s.io/v1beta1 , which is not present on OCP 4.11 and later. Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
To back up PVs with snapshots, you must have cloud storage that has a native snapshot API or supports Container Storage Interface (CSI) snapshots, such as the following providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud storage, such as Ceph RBD or Ceph FS Note If you do not want to back up PVs by using snapshots, you can use Restic , which is installed by the OADP Operator by default. 1.2.2. Backing up and restoring applications You back up applications by creating a Backup custom resource (CR). See Creating a Backup CR . You can configure the following backup options: Creating backup hooks to run commands before or after the backup operation Scheduling backups Restic backups You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR . You can configure restore hooks to run commands in init containers or in the application container during the restore operation.
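As a minimal sketch, a Backup CR for a single namespace might look like the following; the resource name and namespace values are placeholders and assume a default OADP installation in the openshift-adp namespace:
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-example
  namespace: openshift-adp
spec:
  includedNamespaces:
  - my-app
You could create it with oc create -f backup-example.yaml and watch its progress with a command such as oc get backup backup-example -n openshift-adp -o jsonpath='{.status.phase}'.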
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/backup_and_restore/backup-restore-overview
Chapter 1. Getting Started
Chapter 1. Getting Started You can install Red Hat Enterprise Linux with an installation utility called Anaconda . Most users can simply follow the procedure outlined in Section 4.1, "Interactive Installation" to install Red Hat Enterprise Linux using the graphical interface in Anaconda . Users with advanced requirements can also use the graphical interface to configure many aspects of the installation, and install Red Hat Enterprise Linux on a wide variety of systems. On systems without a local interface, the installation can be performed entirely remotely. Installation can also be automated by using a Kickstart file, and performed with no interaction at all. 1.1. Graphical Installation The Red Hat Enterprise Linux installer, Anaconda , provides a simple graphical method to install Red Hat Enterprise Linux. The graphical installation interface has a built-in help system that can guide you through most installations, even if you have never installed Linux before. However, Anaconda can also be used to configure advanced installation options if required. Anaconda is different from most other operating system installation programs due to its parallel nature. Most installers follow a linear path; you must choose your language first, then you configure networking, and so on. There is usually only one way to proceed at any given time. In the graphical interface of Anaconda , you are at first only required to select your language and locale, and then you are presented with a central screen, where you can configure most aspects of the installation in any order you like. While certain parts require others to be completed before configuration - for example, when installing from a network location, you must configure networking before you can select which packages to install - most options in Anaconda can be configured in any order. If a background task, such as network initialization or disk detection, is blocking configuration of a certain option, you can configure unrelated options while waiting for it to complete. Additional differences appear in certain screens; notably, the custom partitioning process is very different from that of other Linux distributions. These differences are described in each screen's subsection. Some screens will be automatically configured depending on your hardware and the type of media you used to start the installation. You can still change the detected settings in any screen. Screens which have not been automatically configured, and therefore require your attention before you begin the installation, are marked by an exclamation mark. You cannot start the actual installation process before you finish configuring these settings. Installation can also be performed in text mode; however, certain options, notably custom partitioning, are unavailable. See Section 8.3, "Installing in Text Mode" , or if using an IBM Power system or IBM Z, see Section 13.3, "Installing in Text Mode" , or Section 18.4, "Installing in Text Mode" , respectively, for more information.
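As an illustration of the automated approach mentioned above, a minimal Kickstart file might contain entries such as the following; all values are placeholders, and a real file would normally be generated from an existing installation or written to match your environment:
lang en_US.UTF-8
keyboard us
timezone America/New_York
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
%packages
@core
%end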
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-getting-started
Part I. Deployment
Part I. Deployment This part provides instructions on how to install and configure a Red Hat Enterprise Linux 7 machine to act as a virtualization host system, and how to install and configure guest virtual machines using the KVM hypervisor.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/part-deployment
Chapter 92. MavenArtifact schema reference
Chapter 92. MavenArtifact schema reference Used in: Plugin The type property is a discriminator that distinguishes use of the MavenArtifact type from JarArtifact , TgzArtifact , ZipArtifact , OtherArtifact . It must have the value maven for the type MavenArtifact . Property Description repository Maven repository to download the artifact from. Applicable to the maven artifact type only. string group Maven group id. Applicable to the maven artifact type only. string artifact Maven artifact id. Applicable to the maven artifact type only. string version Maven version number. Applicable to the maven artifact type only. string insecure By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifacts will be downloaded, even when the server is considered insecure. boolean type Must be maven . string
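For illustration, a maven type artifact might be declared inside a connector plugin of a KafkaConnect build section as follows; the group, artifact, and version values are placeholders:
artifacts:
- type: maven
  group: org.example
  artifact: example-connector
  version: 1.0.0
A repository URL can also be added to download from a particular Maven repository, and insecure: true should only be set if you accept that TLS verification is disabled for the download.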
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-mavenartifact-reference
Chapter 7. Configure Network Bonding
Chapter 7. Configure Network Bonding Red Hat Enterprise Linux 7 allows administrators to bind multiple network interfaces together into a single, bonded channel. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy. Warning The use of direct cable connections without network switches is not supported for bonding. The failover mechanisms described here will not work as expected without the presence of network switches. See the Red Hat Knowledgebase article Why is bonding not supported with direct connection using crossover cables? for more information. Note The active-backup, balance-tlb, and balance-alb modes do not require any specific configuration of the switch. Other bonding modes require configuring the switch to aggregate the links. For example, a Cisco switch requires EtherChannel for Modes 0, 2, and 3, but for Mode 4 LACP and EtherChannel are required. See the documentation supplied with your switch and see https://www.kernel.org/doc/Documentation/networking/bonding.txt 7.1. Understanding the Default Behavior of Controller and Port Interfaces When controlling bonded port interfaces using the NetworkManager daemon, and especially when fault finding, keep the following in mind: Starting the controller interface does not automatically start the port interfaces. Starting a port interface always starts the controller interface. Stopping the controller interface also stops the port interfaces. A controller without ports can start static IP connections. A controller without ports waits for ports when starting DHCP connections. A controller with a DHCP connection waiting for ports completes when a port with a carrier is added. A controller with a DHCP connection waiting for ports continues waiting when a port without a carrier is added.
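As a minimal sketch of creating a bond with NetworkManager, the following nmcli commands create an active-backup bond and add two ports to it; the connection and interface names (bond0, enp1s0, enp2s0) are placeholders for your environment:
# nmcli con add type bond con-name bond0 ifname bond0 mode active-backup
# nmcli con add type bond-slave con-name bond0-port1 ifname enp1s0 master bond0
# nmcli con add type bond-slave con-name bond0-port2 ifname enp2s0 master bond0
# nmcli con up bond0
Because starting the controller interface does not automatically start the port interfaces, you may also need to bring up each port connection explicitly, for example with nmcli con up bond0-port1.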
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/ch-configure_network_bonding