Chapter 3. Standalone upgrade
Chapter 3. Standalone upgrade In general, Red Hat Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Red Hat Quay 3.8 to the latest version of 3.13 is not supported. Instead, users would have to upgrade as follows: 3.8.z → 3.9.z, 3.9.z → 3.10.z, 3.10.z → 3.11.z, 3.11.z → 3.12.z, 3.12.z → 3.13.z. This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade. In some cases, Red Hat Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This exception to the normal prior-minor-version-only upgrade path simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Red Hat Quay 3.13: 3.10.z → 3.13.z, 3.11.z → 3.13.z, and 3.12.z → 3.13.z. For users wanting to upgrade the Red Hat Quay Operator, see Upgrading the Red Hat Quay Operator Overview. This document describes the steps needed to perform each individual upgrade. Determine your current version and then follow the steps in sequential order, starting with your current version and working up to your desired target version: Upgrade to 3.13.z from 3.12.z, Upgrade to 3.13.z from 3.11.z, Upgrade to 3.13.z from 3.10.z. See the Red Hat Quay Release Notes for information on features for individual releases. The general procedure for a manual upgrade consists of the following steps: Stop the Quay and Clair containers. Back up the database and image storage (optional but recommended). Start Clair using the new version of the image. Wait until Clair is ready to accept connections before starting the new version of Quay. 3.1. Accessing images Red Hat Quay images from version 3.4.0 and later are available from registry.redhat.io and registry.access.redhat.com, with authentication set up as described in Red Hat Container Registry Authentication. 3.2. Upgrading the Clair PostgreSQL database If you are upgrading Red Hat Quay to version 3.13, you must migrate your Clair PostgreSQL database from PostgreSQL version 13 to version 15. This requires bringing down your Clair PostgreSQL 13 database and running a migration script to initiate the process. Use the following procedure to upgrade your Clair PostgreSQL database from version 13 to version 15. Important Clair security scans might become temporarily disrupted after the migration procedure has succeeded. Procedure Stop the Red Hat Quay container by entering the following command: $ sudo podman stop <quay_container_name> Stop the Clair container by running the following command: $ sudo podman stop <clair_container_id> Run the following Podman process from SCLOrg's Data Migration procedure, which allows for data migration from a remote PostgreSQL server: $ sudo podman run -d --name <clair_migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=<container_ip_address> \ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword \ -v </host/data/directory:/var/lib/pgsql/data:Z> \ 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] registry.redhat.io/rhel8/postgresql-15 1 Insert a name for your Clair PostgreSQL 15 migration database. 2 Your new Clair PostgreSQL 15 database container IP address. It can be obtained by running the following command: sudo podman inspect -f "{{.NetworkSettings.IPAddress}}" postgresql-quay . 3 You must specify a different volume mount point than the one from your initial Clair PostgreSQL 13 deployment, and modify the access control lists for said directory.
For example: $ mkdir -p /host/data/clair-postgresql15-directory $ setfacl -m u:26:-wx /host/data/clair-postgresql15-directory This prevents data from being overwritten by the new container. Stop the Clair PostgreSQL 13 container: $ sudo podman stop <clair_postgresql13_container_name> After completing the PostgreSQL migration, run the Clair PostgreSQL 15 container, using the new data volume mount from Step 3, for example, </host/data/clair-postgresql15-directory:/var/lib/postgresql/data> : $ sudo podman run -d --rm --name <postgresql15-clairv4> \ -e POSTGRESQL_USER=<clair_username> \ -e POSTGRESQL_PASSWORD=<clair_password> \ -e POSTGRESQL_DATABASE=<clair_database_name> \ -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> \ -p 5433:5432 \ -v </host/data/clair-postgresql15-directory:/var/lib/postgresql/data:Z> \ registry.redhat.io/rhel8/postgresql-15 Start the Red Hat Quay container by entering the following command: $ sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay \ -v /home/<quay_user>/quay-poc/config:/conf/stack:Z \ -v /home/<quay_user>/quay-poc/storage:/datastorage:Z \ {productrepo}/{quayimage}:{productminv} Start the Clair container by entering the following command: $ sudo podman run -d --name clairv4 \ -p 8081:8081 -p 8088:8088 \ -e CLAIR_CONF=/clair/config.yaml \ -e CLAIR_MODE=combo \ registry.redhat.io/quay/clair-rhel8:{productminv} For more information, see Data Migration. 3.3. Upgrade to 3.13.z from 3.12.z 3.3.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.13.3 Clair: registry.redhat.io/quay/clair-rhel8:v3.13.3 PostgreSQL: registry.redhat.io/rhel8/postgresql-13 Redis: registry.redhat.io/rhel8/redis-6:1-110 Clair-PostgreSQL: registry.redhat.io/rhel8/postgresql-15 3.4. Upgrade to 3.13.z from 3.11.z 3.4.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.13.3 Clair: registry.redhat.io/quay/clair-rhel8:v3.13.3 PostgreSQL: registry.redhat.io/rhel8/postgresql-13 Redis: registry.redhat.io/rhel8/redis-6:1-110 Clair-PostgreSQL: registry.redhat.io/rhel8/postgresql-15 3.5. Upgrade to 3.13.z from 3.10.z 3.5.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.13.3 Clair: registry.redhat.io/quay/clair-rhel8:v3.13.3 PostgreSQL: registry.redhat.io/rhel8/postgresql-13 Redis: registry.redhat.io/rhel8/redis-6:1-110 Clair-PostgreSQL: registry.redhat.io/rhel8/postgresql-15
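The following is a minimal sketch of the general manual upgrade flow described above, using the 3.13 target images. The container names, database user, and backup paths are placeholders (not taken from the product documentation) and must be adapted to your deployment:
# Stop the Quay and Clair containers
$ sudo podman stop <quay_container_name>
$ sudo podman stop <clair_container_name>
# Back up the database and image storage (optional but recommended; paths and user are illustrative)
$ sudo podman exec <postgresql_container_name> pg_dumpall -U postgres > /backup/quay-db.sql
$ sudo cp -a /home/<quay_user>/quay-poc/storage /backup/quay-storage
# Pull the new target images
$ sudo podman pull registry.redhat.io/quay/clair-rhel8:v3.13.3
$ sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.13.3
# Start Clair with the new image, wait until it accepts connections, then start Quay
# (see the podman run commands earlier in this procedure)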
[ "sudo podman stop <quay_container_name>", "sudo podman stop <clair_container_id>", "sudo podman run -d --name <clair_migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=<container_ip_address> \\ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword -v </host/data/directory:/var/lib/pgsql/data:Z> \\ 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] registry.redhat.io/rhel8/postgresql-15", "mkdir -p /host/data/clair-postgresql15-directory", "setfacl -m u:26:-wx /host/data/clair-postgresql15-directory", "sudo podman stop <clair_postgresql13_container_name>", "sudo podman run -d --rm --name <postgresql15-clairv4> -e POSTGRESQL_USER=<clair_username> -e POSTGRESQL_PASSWORD=<clair_password> -e POSTGRESQL_DATABASE=<clair_database_name> -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> -p 5433:5432 -v </host/data/clair-postgresql15-directory:/var/lib/postgresql/data:Z> registry.redhat.io/rhel8/postgresql-15", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v /home/<quay_user>/quay-poc/config:/conf/stack:Z -v /home/<quay_user>/quay-poc/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}", "sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo registry.redhat.io/quay/clair-rhel8:{productminv}" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/upgrade_red_hat_quay/standalone-upgrade
2.5. Command Line Interfaces
2.5. Command Line Interfaces This section discusses command-line utilities. 2.5.1. "pki" CLI The pki command-line interface (CLI) provides access to various services on the server using the REST interface (see the REST Interface section in the Red Hat Certificate System Planning, Installation, and Deployment Guide). The CLI can be invoked as follows: Note that the CLI options must be placed before the command, and the command parameters after the command. 2.5.1.1. pki CLI Initialization To use the command line interface for the first time, specify a new password and use the following command: This will create a new client NSS database in the ~/.dogtag/nssdb directory. The password must be specified in all CLI operations that use the client NSS database. Alternatively, if the password is stored in a file, you can specify the file using the -C option. For example: To import the CA certificate into the client NSS database, refer to the Importing a certificate into an NSS Database section in the Red Hat Certificate System Planning, Installation, and Deployment Guide. Some commands may require client certificate authentication. To import an existing client certificate and its key into the client NSS database, specify the PKCS #12 file and the password, and execute the following command: Execute the following command to extract the admin client certificate from the .p12 file: Validate and import the admin client certificate as described in the Managing Certificate/Key Crypto Token section in the Red Hat Certificate System Planning, Installation, and Deployment Guide: Important Make sure all intermediate certificates and the root CA certificate have been imported before importing the CA admin client certificate. To import an existing client certificate and its key into the client NSS database, specify the PKCS #12 file and the password, and execute the following command: Verify the client certificate with the following command: 2.5.1.2. Using "pki" CLI The command line interface supports a number of commands organized in a hierarchical structure. To list the top-level commands, execute the pki command without any additional commands or parameters: Some commands have subcommands. To list them, execute pki with the command name and no additional options. For example: To view command usage information, use the --help option: To view manual pages, specify the command line help command: To execute a command that does not require authentication, specify the command and its parameters (if required), for example: To execute a command that requires client certificate authentication, specify the certificate nickname, the client NSS database password, and optionally the server URL: For example: By default, the CLI communicates with the server at http://local_host_name:8080. To communicate with a server at a different location, specify the URL with the -U option, for example: 2.5.2. AtoB The AtoB utility decodes Base64-encoded certificates to their binary equivalents. For example: For further details, more options, and additional examples, see the AtoB (1) man page. 2.5.3. AuditVerify The AuditVerify utility verifies the integrity of the audit logs by validating the signature on log entries. Example: The example verifies the audit logs using the Log Signing Certificate ( -n ) in the ~jsmith/auditVerifyDir NSS database ( -d ). The list of logs to verify ( -a ) is in the ~jsmith/auditVerifyDir/logListFile file, comma-separated and ordered chronologically.
The prefix ( -P ) to prepend to the certificate and key database file names is empty. The output is verbose ( -v ). For further details, more options, and additional examples, see the AuditVerify (1) man page or Section 16.3.2, "Using Signed Audit Logs" . 2.5.4. BtoA The BtoA utility encodes binary data in Base64. For example: For further details, more options, and additional examples, see the BtoA (1) man page. 2.5.5. CMCRequest The CMCRequest utility creates a certificate issuance or revocation request. For example: Note All options to the CMCRequest utility are specified as part of the configuration file passed to the utility. See the CMCRequest (1) man page for configuration file options and further information. Also see 4.3. Requesting and Receiving Certificates Using CMC and Section 7.2.1, "Revoking a Certificate Using CMCRequest " . 2.5.6. CMCRevoke Legacy. Do not use. 2.5.7. CMCSharedToken The CMCSharedToken utility encrypts a user passphrase for shared-secret CMC requests. For example: The shared passphrase ( -s ) is encrypted and stored in the cmcSharedTok2.b64 file ( -o ) using the certificate named subsystemCert cert-pki-tomcat ( -n ) found in the NSS database in the current directory ( -d ). The default security token internal is used (as -h is not specified) and the token password of myNSSPassword is used for accessing the token. For further details, more options, and additional examples, see the CMCSharedToken (1) man page and also Section 7.2.1, "Revoking a Certificate Using CMCRequest " . 2.5.8. CRMFPopClient The CRMFPopClient utility is a Certificate Request Message Format (CRMF) client using NSS databases and supplying Proof of Possession. Example: This example creates a new CSR with the cn=subject_name subject DN ( -n ), the NSS database in the current directory ( -d ), the kra.transport certificate for transport ( -b ), and the AES/CBC/PKCS5Padding key wrap algorithm ( -w ); verbose output is specified ( -v ) and the resulting CSR is written to the /user_or_entity_database_directory/example.csr file ( -o ). For further details, more options, and additional examples, see the output of the CRMFPopClient --help command and also Section 7.2.1, "Revoking a Certificate Using CMCRequest " . 2.5.9. HttpClient The HttpClient utility is an NSS-aware HTTP client for submitting CMC requests. Example: Note All parameters to the HttpClient utility are stored in the request.cfg file. For further information, see the output of the HttpClient --help command. 2.5.10. OCSPClient An Online Certificate Status Protocol (OCSP) client for checking the certificate revocation status. Example: This example queries the server.example.com OCSP server ( -h ) on port 8080 ( -p ) to check whether the certificate signed by caSigningCert cert-pki-ca ( -c ) with serial number 2 ( --serial ) is valid. The NSS database in the /etc/pki/pki-tomcat/alias directory is used. For further details, more options, and additional examples, see the output of the OCSPClient --help command. 2.5.11. PKCS10Client The PKCS10Client utility creates a CSR in PKCS10 format for RSA and EC keys, optionally on an HSM. Example: This example creates a new RSA ( -a ) key with 2048 bits ( -l ) in the /etc/dirsrv/slapd-instance_name/ directory ( -d ) with database password password ( -p ). The output CSR is stored in the ~/ds.csr file ( -o ) and the certificate DN is CN=$HOSTNAME ( -n ). For further details, more options, and additional examples, see the PKCS10Client (1) man page. 2.5.12.
PrettyPrintCert The PrettyPrintCert utility displays the contents of a certificate in a human-readable format. Example: This command parses the ascii_data.cert file and displays its contents in a human-readable format. The output includes information like the signature algorithm, exponent, modulus, and certificate extensions. For further details, more options, and additional examples, see the PrettyPrintCert (1) man page. 2.5.13. PrettyPrintCrl The PrettyPrintCrl utility displays the content of a CRL file in a human-readable format. Example: This command parses the ascii_data.crl file and displays its contents in a human-readable format. The output includes information such as the revocation signature algorithm, the issuer of the revocation, and a list of revoked certificates and their reason. For further details, more options, and additional examples, see the PrettyPrintCrl (1) man page. 2.5.14. TokenInfo The TokenInfo utility lists all tokens in an NSS database. Example: This command lists all tokens (HSMs, soft tokens, and so on) registered in the specified database directory. For further details, more options, and additional examples, see the output of the TokenInfo command. 2.5.15. tkstool The tkstool utility is used to interact with the Token Key Service (TKS) subsystem. Example: This command creates a new master key ( -M ) named new_master ( -n ) in the /var/lib/pki/pki-tomcat/alias NSS database on the HSM token_name. For further details, more options, and additional examples, see the output of the tkstool -H command.
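As a quick orientation, the following sketch strings together the pki client workflow covered in this section; the password, nickname, and server URL are placeholders:
# Initialize the client NSS database on first use (creates ~/.dogtag/nssdb)
$ pki -c <password> client-init
# Run a command that does not require authentication
$ pki ca-cert-find
# Run a command that requires client certificate authentication against a specific server
$ pki -U https://server.example.com:8443 -n <nickname> -c <password> ca-user-find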
[ "pki [CLI options] <command> [command parameters]", "pki -c <password> client-init", "pki -C password_file client-init", "openssl pkcs12 -in file -clcerts -nodes -nokeys -out file.crt", "PKICertImport -d ~/.dogtag/nssdb -n \" nickname \" -t \",,\" -a -i file.crt -u C", "pki -c <password> pkcs12-import --pkcs12-file <file> --pkcs12-password <password>", "certutil -V -u C -n \" nickname \" -d ~/.dogtag/nssdb", "pki", "pki ca", "pki ca-cert", "pki --help", "pki ca-cert-find --help", "pki help", "pki help ca-cert-find", "pki ca-cert-find", "pki -U <server URL> -n <nickname> -c <password> <command> [command parameters]", "pki -n jsmith -c password ca-user-find", "pki -U https://server.example.com:8443 -n jsmith -c password ca-user-find", "AtoB input.ascii output.bin", "AuditVerify -d ~jsmith/auditVerifyDir -n Log Signing Certificate -a ~jsmith/auditVerifyDir/logListFile -P \"\" -v", "BtoA input.bin output.ascii", "CMCRequest example.cfg", "CMCSharedToken -d . -p myNSSPassword -s \"shared_passphrase\" -o cmcSharedTok2.b64 -n \"subsystemCert cert-pki-tomcat\"", "CRMFPopClient -d . -p password -n \"cn=subject_name\" -q POP_SUCCESS -b kra.transport -w \"AES/CBC/PKCS5Padding\" -t false -v -o /user_or_entity_database_directory/example.csr", "HttpClient request.cfg", "OCSPClient -h server.example.com -p 8080 -d /etc/pki/pki-tomcat/alias -c \"caSigningCert cert-pki-ca\" --serial 2", "PKCS10Client -d /etc/dirsrv/slapd-instance_name/ -p password -a rsa -l 2048 -o ~/ds.csr -n \"CN=USDHOSTNAME\"", "PrettyPrintCert ascii_data.cert", "PrettyPrintCrl ascii_data.crl", "TokenInfo ./nssdb/", "tkstool -M -n new_master -d /var/lib/pki/pki-tomcat/alias -h token_name" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/sect-interfaces-command-line
Chapter 33. Compiler and Tools
Chapter 33. Compiler and Tools Multiple bugs when booting from SAN over FCoE Multiple bugs have arisen from the current implementation of boot from Storage Area Network (SAN) using Fibre Channel over Ethernet (FCoE). Red Hat is targeting a future release of Red Hat Enterprise Linux 7 for the fixes for these bugs. For a list of the affected bugs and workarounds (where available), please contact your Red Hat support representative. Valgrind cannot run programs built against an earlier version of Open MPI Red Hat Enterprise Linux 7.2 supports only the Open MPI application binary interface (ABI) in version 1.10, which is incompatible with the previously shipped 1.6 version of the Open MPI ABI. As a consequence, programs that are built against the earlier version of Open MPI cannot be run under Valgrind included in Red Hat Enterprise Linux 7.2. To work around this problem, use the Red Hat Developer Toolset version of Valgrind for programs linked against Open MPI version 1.6. Synthetic functions generated by GCC confuse SystemTap A GCC optimization can generate synthetic functions for partially inlined copies of other functions. These synthetic functions look like first-class functions and confuse tools such as SystemTap and GDB because SystemTap probes can be placed on both synthetic and real function entry points. This can result in multiple SystemTap probe hits per a single underlying function call. To work around this problem, a SystemTap script may need to adopt countermeasures, such as detecting recursion and suppressing probes related to inlined partial functions. For example, the following script: probe kernel.function("can_nice").call { } could attempt to avoid the described problem as follows: global in_can_nice% probe kernel.function("can_nice").call { in_can_nice[tid()] ++; if (in_can_nice[tid()] > 1) { } /* real probe handler here */ } probe kernel.function("can_nice").return { in_can_nice[tid()] --; } Note that this script does not take into account all possible scenarios. It would not work as expected in case of, for example, missed kprobes or kretprobes, or genuine intended recursion. SELinux AVC generated when ABRT collects backtraces If the new, optional ABRT feature that allows collecting backtraces from crashed processes without the need to write a core-dump file to disk is enabled (using the CreateCoreBacktrace option in the /etc/abrt/plugins/CCpp.conf configuration file), an SELinux AVC message is generated when the abrt-hook-ccpp tool tries to use the sigchld access on a crashing process in order to get the list of functions on the process' stack. GDB keeps watchpoints active even after reporting them as hit In some cases, on the 64-bit ARM architecture, GDB can incorrectly keep watchpoints active even after reporting them as hit. This results in the watchpoints getting hit for the second time, only this time the hardware indication is no longer recognized as a watchpoint and is printed as a generic SIGTRAP signal instead. There are several ways to work around this problem and stop the excessive SIGTRAP reporting. * Type continue when seeing a SIGTRAP after a watchpoint has been hit. * Instruct GDB to ignore the SIGTRAP signal by adding the following line to your ~/.gdbinit configuration file: handle SIGTRAP nostop noprint * Use software watchpoints instead of their hardware equivalents. Note that the debugging is significantly slower with software watchpoints, and only the watch command is available (not rwatch or awatch ). 
Add the following line to your ~/.gdbinit configuration file: set can-use-hw-watchpoints 0 Booting fails using grubaa64.efi Due to issues in pxeboot or the PXE configuration file, installing Red Hat Enterprise Linux 7.2 using the 7.2 grubaa64.efi boot loader either fails or experiences significant delay in booting the operating system. As a workaround, use the 7.1 grubaa64.efi file instead of the 7.2 grubaa64.efi file when installing Red Hat Enterprise Linux 7.2. MPX feature in GCC requires Red Hat Developer Toolset version of the libmpx library The libmpxwrappers library is missing in the gcc-libraries version of the libmpx library. As a consequence, the Memory Protection Extensions (MPX) feature might not work correctly in GCC, and the application might not link properly. To work around this problem, use the Red Hat Developer Toolset 4.0 version of the libmpx library.
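If you want to make the GDB workarounds described above persistent, they can be appended to the ~/.gdbinit configuration file; note that the two settings are alternatives, so keep only the one you need:
$ cat >> ~/.gdbinit << 'EOF'
# Ignore the spurious SIGTRAP reported after a watchpoint hit
handle SIGTRAP nostop noprint
# Alternative: fall back to (slower) software watchpoints
set can-use-hw-watchpoints 0
EOF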
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/known-issues-compiler_and_tools
10.4. Starting Geo-replication
10.4. Starting Geo-replication This section describes how to start geo-replication in your storage environment, and verify that it is functioning correctly. Section 10.4.1, "Starting a Geo-replication Session" Section 10.4.2, "Verifying a Successful Geo-replication Deployment" Section 10.4.3, "Displaying Geo-replication Status Information" Section 10.4.4, "Configuring a Geo-replication Session" Section 10.4.5, "Stopping a Geo-replication Session" Section 10.4.6, "Deleting a Geo-replication Session" 10.4.1. Starting a Geo-replication Session Important You must create the geo-replication session before starting geo-replication. For more information, see Section 10.3.4.1, "Setting Up your Environment for Geo-replication Session" . To start geo-replication, use one of the following commands: To start the geo-replication session between the hosts: For example: This command will start distributed geo-replication on all the nodes that are part of the master volume. If a node that is part of the master volume is down, the command will still be successful. In a replica pair, the geo-replication session will be active on any of the replica nodes, but remain passive on the others. After executing the command, it may take a few minutes for the session to initialize and become stable. Note If you attempt to create a geo-replication session and the slave already has data, the following error message will be displayed: To start the geo-replication session forcefully between the hosts: For example: This command will force start geo-replication sessions on the nodes that are part of the master volume. If it is unable to successfully start the geo-replication session on any node which is online and part of the master volume, the command will still start the geo-replication sessions on as many nodes as it can. This command can also be used to re-start geo-replication sessions on the nodes where the session has died, or has not started. 10.4.2. Verifying a Successful Geo-replication Deployment You can use the status command to verify the status of geo-replication in your environment: For example: 10.4.3. Displaying Geo-replication Status Information The status command can be used to display information about a specific geo-replication master session, master-slave session, or all geo-replication sessions. The status output provides both node and brick level information. To display information about all geo-replication sessions, use the following command: To display information on all geo-replication sessions from a particular master volume, use the following command: To display information of a particular master-slave session, use the following command: Important There will be a mismatch between the outputs of the df command (including -h and -k ) and the inodes of the master and slave volumes when the data is in full sync. This is due to the extra inode and size consumption by the changelog journaling data, which keeps track of the changes done on the file system on the master volume. Instead of running the df command to verify the status of synchronization, run # gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL status detail .
The geo-replication status command output provides the following information: Master Node : Master node and Hostname as listed in the gluster volume info command output Master Vol : Master volume name Master Brick : The path of the brick Slave User : Slave user name Slave : Slave volume name Slave Node : IP address/hostname of the slave node to which master worker is connected to. Status : The status of the geo-replication worker can be one of the following: Initializing : This is the initial phase of the Geo-replication session; it remains in this state for a minute in order to make sure no abnormalities are present. Created : The geo-replication session is created, but not started. Active : The gsync daemon in this node is active and syncing the data. Passive : A replica pair of the active node. The data synchronization is handled by the active node. Hence, this node does not sync any data. Faulty : The geo-replication session has experienced a problem, and the issue needs to be investigated further. For more information, see Section 10.12, "Troubleshooting Geo-replication" section. Stopped : The geo-replication session has stopped, but has not been deleted. Crawl Status : Crawl status can be one of the following: Changelog Crawl: The changelog translator has produced the changelog and that is being consumed by gsyncd daemon to sync data. Hybrid Crawl : The gsyncd daemon is crawling the glusterFS file system and generating pseudo changelog to sync data. History Crawl : The gsyncd daemon consumes the history changelogs produced by the changelog translator to sync data. Last Synced : The last synced time. Entry : The number of pending entry (CREATE, MKDIR, RENAME, UNLINK etc) operations per session. Data : The number of Data operations pending per session. Meta : The number of Meta operations pending per session. Failures : The number of failures. If the failure count is more than zero, view the log files for errors in the Master bricks. Checkpoint Time : Displays the date and time of the checkpoint, if set. Otherwise, it displays as N/A. Checkpoint Completed : Displays the status of the checkpoint. Checkpoint Completion Time : Displays the completion time if Checkpoint is completed. Otherwise, it displays as N/A. 10.4.4. Configuring a Geo-replication Session To configure a geo-replication session, use the following command: For example: For example, to view the list of all option/value pairs: To delete a setting for a geo-replication config option, prefix the option with ! (exclamation mark). For example, to reset log-level to the default value: Warning You must ensure to perform these configuration changes when all the peers in cluster are in Connected (online) state. If you change the configuration when any of the peer is down, the geo-replication cluster would be in inconsistent state when the node comes back online. Configurable Options The following table provides an overview of the configurable options for a geo-replication setting: Option Description gluster_log_file LOGFILE The path to the geo-replication glusterfs log file. gluster_log_level LOGFILELEVEL The log level for glusterfs processes. log_file LOGFILE The path to the geo-replication log file. log_level LOGFILELEVEL The log level for geo-replication. changelog_log_level LOGFILELEVEL The log level for the changelog. The default log level is set to INFO. changelog_batch_size SIZEINBYTES The total size for the changelog in a batch. The default size is set to 727040 bytes. 
ssh_command COMMAND The SSH command to connect to the remote machine (the default is SSH ). sync_method NAME The command to use for setting synchronizing method for the files. The available options are rsync or tarssh . The default is rsync . The tarssh allows tar over Secure Shell protocol. Use tarssh option to handle workloads of files that have not undergone edits. Note On a RHEL 8.3 or above, before configuring the sync_method as _tarssh_ , make sure to install _tar_ package. volume_id= UID The command to delete the existing master UID for the intermediate/slave node. timeout SECONDS The timeout period in seconds. sync_jobs N The number of sync-jobs represents the maximum number of syncer threads (rsync processes or tar over ssh processes for syncing) inside each worker. The number of workers is always equal to the number of bricks in the Master volume. For example, a distributed-replicated volume of (3 x 2) with sync-jobs configured at 3 results in 9 total sync-jobs (aka threads) across all nodes/servers. Active and Passive Workers : The number of active workers is based on the volume configuration. In case of a distribute volume, all bricks (workers) will be active and participate in syncing. In case of replicate or dispersed volume, one worker from each replicate/disperse group (subvolume) will be active and participate in syncing. This is to avoid duplicate syncing from other bricks. The remaining workers in each replicate/disperse group (subvolume) will be passive. In case the active worker goes down, one of the passive worker from the same replicate/disperse group will become an active worker. ignore_deletes If this option is set to true , a file deleted on the master will not trigger a delete operation on the slave. As a result, the slave will remain as a superset of the master and can be used to recover the master in the event of a crash and/or accidental delete. If this option is set to false , which is the default config option for ignore-deletes , a file deleted on the master will trigger a delete operation on the slave. checkpoint [LABEL|now] Sets a checkpoint with the given option LABEL . If the option is set as now , then the current time will be used as the label. sync_acls [true | false] Syncs acls to the Slave cluster. By default, this option is enabled. Note Geo-replication can sync acls only with rsync as the sync engine and not with tarssh as the sync engine. sync_xattrs [true | false] Syncs extended attributes to the Slave cluster. By default, this option is enabled. Note Geo-replication can sync extended attributes only with rsync as the sync engine and not with tarssh as the sync engine. log_rsync_performance [true | false] If this option is set to enable , geo-replication starts recording the rsync performance in log files. By default, this option is disabled. rsync_options Additional options to rsync. For example, you can limit the rsync bandwidth usage "--bwlimit=<value>". use_meta_volume [true | false] Set this option to enable , to use meta volume in Geo-replication. By default, this option is disabled. Note For more information on meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . meta_volume_mnt PATH The path of the meta volume mount point. gfid_conflict_resolution [true | false] Auto GFID conflict resolution feature provides an ability to automatically detect and fix the GFID conflicts between master and slave. This configuration option provides an ability to enable or disable this feature. By default, this option is true . 
special_sync_mode Speeds up the recovery of data from the slave. Adds the capability for geo-replication to ignore the files created before the indexing option was enabled. Tunables for the failover or failback mechanism: None : gsyncd behaves as normal. blind : gsyncd works with xtime pairs to identify candidates for synchronization. wrapup : same as normal mode but does not assign xtimes to orphaned files. recover : files are only transferred if they are identified as changed on the slave. Note Use this mode after ensuring that the number of files in the slave is equal to that of the master. Geo-replication will synchronize only those files which are created after the Slave volume is made the Master volume. 10.4.4.1. Geo-replication Checkpoints 10.4.4.1.1. About Geo-replication Checkpoints Geo-replication data synchronization is an asynchronous process, so changes made on the master may take time to be replicated to the slaves. Data replication to a slave may also be interrupted by various issues, such as network outages. Red Hat Gluster Storage provides the ability to set geo-replication checkpoints. By setting a checkpoint, synchronization information is available on whether the data that was on the master at that point in time has been replicated to the slaves. 10.4.4.1.2. Configuring and Viewing Geo-replication Checkpoint Information To set a checkpoint on a geo-replication session, use the following command: For example, to set a checkpoint between Volume1 and storage.backup.com:/data/remote_dir : The label for a checkpoint can be set as the current time using now , or a particular label can be specified, as shown below: To display the status of a checkpoint for a geo-replication session, use the following command: To delete checkpoints for a geo-replication session, use the following command: For example, to delete the checkpoint set between Volume1 and storage.backup.com::slave-vol : 10.4.5. Stopping a Geo-replication Session To stop a geo-replication session for a root user, use one of the following commands: To stop a geo-replication session between the hosts: For example: Note The stop command will fail if: any node that is a part of the volume is offline; it is unable to stop the geo-replication session on any particular node; or the geo-replication session between the master and slave is not active. To stop a geo-replication session forcefully between the hosts: For example: Using force will stop the geo-replication session between the master and slave even if any node that is a part of the volume is offline. If it is unable to stop the geo-replication session on any particular node, the command will still stop the geo-replication sessions on as many nodes as it can. Using force will also stop inactive geo-replication sessions. To stop a geo-replication session for a non-root user, use the following command: 10.4.6. Deleting a Geo-replication Session Important You must first stop a geo-replication session before it can be deleted. For more information, see Section 10.4.5, "Stopping a Geo-replication Session" . To delete a geo-replication session for a root user, use the following command: reset-sync-time : The geo-replication delete command retains the information about the last synchronized time. Due to this, if the same geo-replication session is recreated, then the synchronization will continue from the time where it was left before deleting the session. For the geo-replication session to not maintain any details about the deleted session, use the reset-sync-time option with the delete command.
Now, when the session is recreated, it starts synchronization from the beginning just like a new session. For example: Note The delete command will fail if: any node that is a part of the volume is offline; it is unable to delete the geo-replication session on any particular node; or the geo-replication session between the master and slave is still active. Important The SSH keys will not be removed from the master and slave nodes when the geo-replication session is deleted. You can manually remove the pem files which contain the SSH keys from the /var/lib/glusterd/geo-replication/ directory. To delete a geo-replication session for a non-root user, use the following command:
[ "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL start", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol start Starting geo-replication session between Volume1 & storage.backup.com::slave-vol has been successful", "slave-node::slave is not empty. Please delete existing files in slave-node::slave and retry, or use force to continue without deleting the existing files. geo-replication command failed", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL start force", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol start force Starting geo-replication session between Volume1 & storage.backup.com::slave-vol has been successful", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL status", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol status", "gluster volume geo-replication status [detail]", "gluster volume geo-replication MASTER_VOL status [detail]", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL status [detail]", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL config [Name] [Value]", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol config sync_method rsync", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol config", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol config '!log-level'", "yum install tar", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL config checkpoint [now| LABEL ]", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol config checkpoint now geo-replication config updated successfully", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol config checkpoint NEW_ACCOUNTS_CREATED geo-replication config updated successfully.", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL status detail", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL config '!checkpoint'", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol config '!checkpoint' geo-replication config updated successfully", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol stop Stopping geo-replication session between Volume1 & storage.backup.com::slave-vol has been successful", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop force", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol stop force Stopping geo-replication session between Volume1 & storage.backup.com::slave-vol has been successful", "gluster volume geo-replication MASTER_VOL geoaccount@ SLAVE_HOST :: SLAVE_VOL stop", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL delete reset-sync-time", "gluster volume geo-replication Volume1 storage.backup.com::slave-vol delete geo-replication command executed successfully", "gluster volume geo-replication MASTER_VOL geoaccount@ SLAVE_HOST :: SLAVE_VOL delete reset-sync-time" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-starting_geo-replication
Preface
Preface You can install Red Hat Developer Hub on Amazon Elastic Kubernetes Service (EKS) using one of the following methods: The Red Hat Developer Hub Operator The Red Hat Developer Hub Helm chart
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_on_amazon_elastic_kubernetes_service/pr01
B.2. The URI Failed to Connect to the Hypervisor
B.2. The URI Failed to Connect to the Hypervisor Several different errors can occur when connecting to the server (for example, when running virsh ). B.2.1. Cannot read CA certificate Symptom When running a command, the following error (or similar) appears: Investigation The error message is misleading about the actual cause. This error can be caused by a variety of factors, such as an incorrectly specified URI, or a connection that is not configured. Solution Incorrectly specified URI When specifying qemu://system or qemu://session as a connection URI, virsh attempts to connect to host names system or session , respectively. This is because virsh recognizes the text after the second forward slash as the host. Use three forward slashes to connect to the local host. For example, specifying qemu:///system instructs virsh to connect to the system instance of libvirtd on the local host. When a host name is specified, the QEMU transport defaults to TLS . This results in TLS certificates being required. Connection is not configured The URI is correct (for example, qemu[+tls]://server/system ) but the certificates are not set up properly on your machine. For information on configuring TLS, see Setting up libvirt for TLS available from the libvirt website.
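For example, the difference between the two URI forms on the local host can be seen with a simple list command (the list command is only illustrative here):
# Correct: three slashes connect to the system instance of libvirtd on the local host
virsh -c qemu:///system list
# Incorrect: 'system' is parsed as a remote host name, the transport defaults to TLS,
# and the CA certificate error shown below appears if certificates are not configured
virsh -c qemu://system list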
[ "virsh -c name_of_uri list error: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory error: failed to connect to the hypervisor" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/App_Hypervisor_Connection_Fail
14.3.3. Creating SSH CA Certificate Signing Keys
14.3.3. Creating SSH CA Certificate Signing Keys Two types of certificates are required: host certificates and user certificates. It is considered better to have two separate keys for signing the two certificates, for example ca_user_key and ca_host_key ; however, it is possible to use just one CA key to sign both certificates. It is also easier to follow the procedures if separate keys are used, so the examples that follow will use separate keys. The basic format of the command to sign a user's public key to create a user certificate is as follows: ssh-keygen -s ca_user_key -I certificate_ID id_rsa.pub Where -s indicates the private key used to sign the certificate, -I indicates an identity string, the certificate_ID , which can be any alphanumeric value. It is stored as a zero-terminated string in the certificate. The certificate_ID is logged whenever the certificate is used for identification and it is also used when revoking a certificate. Having a long value would make logs hard to read, therefore using the host name for host certificates and the user name for user certificates is a safe choice. To sign a host's public key to create a host certificate, add the -h option: ssh-keygen -s ca_host_key -I certificate_ID -h ssh_host_rsa_key.pub Host keys are generated on the system by default. To list the keys, enter a command as follows: Important It is recommended to create and store CA keys in a safe place just as with any other private key. In these examples the root user will be used. In a real production environment using an offline computer with an administrative user account is recommended. For guidance on key lengths see NIST Special Publication 800-131A . Procedure 14.1. Generating SSH CA Certificate Signing Keys On the server designated to be the CA, generate two keys for use in signing certificates. These are the keys that all other hosts need to trust. Choose suitable names, for example ca_user_key and ca_host_key . To generate the user certificate signing key, enter the following command as root : Generate a host certificate signing key, ca_host_key , as follows: If required, confirm the permissions are correct: Create the CA server's own host certificate by signing the server's host public key together with an identification string such as the host name, the CA server's fully qualified domain name ( FQDN ) but without the trailing . , and a validity period. The command takes the following form: ssh-keygen -s ~/.ssh/ca_host_key -I certificate_ID -h -Z host_name.example.com -V -start:+end /etc/ssh/ssh_host_rsa.pub The -Z option restricts this certificate to a specific host within the domain. The -V option is for adding a validity period; this is highly recommended. Where the validity period is intended to be one year, fifty-two weeks, consider the need for time to change the certificates and any holiday periods around the time of certificate expiry. For example:
[ "~]# ls -l /etc/ssh/ssh_host* -rw-------. 1 root root 668 Jul 9 2014 /etc/ssh/ssh_host_dsa_key -rw-r--r--. 1 root root 590 Jul 9 2014 /etc/ssh/ssh_host_dsa_key.pub -rw-------. 1 root root 963 Jul 9 2014 /etc/ssh/ssh_host_key -rw-r--r--. 1 root root 627 Jul 9 2014 /etc/ssh/ssh_host_key.pub -rw-------. 1 root root 1671 Jul 9 2014 /etc/ssh/ssh_host_rsa_key -rw-r--r--. 1 root root 382 Jul 9 2014 /etc/ssh/ssh_host_rsa_key.pub", "~]# ssh-keygen -t rsa -f ~/.ssh/ca_user_key Generating public/private rsa key pair. Created directory '/root/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/ca_user_key. Your public key has been saved in /root/.ssh/ca_user_key.pub. The key fingerprint is: 11:14:2f:32:fd:5d:f5:e4:7a:5a:d6:b6:a0:62:c9:1f root@host_name.example.com The key's randomart image is: +--[ RSA 2048]----+ | .+. o| | . o +.| | o + . . o| | o + . . ..| | S . ... *| | . . . .*.| | = E .. | | . o . | | . | +-----------------+", "~]# ssh-keygen -t rsa -f ~/.ssh/ca_host_key Generating public/private rsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/ca_host_key. Your public key has been saved in /root/.ssh/ca_host_key.pub. The key fingerprint is: e4:d5:d1:4f:6b:fd:a2:e3:4e:5a:73:52:91:0b:b7:7a root@host_name.example.com The key's randomart image is: +--[ RSA 2048]----+ | .. | | . ....| | . . o +oo| | o . o *o| | S = .| | o. .| | *.E. | | +o= | | .oo. | +-----------------+", "~]# ls -la ~/.ssh total 40 drwxrwxrwx. 2 root root 4096 May 22 13:18 . dr-xr-x---. 3 root root 4096 May 8 08:34 .. -rw-------. 1 root root 1743 May 22 13:15 ca_host_key -rw-r--r--. 1 root root 420 May 22 13:15 ca_host_key.pub -rw-------. 1 root root 1743 May 22 13:14 ca_user_key -rw-r--r--. 1 root root 420 May 22 13:14 ca_user_key.pub -rw-r--r--. 1 root root 854 May 8 05:55 known_hosts -r--------. 1 root root 1671 May 6 17:13 ssh_host_rsa -rw-r--r--. 1 root root 1370 May 7 14:30 ssh_host_rsa-cert.pub -rw-------. 1 root root 420 May 6 17:13 ssh_host_rsa.pub", "~]# ssh-keygen -s ~/.ssh/ca_host_key -I host_name -h -Z host_name.example.com -V -1w:+54w5d /etc/ssh/ssh_host_rsa.pub Enter passphrase: Signed host key /root/.ssh/ssh_host_rsa-cert.pub: id \"host_name\" serial 0 for host_name.example.com valid from 2015-05-15T13:52:29 to 2016-06-08T13:52:29" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-creating_ssh_ca_certificate_signing-keys
Chapter 9. Examples
Chapter 9. Examples Use the following examples to understand how to launch a compute instance post-deployment with various network configurations. 9.1. Example 1: Launching an instance with one NIC on the project and provider networks Use this example to understand how to launch an instance with the private project network and the provider network after you deploy the all-in-one Red Hat OpenStack Platform environment. This example is based on a single NIC configuration and requires at least three IP addresses. Prerequisites To complete this example successfully, you must have the following IP addresses available in your environment: One IP address for the OpenStack services. One IP address for the virtual router to provide connectivity to the project network. This IP address is assigned automatically in this example. At least one IP address for floating IPs on the provider network. Procedure Create configuration helper variables: Create a basic flavor: Download CirrOS and create an OpenStack image: Configure SSH: Create a simple network security group: Configure the new network security group: Enable SSH: Enable ping: Enable DNS: Create Neutron networks: Create a virtual router: Create a floating IP: Launch the instance: Assign the floating IP: Replace FLOATING_IP with the address of the floating IP that you created in a previous step. Test SSH: Replace FLOATING_IP with the address of the floating IP that you created in a previous step. Network Architecture 9.2. Example 2: Launching an instance with one NIC on the provider network Use this example to understand how to launch an instance with the provider network after you deploy the all-in-one Red Hat OpenStack Platform environment. This example is based on a single NIC configuration and requires at least four IP addresses. Prerequisites To complete this example successfully, you must have the following IP addresses available in your environment: One IP address for the OpenStack services. One IP address for the virtual router to provide connectivity to the project network. This IP address is assigned automatically in this example. One IP address for DHCP on the provider network. At least one IP address for floating IPs on the provider network. Procedure Create configuration helper variables: Create a basic flavor: Download CirrOS and create an OpenStack image: Configure SSH: Create a simple network security group: Configure the new network security group: Enable SSH: Enable ping: Create Neutron networks: Launch the instance: Test SSH: Replace VM_IP with the address of the virtual machine that you created in the previous step. Network Architecture 9.3. Example 3: Launching an instance with two NICs on the project and provider networks Use this example to understand how to launch an instance with the private project network and the provider network after you deploy the all-in-one Red Hat OpenStack Platform environment. This example is based on a dual NIC configuration and requires at least four IP addresses on the provider network. Prerequisites One IP address for a gateway on the provider network. One IP address for OpenStack endpoints. One IP address for the virtual router to provide connectivity to the project network. This IP address is assigned automatically in this example. At least one IP address for floating IPs on the provider network.
Procedure Create configuration helper variables: Create a basic flavor: Download CirrOS and create an OpenStack image: Configure SSH: Create a simple network security group: Configure the new network security group: Enable SSH: Enable ping: Enable DNS: Create Neutron networks: Create a virtual router: Create a floating IP: Launch the instance: Assign the floating IP: Replace FLOATING_IP with the address of the floating IP that you created in a previous step. Test SSH: Replace FLOATING_IP with the address of the floating IP that you created in a previous step. Network Architecture
[ "standalone with project networking and provider networking export OS_CLOUD=standalone export GATEWAY=192.168.25.1 export STANDALONE_HOST=192.168.25.2 export PUBLIC_NETWORK_CIDR=192.168.25.0/24 export PRIVATE_NETWORK_CIDR=192.168.100.0/24 export PUBLIC_NET_START=192.168.25.4 export PUBLIC_NET_END=192.168.25.15 export DNS_SERVER=1.1.1.1", "openstack flavor create --ram 512 --disk 1 --vcpu 1 --public tiny", "wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.4.0-x86_64-disk.img", "ssh-keygen -m PEM -t rsa -b 2048 -f ~/.ssh/id_rsa_pem openstack keypair create --public-key ~/.ssh/id_rsa_pem.pub default", "openstack security group create basic", "openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0", "openstack security group rule create --protocol icmp basic", "openstack security group rule create --protocol udp --dst-port 53:53 basic", "openstack network create --external --provider-physical-network datacentre --provider-network-type flat public openstack network create --internal private openstack subnet create public-net --subnet-range USDPUBLIC_NETWORK_CIDR --no-dhcp --gateway USDGATEWAY --allocation-pool start=USDPUBLIC_NET_START,end=USDPUBLIC_NET_END --network public openstack subnet create private-net --subnet-range USDPRIVATE_NETWORK_CIDR --network private", "NOTE: In this case an IP will be automatically assigned from the allocation pool for the subnet. openstack router create vrouter openstack router set vrouter --external-gateway public openstack router add subnet vrouter private-net", "openstack floating ip create public", "openstack server create --flavor tiny --image cirros --key-name default --network private --security-group basic myserver", "openstack server add floating ip myserver <FLOATING_IP>", "ssh cirros@<FLOATING_IP>", "standalone with project networking and provider networking export OS_CLOUD=standalone export GATEWAY=192.168.25.1 export STANDALONE_HOST=192.168.25.2 export VROUTER_IP=192.168.25.3 export PUBLIC_NETWORK_CIDR=192.168.25.0/24 export PUBLIC_NET_START=192.168.25.4 export PUBLIC_NET_END=192.168.25.15 export DNS_SERVER=1.1.1.1", "openstack flavor create --ram 512 --disk 1 --vcpu 1 --public tiny", "wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.4.0-x86_64-disk.img", "ssh-keygen -m PEM -t rsa -b 2048 -f ~/.ssh/id_rsa_pem openstack keypair create --public-key ~/.ssh/id_rsa_pem.pub default", "openstack security group create basic", "openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0", "openstack security group rule create --protocol icmp basic", "openstack network create --external --provider-physical-network datacentre --provider-network-type flat public openstack network create --internal private openstack subnet create public-net --subnet-range USDPUBLIC_NETWORK_CIDR --gateway USDGATEWAY --allocation-pool start=USDPUBLIC_NET_START,end=USDPUBLIC_NET_END --network public --host-route destination=0.0.0.0/0,gateway=USDGATEWAY --dns-nameserver USDDNS_SERVER", "openstack server create --flavor tiny --image cirros --key-name default --network public --security-group basic myserver", "ssh cirros@<VM_IP>", "standalone with project networking and provider networking export OS_CLOUD=standalone export GATEWAY=192.168.25.1 export 
STANDALONE_HOST=192.168.0.2 export PUBLIC_NETWORK_CIDR=192.168.25.0/24 export PRIVATE_NETWORK_CIDR=192.168.100.0/24 export PUBLIC_NET_START=192.168.25.3 export PUBLIC_NET_END=192.168.25.254 export DNS_SERVER=1.1.1.1", "openstack flavor create --ram 512 --disk 1 --vcpu 1 --public tiny", "wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.4.0-x86_64-disk.img", "ssh-keygen -m PEM -t rsa -b 2048 -f ~/.ssh/id_rsa_pem openstack keypair create --public-key ~/.ssh/id_rsa_pem.pub default", "openstack security group create basic", "openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0", "openstack security group rule create --protocol icmp basic", "openstack security group rule create --protocol udp --dst-port 53:53 basic", "openstack network create --external --provider-physical-network datacentre --provider-network-type flat public openstack network create --internal private openstack subnet create public-net --subnet-range USDPUBLIC_NETWORK_CIDR --no-dhcp --gateway USDGATEWAY --allocation-pool start=USDPUBLIC_NET_START,end=USDPUBLIC_NET_END --network public openstack subnet create private-net --subnet-range USDPRIVATE_NETWORK_CIDR --network private", "NOTE: In this case an IP will be automatically assigned from the allocation pool for the subnet. openstack router create vrouter openstack router set vrouter --external-gateway public openstack router add subnet vrouter private-net", "openstack floating ip create public", "openstack server create --flavor tiny --image cirros --key-name default --network private --security-group basic myserver", "openstack server add floating ip myserver <FLOATING_IP>", "ssh cirros@<FLOATING_IP>" ]
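As a recap of the final verification steps from the examples above, once the networks and security group exist, launching and reaching the instance reduces to the following; myserver is the example instance name and <FLOATING_IP> is the floating IP created earlier:
# Launch the instance on the project network and attach the floating IP
openstack server create --flavor tiny --image cirros --key-name default --network private --security-group basic myserver
openstack server add floating ip myserver <FLOATING_IP>
# Confirm connectivity over SSH
ssh cirros@<FLOATING_IP>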
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/standalone_deployment_guide/examples
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/proc_providing-feedback-on-red-hat-documentation_using-image-mode-for-rhel-to-build-deploy-and-manage-operating-systems
Chapter 1. Features
Chapter 1. Features The features added in this release, and that were not in previous releases of AMQ Streams, are outlined below. Note To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project . 1.1. Kafka 2.7.0 support AMQ Streams now supports Apache Kafka version 2.7.0. AMQ Streams uses Kafka 2.7.0. Only Kafka distributions built by Red Hat are supported. For upgrade instructions, see AMQ Streams and Kafka upgrades . Refer to the Kafka 2.6.0 and Kafka 2.7.0 Release Notes for additional information. Note Kafka 2.6.x is supported only for the purpose of upgrading to AMQ Streams 1.7. For more information on supported versions, see the Red Hat Knowledgebase article Red Hat AMQ 7 Component Details Page . Kafka 2.7.0 requires the same ZooKeeper version as Kafka 2.6.x (ZooKeeper version 3.5.8). Therefore, you do not need to upgrade ZooKeeper when upgrading from AMQ Streams 1.6 to AMQ Streams 1.7.
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/release_notes_for_amq_streams_1.7_on_rhel/features-str
Chapter 264. Properties Component
Chapter 264. Properties Component Available as of Camel version 2.3 264.1. URI format Where key is the key for the property to lookup 264.2. Options The Properties component supports 17 options, which are listed below. Name Description Default Type locations (common) A list of locations to load properties. This option will override any default locations and only use the locations from this option. List location (common) A list of locations to load properties. You can use comma to separate multiple locations. This option will override any default locations and only use the locations from this option. String encoding (common) Encoding to use when loading properties file from the file system or classpath. If no encoding has been set, then the properties files is loaded using ISO-8859-1 encoding (latin-1) as documented by java.util.Properties#load(java.io.InputStream) String propertiesResolver (common) To use a custom PropertiesResolver PropertiesResolver propertiesParser (common) To use a custom PropertiesParser PropertiesParser cache (common) Whether or not to cache loaded properties. The default value is true. true boolean propertyPrefix (advanced) Optional prefix prepended to property names before resolution. String propertySuffix (advanced) Optional suffix appended to property names before resolution. String fallbackToUnaugmented Property (advanced) If true, first attempt resolution of property name augmented with propertyPrefix and propertySuffix before falling back the plain property name specified. If false, only the augmented property name is searched. true boolean defaultFallbackEnabled (common) If false, the component does not attempt to find a default for the key by looking after the colon separator. true boolean ignoreMissingLocation (common) Whether to silently ignore if a location cannot be located, such as a properties file not found. false boolean prefixToken (advanced) Sets the value of the prefix token used to identify properties to replace. Setting a value of null restores the default token (link DEFAULT_PREFIX_TOKEN). {{ String suffixToken (advanced) Sets the value of the suffix token used to identify properties to replace. Setting a value of null restores the default token (link DEFAULT_SUFFIX_TOKEN). }} String initialProperties (advanced) Sets initial properties which will be used before any locations are resolved. Properties overrideProperties (advanced) Sets a special list of override properties that take precedence and will use first, if a property exist. Properties systemPropertiesMode (common) Sets the system property mode. 2 int resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Properties endpoint is configured using URI syntax: with the following path and query parameters: 264.2.1. Path Parameters (1 parameters): Name Description Default Type key Required Property key to use as placeholder String 264.2.2. Query Parameters (6 parameters): Name Description Default Type ignoreMissingLocation (common) Whether to silently ignore if a location cannot be located, such as a properties file not found. false boolean locations (common) A list of locations to load properties. You can use comma to separate multiple locations. This option will override any default locations and only use the locations from this option. 
String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN/ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions, that will be logged at WARN/ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the default exchange pattern when creating an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean Tip Resolving property from Java code You can use the method resolvePropertyPlaceholders on the CamelContext to resolve a property from any Java code. 264.3. Using PropertyPlaceholder Available as of Camel 2.3 Camel now provides a new PropertiesComponent in camel-core which allows you to use property placeholders when defining Camel Endpoint URIs. This works much like you would do if using Spring's <property-placeholder> tag. However Spring have a limitation which prevents 3rd party frameworks to leverage Spring property placeholders to the fullest. See more at How do I use Spring Property Placeholder with Camel XML . Tip Bridging Spring and Camel property placeholders From Camel 2.10 onwards, you can bridge the Spring property placeholder with Camel, see further below for more details. The property placeholder is generally in use when doing: lookup or creating endpoints lookup of beans in the Registry additional supported in Spring XML (see below in examples) using Blueprint PropertyPlaceholder with Camel Properties component using @PropertyInject to inject a property in a POJO Camel 2.14.1 Using default value if a property does not exists Camel 2.14.1 Include out of the box functions, to lookup property values from OS environment variables, JVM system properties, or the service idiom. Camel 2.14.1 Using custom functions, which can be plugged into the property component. 264.4. Syntax The syntax to use Camel's property placeholder is to use {{key}} for example {{file.uri}} where file.uri is the property key. You can use property placeholders in parts of the endpoint URI's which for example you can use placeholders for parameters in the URIs. From Camel 2.14.1 onwards you can specify a default value to use if a property with the key does not exists, eg file.url:/some/path where the default value is the text after the colon (eg /some/path). Note Do not use colon in the property key. The colon is used as a separator token when you are providing a default value, which is supported from Camel 2.14.1 onwards. 264.5. PropertyResolver Camel provides a pluggable mechanism which allows you part to provide your own resolver to lookup properties. Camel provides a default implementation org.apache.camel.component.properties.DefaultPropertiesResolver which is capable of loading properties from the file system, classpath or Registry. 
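As a concrete illustration of the Tip above (resolving a property from Java code) and of the default resolver at work, the following minimal sketch configures a classpath location and resolves a placeholder programmatically. It assumes a Camel 2.x runtime and an example file com/mycompany/myprop.properties containing cool.end=mock:result; neither the file name nor the key is part of the original examples.

import org.apache.camel.CamelContext;
import org.apache.camel.component.properties.PropertiesComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class ResolveFromJava {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Register the properties component with a classpath location (an assumed example file)
        PropertiesComponent pc = new PropertiesComponent();
        pc.setLocation("classpath:com/mycompany/myprop.properties");
        context.addComponent("properties", pc);

        context.start();
        // Resolve the placeholder from plain Java code, as described in the Tip above
        String resolved = context.resolvePropertyPlaceholders("{{cool.end}}");
        System.out.println("cool.end resolves to " + resolved);
        context.stop();
    }
}

The classpath: prefix used in the location above is one of several supported prefixes, which are listed next.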
You can prefix the locations with either: ref: Camel 2.4: to lookup in the Registry file: to load from the file system classpath: to load from classpath (this is also the default if no prefix is provided) blueprint: Camel 2.7: to use a specific OSGi blueprint placeholder service 264.6. Defining location The PropertiesResolver need to know a location(s) where to resolve the properties. You can define 1 to many locations. If you define the location in a single String property you can separate multiple locations with comma such as: pc.setLocation("com/mycompany/myprop.properties,com/mycompany/other.properties"); Available as of Camel 2.19.0 You can set which location can be discarded if missing by by setting the optional attribute, which is false by default, i.e: pc.setLocations( "com/mycompany/override.properties;optional=true" "com/mycompany/defaults.properties"); 264.7. Using system and environment variables in locations Available as of Camel 2.7 The location now supports using placeholders for JVM system properties and OS environments variables. For example: In the location above we defined a location using the file scheme using the JVM system property with key karaf.home . To use an OS environment variable instead you would have to prefix with env: Where APP_HOME is an OS environment. You can have multiple placeholders in the same location, such as: #=== Using system and environment variables to configure property prefixes and suffixes Available as of Camel 2.12.5, 2.13.3, 2.14.0 propertyPrefix , propertySuffix configuration properties support using placeholders for JVM system properties and OS environments variables. For example. if PropertiesComponent is configured with the following properties file: Then with the following route definition: PropertiesComponent pc = context.getComponent("properties", PropertiesComponent.class); pc.setPropertyPrefix("USD{stage}."); // ... context.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("properties:mock:{{endpoint}}"); } }); it is possible to change the target endpoint by changing system property stage either to dev (the message will be routed to mock:result1 ) or test (the message will be routed to mock:result2 ). 264.8. Configuring in Java DSL You have to create and register the PropertiesComponent under the name properties such as: PropertiesComponent pc = new PropertiesComponent(); pc.setLocation("classpath:com/mycompany/myprop.properties"); context.addComponent("properties", pc); 264.9. Configuring in Spring XML Spring XML offers two variations to configure. You can define a spring bean as a PropertiesComponent which resembles the way done in Java DSL. Or you can use the <propertyPlaceholder> tag. 
<bean id="properties" class="org.apache.camel.component.properties.PropertiesComponent"> <property name="location" value="classpath:com/mycompany/myprop.properties"/> </bean> Using the <propertyPlaceholder> tag makes the configuration a bit more fresh such as: <camelContext ...> <propertyPlaceholder id="properties" location="com/mycompany/myprop.properties"/> </camelContext> Setting the properties location through the location tag works just fine but sometime you have a number of resources to take into account and starting from Camel 2.19.0 you can set the properties location with a dedicated propertiesLocation: <camelContext ...> <propertyPlaceholder id="myPropertyPlaceholder"> <propertiesLocation resolver = "classpath" path = "com/my/company/something/my-properties-1.properties" optional = "false"/> <propertiesLocation resolver = "classpath" path = "com/my/company/something/my-properties-2.properties" optional = "false"/> <propertiesLocation resolver = "file" path = "USD{karaf.home}/etc/my-override.properties" optional = "true"/> </propertyPlaceholder> </camelContext> Tip Specifying the cache option inside XML Camel 2.10 onwards supports specifying a value for the cache option both inside the Spring as well as the Blueprint XML. 264.10. Using a Properties from the Registry Available as of Camel 2.4 For example in OSGi you may want to expose a service which returns the properties as a java.util.Properties object. Then you could setup the Properties component as follows: <propertyPlaceholder id="properties" location="ref:myProperties"/> Where myProperties is the id to use for lookup in the OSGi registry. Notice we use the ref: prefix to tell Camel that it should lookup the properties for the Registry. 264.11. Examples using properties component When using property placeholders in the endpoint URIs you can either use the properties: component or define the placeholders directly in the URI. We will show example of both cases, starting with the former. // properties cool.end=mock:result // route from("direct:start").to("properties:{{cool.end}}"); You can also use placeholders as a part of the endpoint uri: // properties cool.foo=result // route from("direct:start").to("properties:mock:{{cool.foo}}"); In the example above the to endpoint will be resolved to mock:result . You can also have properties with refer to each other such as: // properties cool.foo=result cool.concat=mock:{{cool.foo}} // route from("direct:start").to("properties:mock:{{cool.concat}}"); Notice how cool.concat refer to another property. The properties: component also offers you to override and provide a location in the given uri using the locations option: from("direct:start").to("properties:bar.end?locations=com/mycompany/bar.properties"); 264.12. Examples You can also use property placeholders directly in the endpoint uris without having to use properties: . // properties cool.foo=result // route from("direct:start").to("mock:{{cool.foo}}"); And you can use them in multiple wherever you want them: // properties cool.start=direct:start cool.showid=true cool.result=result // route from("{{cool.start}}") .to("log:{{cool.start}}?showBodyType=false&showExchangeId={{cool.showid}}") .to("mock:{{cool.result}}"); You can also your property placeholders when using ProducerTemplate for example: template.sendBody("{{cool.start}}", "Hello World"); 264.13. 
Example with Simple language The Simple language now also support using property placeholders, for example in the route below: // properties cheese.quote=Camel rocks // route from("direct:start") .transform().simple("Hi USD{body} do you think USD{properties:cheese.quote}?"); You can also specify the location in the Simple language for example: // bar.properties bar.quote=Beer tastes good // route from("direct:start") .transform().simple("Hi USD{body}. USD{properties:com/mycompany/bar.properties:bar.quote}."); 264.14. Additional property placeholder supported in Spring XML The property placeholders is also supported in many of the Camel Spring XML tags such as <package>, <packageScan>, <contextScan>, <jmxAgent>, <endpoint>, <routeBuilder>, <proxy> and the others. The example below has property placeholder in the <jmxAgent> tag: You can also define property placeholders in the various attributes on the <camelContext> tag such as trace as shown here: 264.15. Overriding a property setting using a JVM System Property Available as of Camel 2.5 It is possible to override a property value at runtime using a JVM System property without the need to restart the application to pick up the change. This may also be accomplished from the command line by creating a JVM System property of the same name as the property it replaces with a new value. An example of this is given below PropertiesComponent pc = context.getComponent("properties", PropertiesComponent.class); pc.setCache(false); System.setProperty("cool.end", "mock:override"); System.setProperty("cool.result", "override"); context.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("properties:cool.end"); from("direct:foo").to("properties:mock:{{cool.result}}"); } }); context.start(); getMockEndpoint("mock:override").expectedMessageCount(2); template.sendBody("direct:start", "Hello World"); template.sendBody("direct:foo", "Hello Foo"); System.clearProperty("cool.end"); System.clearProperty("cool.result"); assertMockEndpointsSatisfied(); 264.16. Using property placeholders for any kind of attribute in the XML DSL Available as of Camel 2.7 Note If you use OSGi Blueprint then this only works from 2.11.1 or 2.10.5 onwards. Previously it was only the xs:string type attributes in the XML DSL that support placeholders. For example often a timeout attribute would be a xs:int type and thus you cannot set a string value as the placeholder key. This is now possible from Camel 2.7 onwards using a special placeholder namespace. In the example below we use the prop prefix for the namespace http://camel.apache.org/schema/placeholder by which we can use the prop prefix in the attributes in the XML DSLs. Notice how we use that in the Multicast to indicate that the option stopOnException should be the value of the placeholder with the key "stop". In our properties file we have the value defined as 264.17. Using Blueprint property placeholder with Camel routes Available as of Camel 2.7 Camel supports Blueprint which also offers a property placeholder service. 
Camel supports convention over configuration, so all you have to do is to define the OSGi Blueprint property placeholder in the XML file as shown below: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0" xsi:schemaLocation=" http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd"> <!-- OSGI blueprint property placeholder --> <cm:property-placeholder id="myblueprint.placeholder" persistent-id="camel.blueprint"> <!-- list some properties as needed --> <cm:default-properties> <cm:property name="result" value="mock:result"/> </cm:default-properties> </cm:property-placeholder> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <!-- in the route we can use {{ }} placeholders which will lookup in blueprint as Camel will auto detect the OSGi blueprint property placeholder and use it --> <route> <from uri="direct:start"/> <to uri="mock:foo"/> <to uri="{{result}}"/> </route> </camelContext> </blueprint> 264.17.1. Using OSGi blueprint property placeholders in Camel routes By default Camel detects and uses OSGi blueprint property placeholder service. You can disable this by setting the attribute useBlueprintPropertyResolver to false on the <camelContext> definition. 264.17.2. About placeholder syntax Notice how we can use the Camel syntax for placeholders {{ and }} in the Camel route, which will lookup the value from OSGi blueprint. The blueprint syntax for placeholders is USD{ } . So outside the <camelContext> you must use the USD{ } syntax. Where as inside <camelContext> you must use {{ and }} syntax. OSGi blueprint allows you to configure the syntax, so you can actually align those if you want. You can also explicit refer to a specific OSGi blueprint property placeholder by its id. For that you need to use the Camel's <propertyPlaceholder> as shown in the example below: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0" xsi:schemaLocation=" http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd"> <!-- OSGI blueprint property placeholder --> <cm:property-placeholder id="myblueprint.placeholder" persistent-id="camel.blueprint"> <!-- list some properties as needed --> <cm:default-properties> <cm:property name="prefix.result" value="mock:result"/> </cm:default-properties> </cm:property-placeholder> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <!-- using Camel properties component and refer to the blueprint property placeholder by its id --> <propertyPlaceholder id="properties" location="blueprint:myblueprint.placeholder" prefixToken="[[" suffixToken="]]" propertyPrefix="prefix."/> <!-- in the route we can use {{ }} placeholders which will lookup in blueprint --> <route> <from uri="direct:start"/> <to uri="mock:foo"/> <to uri="[[result]]"/> </route> </camelContext> </blueprint> 264.18. Explicit referring to a OSGi blueprint placeholder in Camel Notice how we use the blueprint scheme to refer to the OSGi blueprint placeholder by its id. This allows you to mix and match, for example you can also have additional schemes in the location. For example to load a file from the classpath you can do: Each location is separated by comma. 264.19. 
Overriding Blueprint property placeholders outside CamelContext Available as of Camel 2.10.4 When using Blueprint property placeholder in the Blueprint XML file, you can declare the properties directly in the XML file as shown below: Notice that we have a <bean> which refers to one of the properties. And in the Camel route we refer to the other using the {{ and }} notation. Now if you want to override these Blueprint properties from an unit test, you can do this as shown below: To do this we override and implement the useOverridePropertiesWithConfigAdmin method. We can then put the properties we want to override on the given props parameter. And the return value must be the persistence-id of the <cm:property-placeholder> tag, which you define in the blueprint XML file. 264.20. Using .cfg or .properties file for Blueprint property placeholders Available as of Camel 2.10.4 When using Blueprint property placeholder in the Blueprint XML file, you can declare the properties in a .properties or .cfg file. If you use Apache ServieMix / Karaf then this container has a convention that it loads the properties from a file in the etc directory with the naming etc/pid.cfg , where pid is the persistence-id . For example in the blueprint XML file we have the persistence-id="stuff" , which mean it will load the configuration file as etc/stuff.cfg . Now if you want to unit test this blueprint XML file, then you can override the loadConfigAdminConfigurationFile and tell Camel which file to load as shown below: Notice that this method requires to return a String[] with 2 values. The 1st value is the path for the configuration file to load. The 2nd value is the persistence-id of the <cm:property-placeholder> tag. The stuff.cfg file is just a plain properties file with the property placeholders such as: 264.21. Using .cfg file and overriding properties for Blueprint property placeholders You can do both as well. Here is a complete example. First we have the Blueprint XML file: And in the unit test class we do as follows: And the etc/stuff.cfg configuration file contains 264.22. Bridging Spring and Camel property placeholders Available as of Camel 2.10 The Spring Framework does not allow 3rd party frameworks such as Apache Camel to seamless hook into the Spring property placeholder mechanism. However you can easily bridge Spring and Camel by declaring a Spring bean with the type org.apache.camel.spring.spi.BridgePropertyPlaceholderConfigurer , which is a Spring org.springframework.beans.factory.config.PropertyPlaceholderConfigurer type. To bridge Spring and Camel you must define a single bean as shown below: Bridging Spring and Camel property placeholders You must not use the spring <context:property-placeholder> namespace at the same time; this is not possible. After declaring this bean, you can define property placeholders using both the Spring style, and the Camel style within the <camelContext> tag as shown below: Using bridge property placeholders Notice how the hello bean is using pure Spring property placeholders using the USD{ } notation. And in the Camel routes we use the Camel placeholder notation with {{ and }} . 264.23. Clashing Spring property placeholders with Camels Simple language Take notice when using Spring bridging placeholder then the spring USD{ } syntax clashes with the Simple in Camel, and therefore take care. 
For example: <setHeader headerName="Exchange.FILE_NAME"> <simple>{{file.rootdir}}/USD{in.header.CamelFileName}</simple> </setHeader> clashes with Spring property placeholders, and you should use USDsimple{ } to indicate using the Simple language in Camel. <setHeader headerName="Exchange.FILE_NAME"> <simple>{{file.rootdir}}/USDsimple{in.header.CamelFileName}</simple> </setHeader> An alternative is to configure the PropertyPlaceholderConfigurer with ignoreUnresolvablePlaceholders option to true . 264.24. Overriding properties from Camel test kit Available as of Camel 2.10 When Testing with Camel and using the Properties component, you may want to be able to provide the properties to be used from directly within the unit test source code. This is now possible from Camel 2.10 onwards, as the Camel test kits, eg CamelTestSupport class offers the following methods useOverridePropertiesWithPropertiesComponent ignoreMissingLocationWithPropertiesComponent So for example in your unit test classes, you can override the useOverridePropertiesWithPropertiesComponent method and return a java.util.Properties that contains the properties which should be preferred to be used. 264.24.1. Providing properties from within unit test source This can be done from any of the Camel Test kits, such as camel-test, camel-test-spring, and camel-test-blueprint. The ignoreMissingLocationWithPropertiesComponent can be used to instruct Camel to ignore any locations which was not discoverable, for example if you run the unit test, in an environment that does not have access to the location of the properties. 264.25. Using @PropertyInject Available as of Camel 2.12 Camel allows to inject property placeholders in POJOs using the @PropertyInject annotation which can be set on fields and setter methods. For example you can use that with RouteBuilder classes, such as shown below: public class MyRouteBuilder extends RouteBuilder { @PropertyInject("hello") private String greeting; @Override public void configure() throws Exception { from("direct:start") .transform().constant(greeting) .to("{{result}}"); } } Notice we have annotated the greeting field with @PropertyInject and define it to use the key "hello" . Camel will then lookup the property with this key and inject its value, converted to a String type. You can also use multiple placeholders and text in the key, for example we can do: @PropertyInject("Hello {{name}} how are you?") private String greeting; This will lookup the placeholder with they key "name" . You can also add a default value if the key does not exists, such as: @PropertyInject(value = "myTimeout", defaultValue = "5000") private int timeout; 264.26. Using out of the box functions Available as of Camel 2.14.1 The Properties component includes the following functions out of the box env - A function to lookup the property from OS environment variables sys - A function to lookup the property from Java JVM system properties service - A function to lookup the property from OS environment variables using the service naming idiom service.name - Camel 2.16.1: A function to lookup the property from OS environment variables using the service naming idiom returning the hostname part only service.port - Camel 2.16.1: A function to lookup the property from OS environment variables using the service naming idiom returning the port part only As you can see these functions is intended to make it easy to lookup values from the environment. 
As they are provided out of the box, they can easily be used as shown below: <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="direct:start"/> <to uri="{`{env:SOMENAME}`}"/> <to uri="{`{sys:MyJvmPropertyName}`}"/> </route> </camelContext> You can use default values as well, so if the property does not exists, you can define a default value as shown below, where the default value is a log:foo and log:bar value. <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="direct:start"/> <to uri="{`{env:SOMENAME:log:foo}`}"/> <to uri="{`{sys:MyJvmPropertyName:log:bar}`}"/> </route> </camelContext> The service function is for looking up a service which is defined using OS environment variables using the service naming idiom, to refer to a service location using hostname : port NAME _SERVICE_HOST NAME _SERVICE_PORT in other words the service uses _SERVICE_HOST and _SERVICE_PORT as prefix. So if the service is named FOO, then the OS environment variables should be set as For example if the FOO service a remote HTTP service, then we can refer to the service in the Camel endpoint uri, and use the HTTP component to make the HTTP call: <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="direct:start"/> <to uri="http://{`{service:FOO}`}/myapp"/> </route> </camelContext> And we can use default values if the service has not been defined, for example to call a service on localhost, maybe for unit testing etc <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="direct:start"/> <to uri="http://{`{service:FOO:localhost:8080}`}/myapp"/> </route> </camelContext> 264.27. Using custom functions Available as of Camel 2.14.1 The Properties component allow to plugin 3rd party functions which can be used during parsing of the property placeholders. These functions are then able to do custom logic to resolve the placeholders, such as looking up in databases, do custom computations, or whatnot. The name of the function becomes the prefix used in the placeholder. This is best illustrated in the example code below <bean id="beerFunction" class="MyBeerFunction"/> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <propertyPlaceholder id="properties"> <propertiesFunction ref="beerFunction"/> </propertyPlaceholder> <route> <from uri="direct:start"/> <to uri="{`{beer:FOO}`}"/> <to uri="{`{beer:BAR}`}"/> </route> </camelContext> Note from camel 2.19.0 the location attribute (on propertyPlaceholder tag) is not more mandatory Here we have a Camel XML route where we have defined the <propertyPlaceholder> to use a custom function, which we refer to be the bean id - eg the beerFunction . As the beer function uses "beer" as its name, then the placeholder syntax can trigger the beer function by starting with beer:value . The implementation of the function is only two methods as shown below: public static final class MyBeerFunction implements PropertiesFunction { @Override public String getName() { return "beer"; } @Override public String apply(String remainder) { return "mock:" + remainder.toLowerCase(); } } The function must implement the org.apache.camel.component.properties.PropertiesFunction interface. The method getName is the name of the function, eg beer. And the apply method is where we implement the custom logic to do. As the sample code is from an unit test, it just returns a value to refer to a mock endpoint. 
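For reference, the same custom function can also be used from a Java DSL route. The sketch below is not part of the original example set; it assumes the MyBeerFunction shown above has already been registered with the properties component (registration from Java code is shown next).

import org.apache.camel.builder.RouteBuilder;

public class BeerFunctionRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // The beer:FOO and beer:BAR placeholders are resolved by the custom "beer" function,
        // which in this example maps them to mock:foo and mock:bar
        from("direct:start")
            .to("{{beer:FOO}}")
            .to("{{beer:BAR}}");
    }
}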
You can register a custom function from Java code as shown below: PropertiesComponent pc = context.getComponent("properties", PropertiesComponent.class); pc.addFunction(new MyBeerFunction()); 264.28. See Also Properties component Jasypt for using encrypted values (eg passwords) in the properties
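To round out the testing support described in section 264.24, the following sketch overrides useOverridePropertiesWithPropertiesComponent in a CamelTestSupport subclass. The route, property key and values are illustrative assumptions, and the sketch relies on the properties component no longer requiring a location (Camel 2.19.0 onwards, as noted earlier).

import java.util.Properties;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class OverridePropertiesTest extends CamelTestSupport {

    @Override
    protected Properties useOverridePropertiesWithPropertiesComponent() {
        // These values take precedence over anything loaded from configured locations
        Properties props = new Properties();
        props.put("cool.result", "result");
        return props;
    }

    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("direct:start").to("mock:{{cool.result}}");
            }
        };
    }

    @Test
    public void testOverride() throws Exception {
        getMockEndpoint("mock:result").expectedMessageCount(1);
        template.sendBody("direct:start", "Hello World");
        assertMockEndpointsSatisfied();
    }
}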
[ "properties:key[?options]", "properties:key", "pc.setLocation(\"com/mycompany/myprop.properties,com/mycompany/other.properties\");", "pc.setLocations( \"com/mycompany/override.properties;optional=true\" \"com/mycompany/defaults.properties\");", "location=file:USD{karaf.home}/etc/foo.properties", "location=file:USD{env:APP_HOME}/etc/foo.properties", "location=file:USD{env:APP_HOME}/etc/USD{prop.name}.properties", "dev.endpoint = result1 test.endpoint = result2", "PropertiesComponent pc = context.getComponent(\"properties\", PropertiesComponent.class); pc.setPropertyPrefix(\"USD{stage}.\"); // context.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"properties:mock:{{endpoint}}\"); } });", "PropertiesComponent pc = new PropertiesComponent(); pc.setLocation(\"classpath:com/mycompany/myprop.properties\"); context.addComponent(\"properties\", pc);", "<bean id=\"properties\" class=\"org.apache.camel.component.properties.PropertiesComponent\"> <property name=\"location\" value=\"classpath:com/mycompany/myprop.properties\"/> </bean>", "<camelContext ...> <propertyPlaceholder id=\"properties\" location=\"com/mycompany/myprop.properties\"/> </camelContext>", "<camelContext ...> <propertyPlaceholder id=\"myPropertyPlaceholder\"> <propertiesLocation resolver = \"classpath\" path = \"com/my/company/something/my-properties-1.properties\" optional = \"false\"/> <propertiesLocation resolver = \"classpath\" path = \"com/my/company/something/my-properties-2.properties\" optional = \"false\"/> <propertiesLocation resolver = \"file\" path = \"USD{karaf.home}/etc/my-override.properties\" optional = \"true\"/> </propertyPlaceholder> </camelContext>", "<propertyPlaceholder id=\"properties\" location=\"ref:myProperties\"/>", "// properties cool.end=mock:result // route from(\"direct:start\").to(\"properties:{{cool.end}}\");", "// properties cool.foo=result // route from(\"direct:start\").to(\"properties:mock:{{cool.foo}}\");", "// properties cool.foo=result cool.concat=mock:{{cool.foo}} // route from(\"direct:start\").to(\"properties:mock:{{cool.concat}}\");", "from(\"direct:start\").to(\"properties:bar.end?locations=com/mycompany/bar.properties\");", "// properties cool.foo=result // route from(\"direct:start\").to(\"mock:{{cool.foo}}\");", "// properties cool.start=direct:start cool.showid=true cool.result=result // route from(\"{{cool.start}}\") .to(\"log:{{cool.start}}?showBodyType=false&showExchangeId={{cool.showid}}\") .to(\"mock:{{cool.result}}\");", "template.sendBody(\"{{cool.start}}\", \"Hello World\");", "// properties cheese.quote=Camel rocks // route from(\"direct:start\") .transform().simple(\"Hi USD{body} do you think USD{properties:cheese.quote}?\");", "// bar.properties bar.quote=Beer tastes good // route from(\"direct:start\") .transform().simple(\"Hi USD{body}. 
USD{properties:com/mycompany/bar.properties:bar.quote}.\");", "PropertiesComponent pc = context.getComponent(\"properties\", PropertiesComponent.class); pc.setCache(false); System.setProperty(\"cool.end\", \"mock:override\"); System.setProperty(\"cool.result\", \"override\"); context.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"properties:cool.end\"); from(\"direct:foo\").to(\"properties:mock:{{cool.result}}\"); } }); context.start(); getMockEndpoint(\"mock:override\").expectedMessageCount(2); template.sendBody(\"direct:start\", \"Hello World\"); template.sendBody(\"direct:foo\", \"Hello Foo\"); System.clearProperty(\"cool.end\"); System.clearProperty(\"cool.result\"); assertMockEndpointsSatisfied();", "stop=true", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\"> <!-- OSGI blueprint property placeholder --> <cm:property-placeholder id=\"myblueprint.placeholder\" persistent-id=\"camel.blueprint\"> <!-- list some properties as needed --> <cm:default-properties> <cm:property name=\"result\" value=\"mock:result\"/> </cm:default-properties> </cm:property-placeholder> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <!-- in the route we can use {{ }} placeholders which will lookup in blueprint as Camel will auto detect the OSGi blueprint property placeholder and use it --> <route> <from uri=\"direct:start\"/> <to uri=\"mock:foo\"/> <to uri=\"{{result}}\"/> </route> </camelContext> </blueprint>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\"> <!-- OSGI blueprint property placeholder --> <cm:property-placeholder id=\"myblueprint.placeholder\" persistent-id=\"camel.blueprint\"> <!-- list some properties as needed --> <cm:default-properties> <cm:property name=\"prefix.result\" value=\"mock:result\"/> </cm:default-properties> </cm:property-placeholder> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <!-- using Camel properties component and refer to the blueprint property placeholder by its id --> <propertyPlaceholder id=\"properties\" location=\"blueprint:myblueprint.placeholder\" prefixToken=\"[[\" suffixToken=\"]]\" propertyPrefix=\"prefix.\"/> <!-- in the route we can use {{ }} placeholders which will lookup in blueprint --> <route> <from uri=\"direct:start\"/> <to uri=\"mock:foo\"/> <to uri=\"[[result]]\"/> </route> </camelContext> </blueprint>", "location=\"blueprint:myblueprint.placeholder,classpath:myproperties.properties\"", "== this is a comment greeting=Bye", "greeting=Bye echo=Yay destination=mock:result", "<setHeader headerName=\"Exchange.FILE_NAME\"> <simple>{{file.rootdir}}/USD{in.header.CamelFileName}</simple> </setHeader>", "<setHeader headerName=\"Exchange.FILE_NAME\"> <simple>{{file.rootdir}}/USDsimple{in.header.CamelFileName}</simple> </setHeader>", "public class MyRouteBuilder extends RouteBuilder { @PropertyInject(\"hello\") private String greeting; @Override public void configure() throws Exception { from(\"direct:start\") 
.transform().constant(greeting) .to(\"{{result}}\"); } }", "@PropertyInject(\"Hello {{name}} how are you?\") private String greeting;", "@PropertyInject(value = \"myTimeout\", defaultValue = \"5000\") private int timeout;", "<camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"direct:start\"/> <to uri=\"{`{env:SOMENAME}`}\"/> <to uri=\"{`{sys:MyJvmPropertyName}`}\"/> </route> </camelContext>", "<camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"direct:start\"/> <to uri=\"{`{env:SOMENAME:log:foo}`}\"/> <to uri=\"{`{sys:MyJvmPropertyName:log:bar}`}\"/> </route> </camelContext>", "export USDFOO_SERVICE_HOST=myserver export USDFOO_SERVICE_PORT=8888", "<camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"direct:start\"/> <to uri=\"http://{`{service:FOO}`}/myapp\"/> </route> </camelContext>", "<camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"direct:start\"/> <to uri=\"http://{`{service:FOO:localhost:8080}`}/myapp\"/> </route> </camelContext>", "<bean id=\"beerFunction\" class=\"MyBeerFunction\"/> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <propertyPlaceholder id=\"properties\"> <propertiesFunction ref=\"beerFunction\"/> </propertyPlaceholder> <route> <from uri=\"direct:start\"/> <to uri=\"{`{beer:FOO}`}\"/> <to uri=\"{`{beer:BAR}`}\"/> </route> </camelContext>", "public static final class MyBeerFunction implements PropertiesFunction { @Override public String getName() { return \"beer\"; } @Override public String apply(String remainder) { return \"mock:\" + remainder.toLowerCase(); } }", "PropertiesComponent pc = context.getComponent(\"properties\", PropertiesComponent.class); pc.addFunction(new MyBeerFunction());" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/properties-component
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/proc-providing-feedback-on-redhat-documentation
Part I. Planning How to Deploy Red Hat Certificate System
Part I. Planning How to Deploy Red Hat Certificate System This section provides an overview of Certificate System, including general PKI principles and specific features of Certificate System and its subsystems. Planning a deployment is vital to designing a PKI infrastructure that adequately meets the needs of your organization.
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/planning_how_to_deploy_rhcs
Chapter 100. YAML DSL
Chapter 100. YAML DSL Since Camel 3.9 The YAML DSL provides the capability to define your Camel routes, route templates & REST DSL configuration in YAML. 100.1. Defining a route A route is a collection of elements defined as follows: - from: 1 uri: "direct:start" steps: 2 - filter: expression: simple: "USD{in.header.continue} == true" steps: - to: uri: "log:filtered" - to: uri: "log:original" Where, 1 Route entry point, by default from and rest are supported. 2 Processing steps Note Each step represents a YAML map that has a single entry where the field name is the EIP name. As a general rule, each step provides all the parameters the related definition declares, but there are some minor differences/enhancements: Output Aware Steps Some steps, such as filter and split , have their own pipeline when an exchange matches the filter expression or for the items generated by the split expression. You can define these pipelines in the steps field: filter: expression: simple: "USD{in.header.continue} == true" steps: - to: uri: "log:filtered" Expression Aware Steps Some EIP, such as filter and split , supports the definition of an expression through the expression field: Explicit Expression field filter: expression: simple: "USD{in.header.continue} == true" To make the DSL less verbose, you can omit the expression field. Implicit Expression field filter: simple: "USD{in.header.continue} == true" In general, expressions can be defined inline, such as within the examples above but if you need provide more information, you can 'unroll' the expression definition and configure any single parameter the expression defines. Full Expression definition filter: tokenize: token: "<" end-token: ">" Data Format Aware Steps The EIP marshal and unmarshal supports the definition of data formats: marshal: json: library: Gson Note In case you want to use the data-format's default settings, you need to place an empty block as data format parameters, like json: {} 100.2. Defining endpoints To define an endpoint with the YAML DSL you have two options: Using a classic Camel URI: - from: uri: "timer:tick?period=1s" steps: - to: uri: "telegram:bots?authorizationToken=XXX" Using URI and parameters: - from: uri: "timer://tick" parameters: period: "1s" steps: - to: uri: "telegram:bots" parameters: authorizationToken: "XXX" 100.3. Defining beans In addition to the general support for creating beans provided by Camel Main , the YAML DSL provide a convenient syntax to define and configure them: - beans: - name: beanFromMap 1 type: com.acme.MyBean 2 properties: 3 foo: bar Where, 1 The name of the bean which will bound the instance to the Camel Registry. 2 The full qualified class name of the bean 3 The properties of the bean to be set The properties of the bean can be defined using either a map or properties style, as shown in the example below: - beans: # map style - name: beanFromMap type: com.acme.MyBean properties: field1: 'f1' field2: 'f2' nested: field1: 'nf1' field2: 'nf2' # properties style - name: beanFromProps type: com.acme.MyBean properties: field1: 'f1_p' field2: 'f2_p' nested.field1: 'nf1_p' nested.field2: 'nf2_p' Note The beans elements is only used as root element. 100.4. Configuring options on languages Some languages have additional configurations that you may need to use. For example, the JSONPath can be configured to ignore JSON parsing errors. This is intended when you use a Content Based Router and want to route the message to different endpoints. 
The JSON payload of the message can be in different forms, meaning that the JSonPath expressions in some cases would fail with an exception, and other times not. In this situation you must set suppress-exceptions to true, as shown below: - from: uri: "direct:start" steps: - choice: when: - jsonpath: expression: "person.middlename" suppress-exceptions: true steps: - to: "mock:middle" - jsonpath: expression: "person.lastname" suppress-exceptions: true steps: - to: "mock:last" otherwise: steps: - to: "mock:other" In the route above, the following message would have failed the JSonPath expression person.middlename because the JSON payload does not have a middlename field. To remedy this, we have suppressed the exception. { "person": { "firstname": "John", "lastname": "Doe" } } 100.5. External examples You can find a set of examples using main-yaml in Camel examples that demonstrate how to create the Camel Routes with YAML. You can also refer to Camel Kamelets where each Kamelet is defined using YAML.
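For comparison only, a rough Java DSL equivalent of the YAML filter route from section 100.1 is sketched below. It is not part of the YAML DSL documentation itself and assumes a standard Camel 3.x RouteBuilder on the classpath.

import org.apache.camel.builder.RouteBuilder;

public class FilterRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Mirrors the YAML example: filter on a header, log matching messages, then log the original
        from("direct:start")
            .filter(simple("${in.header.continue} == true"))
                .to("log:filtered")
            .end()
            .to("log:original");
    }
}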
[ "- from: 1 uri: \"direct:start\" steps: 2 - filter: expression: simple: \"USD{in.header.continue} == true\" steps: - to: uri: \"log:filtered\" - to: uri: \"log:original\"", "filter: expression: simple: \"USD{in.header.continue} == true\" steps: - to: uri: \"log:filtered\"", "filter: expression: simple: \"USD{in.header.continue} == true\"", "filter: simple: \"USD{in.header.continue} == true\"", "filter: tokenize: token: \"<\" end-token: \">\"", "marshal: json: library: Gson", "- from: uri: \"timer:tick?period=1s\" steps: - to: uri: \"telegram:bots?authorizationToken=XXX\"", "- from: uri: \"timer://tick\" parameters: period: \"1s\" steps: - to: uri: \"telegram:bots\" parameters: authorizationToken: \"XXX\"", "- beans: - name: beanFromMap 1 type: com.acme.MyBean 2 properties: 3 foo: bar", "- beans: # map style - name: beanFromMap type: com.acme.MyBean properties: field1: 'f1' field2: 'f2' nested: field1: 'nf1' field2: 'nf2' # properties style - name: beanFromProps type: com.acme.MyBean properties: field1: 'f1_p' field2: 'f2_p' nested.field1: 'nf1_p' nested.field2: 'nf2_p'", "- from: uri: \"direct:start\" steps: - choice: when: - jsonpath: expression: \"person.middlename\" suppress-exceptions: true steps: - to: \"mock:middle\" - jsonpath: expression: \"person.lastname\" suppress-exceptions: true steps: - to: \"mock:last\" otherwise: steps: - to: \"mock:other\"", "{ \"person\": { \"firstname\": \"John\", \"lastname\": \"Doe\" } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-yaml-dsl-component-starter
Chapter 10. Multiple regions and zones configuration for a cluster on vSphere
Chapter 10. Multiple regions and zones configuration for a cluster on vSphere As an administrator, you can specify multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. This configuration reduces the risk of a hardware failure or network outage causing your cluster to fail. A failure domain configuration lists parameters that create a topology. The following list states some of these parameters: computeCluster datacenter datastore networks resourcePool After you define multiple regions and zones for your OpenShift Container Platform cluster, you can create or migrate nodes to another failure domain. Important If you want to migrate pre-existing OpenShift Container Platform cluster compute nodes to a failure domain, you must define a new compute machine set for the compute node. This new machine set can scale up a compute node according to the topology of the failure domain, and scale down the pre-existing compute node. The cloud provider adds topology.kubernetes.io/zone and topology.kubernetes.io/region labels to any compute node provisioned by a machine set resource. For more information, see Creating a compute machine set . 10.1. Specifying multiple regions and zones for your cluster on vSphere You can configure the infrastructures.config.openshift.io configuration resource to specify multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. Topology-aware features for the cloud controller manager and the vSphere Container Storage Interface (CSI) Operator Driver require information about the vSphere topology where you host your OpenShift Container Platform cluster. This topology information exists in the infrastructures.config.openshift.io configuration resource. Before you specify regions and zones for your cluster, you must ensure that all datacenters and compute clusters contain tags, so that the cloud provider can add labels to your node. For example, if datacenter-1 represents region-a and compute-cluster-1 represents zone-1 , the cloud provider adds an openshift-region category label with a value of region-a to datacenter-1 . Additionally, the cloud provider adds an openshift-zone category tag with a value of zone-1 to compute-cluster-1 . Note You can migrate control plane nodes with vMotion capabilities to a failure domain. After you add these nodes to a failure domain, the cloud provider adds topology.kubernetes.io/zone and topology.kubernetes.io/region labels to these nodes. Prerequisites You created the openshift-region and openshift-zone tag categories on the vCenter server. You ensured that each datacenter and compute cluster contains tags that represent the name of their associated region or zone, or both. Optional: If you defined API and Ingress static IP addresses to the installation program, you must ensure that all regions and zones share a common layer 2 network. This configuration ensures that API and Ingress Virtual IP (VIP) addresses can interact with your cluster. Important If you do not supply tags to all datacenters and compute clusters before you create a node or migrate a node, the cloud provider cannot add the topology.kubernetes.io/zone and topology.kubernetes.io/region labels to the node. This means that services cannot route traffic to your node. 
Procedure Edit the infrastructures.config.openshift.io custom resource definition (CRD) of your cluster to specify multiple regions and zones in the failureDomains section of the resource by running the following command: USD oc edit infrastructures.config.openshift.io cluster Example infrastructures.config.openshift.io CRD for a instance named cluster with multiple regions and zones defined in its configuration spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_datacenter> - <region_b_datacenter> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: "</region_a_dc/host/zone_a_cluster>" resourcePool: "</region_a_dc/host/zone_a_cluster/Resources/resource_pool>" datastore: "</region_a_dc/datastore/datastore_a>" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {} Important After you create a failure domain and you define it in a CRD for a VMware vSphere cluster, you must not modify or delete the failure domain. Doing any of these actions with this configuration can impact the availability and fault tolerance of a control plane machine. Save the resource file to apply the changes. Additional resources Parameters for the cluster-wide infrastructure CRD 10.2. Enabling a multiple layer 2 network for your cluster You can configure your cluster to use a multiple layer 2 network configuration so that data transfer among nodes can span across multiple networks. Prerequisites You configured network connectivity among machines so that cluster components can communicate with each other. Procedure If you installed your cluster with installer-provisioned infrastructure, you must ensure that all control plane nodes share a common layer 2 network. Additionally, ensure compute nodes that are configured for Ingress pod scheduling share a common layer 2 network. If you need compute nodes to span multiple layer 2 networks, you can create infrastructure nodes that can host Ingress pods. If you need to provision workloads across additional layer 2 networks, you can create compute machine sets on vSphere and then move these workloads to your target layer 2 networks. If you installed your cluster on infrastructure that you provided, which is defined as a user-provisioned infrastructure, complete the following actions to meet your needs: Configure your API load balancer and network so that the load balancer can reach the API and Machine Config Server on the control plane nodes. Configure your Ingress load balancer and network so that the load balancer can reach the Ingress pods on the compute or infrastructure nodes. Additional resources Network connectivity requirements Creating infrastructure machine sets for production environments Creating a compute machine set 10.3. 
Parameters for the cluster-wide infrastructure CRD You must set values for specific parameters in the cluster-wide infrastructure, infrastructures.config.openshift.io , Custom Resource Definition (CRD) to define multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. The following table lists mandatory parameters for defining multiple regions and zones for your OpenShift Container Platform cluster: Parameter Description vcenters The vCenter server for your OpenShift Container Platform cluster. You can only specify one vCenter for your cluster. datacenters vCenter datacenters where VMs associated with the OpenShift Container Platform cluster will be created or presently exist. port The TCP port of the vCenter server. server The fully qualified domain name (FQDN) of the vCenter server. failureDomains The list of failure domains. name The name of the failure domain. region The value of the openshift-region tag assigned to the topology for the failure domain. zone The value of the openshift-zone tag assigned to the topology for the failure domain. topology The vCenter resources associated with the failure domain. datacenter The datacenter associated with the failure domain. computeCluster The full path of the compute cluster associated with the failure domain. resourcePool The full path of the resource pool associated with the failure domain. datastore The full path of the datastore associated with the failure domain. networks A list of port groups associated with the failure domain. Only one port group may be defined. Additional resources Specifying multiple regions and zones for your cluster on vSphere
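The topology data described above can also be read back programmatically, for example to verify that the failure domains were applied. The following sketch uses the official Kubernetes Java client; it is not part of the documented procedure (which uses oc edit), and the CustomObjectsApi call shape shown here is an assumption that can vary between client versions.

import java.util.List;
import java.util.Map;

import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.openapi.apis.CustomObjectsApi;
import io.kubernetes.client.util.Config;

public class ListFailureDomains {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) throws Exception {
        // Uses the local kubeconfig; assumes permission to read infrastructures.config.openshift.io
        ApiClient client = Config.defaultClient();
        Configuration.setDefaultApiClient(client);
        CustomObjectsApi api = new CustomObjectsApi();

        // The Infrastructure resource is cluster scoped and always named "cluster"
        Map<String, Object> infra = (Map<String, Object>) api.getClusterCustomObject(
                "config.openshift.io", "v1", "infrastructures", "cluster");

        Map<String, Object> spec = (Map<String, Object>) infra.get("spec");
        Map<String, Object> platformSpec = (Map<String, Object>) spec.get("platformSpec");
        Map<String, Object> vsphere = (Map<String, Object>) platformSpec.get("vsphere");
        List<Map<String, Object>> failureDomains = (List<Map<String, Object>>) vsphere.get("failureDomains");

        for (Map<String, Object> fd : failureDomains) {
            System.out.printf("failure domain %s: region=%s zone=%s%n",
                    fd.get("name"), fd.get("region"), fd.get("zone"));
        }
    }
}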
[ "oc edit infrastructures.config.openshift.io cluster", "spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_datacenter> - <region_b_datacenter> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: \"</region_a_dc/host/zone_a_cluster>\" resourcePool: \"</region_a_dc/host/zone_a_cluster/Resources/resource_pool>\" datastore: \"</region_a_dc/datastore/datastore_a>\" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_vsphere/post-install-vsphere-zones-regions-configuration
Chapter 4. Post Installation Configuration
Chapter 4. Post Installation Configuration Red Hat Enterprise Linux Atomic Host is configured using the configuration files in the /etc/ directory. This is similar to the way that Red Hat Enterprise Linux 7 is configured. Because Red Hat Enterprise Linux Atomic Host is a minimal server product that has no desktop, the graphical configuration tools found in the GUI are not available. 4.1. Configuring Networking If you did not configure networking during the installation, you can configure it post-installation using the nmcli tool. The following commands create a network connection called atomic , set up a host name, and then activate that connection. For more details on how to use the nmcli tool, see Section 2.3.2. Connecting to a Network Using nmcli in the Red Hat Enterprise Linux 7 Networking Guide. 4.2. Registering RHEL Atomic Host To enable software updates, you must register your Red Hat Enterprise Linux Atomic Host installation. This is done with the subscription-manager command as described below. The --name= option may be included if you wish to provide an easy-to-remember name to be used when reviewing subscription records. By default, subscription-manager assumes that your system has Internet access with the ability to reach Red Hat software repositories. If you don't have direct Internet access, you have a couple of options: HTTP proxy : If your system is located on a network that requires the use of an HTTP proxy, see the Red Hat Knowledge Base Article on configuring subscription manager to use an HTTP proxy . Internal ostree mirror : If your location has no Internet access (even via an HTTP proxy), you can mirror the official Red Hat Enterprise Linux Atomic Host ostree repository for use in a way that is local to your environment. A procedure for setting up such a mirror is available in Using Atomic Host as an internal ostree mirror . Note Red Hat Enterprise Linux Atomic Host works only with Red Hat Subscription Manager (RHSM). Red Hat Enterprise Linux Atomic Host does not work with RHN. Note Red Hat Enterprise Linux Atomic Host registers two product IDs. The first is Product ID 271, Red Hat Enterprise Linux Atomic Host. The second is Product ID 69, Red Hat Enterprise Linux Server. They both use the same entitlement. A properly registered system will display both IDs as shown below: The subscription-manager command is also documented in Registering from the Command Line of the Red Hat Subscription Management guide. 4.3. Managing User Accounts Currently, some system users that would be listed in the /etc/passwd file in Red Hat Enterprise Linux 7 have been relocated into the read-only /usr/lib/passwd file. Because applications on Red Hat Enterprise Linux Atomic Host are run inside Linux containers, this does not affect deployment. The traditional user management tools, such as useradd , write locally added users to the /etc/passwd file as expected.
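As a quick sanity check after configuring networking and registering the system, you can confirm that the connection is active and that the subscription status is current. This is a minimal, read-only sketch, assuming the connection name atomic used in the commands above; nothing here changes system state:

USD nmcli con show --active
USD hostnamectl status
USD sudo subscription-manager status

An active atomic connection, the expected host name, and an overall status of Current indicate that the networking and registration steps completed as described.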
[ "nmcli con add type ethernet con-name atomic ifname eth0 nmcli con modify my-office my-office ipv4.dhcp-hostname atomic ipv6.dhcp-hostname atomic nmcli con up atomic", "sudo subscription-manager register --username=<username> --auto-attach", "sudo subscription-manager list +-------------------------------------------+ Installed Product Status +-------------------------------------------+ Product Name: Red Hat Enterprise Linux Atomic Host Product ID: 271 Version: 7 Arch: x86_64 Status: Subscribed Status Details: Starts: 02/27/2015 Ends: 02/26/2016 Product Name: Red Hat Enterprise Linux Server Product ID: 69 Version: 7.1 Arch: x86_64 Status: Subscribed Status Details: Starts: 02/27/2015 Ends: 02/26/2016" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/installation_and_configuration_guide/post_installation_configuration
Chapter 3. Required IAM resources for STS clusters
Chapter 3. Required IAM resources for STS clusters To deploy a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS), you must create the following AWS Identity and Access Management (IAM) resources: Specific account-wide IAM roles and policies that provide the STS permissions required for ROSA support, installation, control plane, and compute functionality. This includes account-wide Operator policies. Cluster-specific Operator IAM roles that permit the ROSA cluster Operators to carry out core OpenShift functionality. An OpenID Connect (OIDC) provider that the cluster Operators use to authenticate. If you deploy and manage your cluster using OpenShift Cluster Manager, you must create the following additional resources: An OpenShift Cluster Manager IAM role to complete the installation on your cluster. A user role without any permissions to verify your AWS account identity. This document provides reference information about the IAM resources that you must deploy when you create a ROSA cluster that uses STS. It also includes the aws CLI commands that are generated when you use manual mode with the rosa create command. Additional resources Creating a ROSA cluster with STS using the default options Creating a ROSA cluster with STS using customizations 3.1. OpenShift Cluster Manager roles and permissions If you create ROSA clusters by using OpenShift Cluster Manager , you must have the following AWS IAM roles linked to your AWS account to create and manage the clusters. For more information about linking your IAM roles to your AWS account, see Associating your AWS account . These AWS IAM roles are as follows: The ROSA user role ( user-role ) is an AWS role used by Red Hat to verify the customer's AWS identity. This role has no additional permissions, and the role has a trust relationship with the Red Hat installer account. An ocm-role resource grants the required permissions for installation of ROSA clusters in OpenShift Cluster Manager. You can apply basic or administrative permissions to the ocm-role resource. If you create an administrative ocm-role resource, OpenShift Cluster Manager can create the needed AWS Operator roles and OpenID Connect (OIDC) provider. This IAM role also creates a trust relationship with the Red Hat installer account. Note The ocm-role IAM resource refers to the combination of the IAM role and the necessary policies created with it. You must create this user role as well as an administrative ocm-role resource if you want to use the auto mode in OpenShift Cluster Manager to create your Operator role policies and OIDC provider. 3.1.1. Understanding the OpenShift Cluster Manager role Creating ROSA clusters in OpenShift Cluster Manager requires an ocm-role IAM role. The basic ocm-role IAM role permissions let you perform cluster maintenance within OpenShift Cluster Manager. To automatically create the operator roles and OpenID Connect (OIDC) provider, you must add the --admin option to the rosa create command. This command creates an ocm-role resource with additional permissions needed for administrative tasks. Note This elevated IAM role allows OpenShift Cluster Manager to automatically create the cluster-specific Operator roles and OIDC provider during cluster creation. For more information about this automatic role and policy creation, see the "Methods of account-wide role creation" link in Additional resources. 3.1.1.1.
Understanding the user role In addition to an ocm-role IAM role, you must create a user role so that Red Hat OpenShift Service on AWS can verify your AWS identity. This role has no permissions, and it is only used to create a trust relationship between the installer account and your ocm-role resources. The following tables show the associated basic and administrative permissions for the ocm-role resource. Table 3.1. Associated permissions for the basic ocm-role resource Resource Description iam:GetOpenIDConnectProvider This permission allows the basic role to retrieve information about the specified OpenID Connect (OIDC) provider. iam:GetRole This permission allows the basic role to retrieve any information for a specified role. Some of the data returned include the role's path, GUID, ARN, and the role's trust policy that grants permission to assume the role. iam:ListRoles This permission allows the basic role to list the roles within a path prefix. iam:ListRoleTags This permission allows the basic role to list the tags on a specified role. ec2:DescribeRegions This permission allows the basic role to return information about all of the enabled regions on your account. ec2:DescribeRouteTables This permission allows the basic role to return information about all of your route tables. ec2:DescribeSubnets This permission allows the basic role to return information about all of your subnets. ec2:DescribeVpcs This permission allows the basic role to return information about all of your virtual private clouds (VPCs). sts:AssumeRole This permission allows the basic role to retrieve temporary security credentials to access AWS resources that are beyond its normal permissions. sts:AssumeRoleWithWebIdentity This permission allows the basic role to retrieve temporary security credentials for users authenticated their account with a web identity provider. Table 3.2. Additional permissions for the admin ocm-role resource Resource Description iam:AttachRolePolicy This permission allows the admin role to attach a specified policy to the desired IAM role. iam:CreateOpenIDConnectProvider This permission creates a resource that describes an identity provider, which supports OpenID Connect (OIDC). When you create an OIDC provider with this permission, this provider establishes a trust relationship between the provider and AWS. iam:CreateRole This permission allows the admin role to create a role for your AWS account. iam:ListPolicies This permission allows the admin role to list any policies associated with your AWS account. iam:ListPolicyTags This permission allows the admin role to list any tags on a designated policy. iam:PutRolePermissionsBoundary This permission allows the admin role to change the permissions boundary for a user based on a specified policy. iam:TagRole This permission allows the admin role to add tags to an IAM role. Additional resources Methods of account-wide role creation Creating an ocm-role IAM role You create your ocm-role IAM roles by using the command-line interface (CLI). Prerequisites You have an AWS account. You have Red Hat Organization Administrator privileges in the OpenShift Cluster Manager organization. You have the permissions required to install AWS account-wide roles. You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your installation host. 
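Before running the procedure that follows, it can be useful to check whether an ocm-role resource already exists and is linked to your Red Hat organization. This is a hedged, read-only sketch rather than part of the documented procedure, and it assumes a recent ROSA CLI version that provides these list subcommands:

USD rosa whoami
USD rosa list ocm-roles

If an OCM role is already listed and shows as linked, you can reuse it instead of creating a new one.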
Procedure To create an ocm-role IAM role with basic privileges, run the following command: USD rosa create ocm-role To create an ocm-role IAM role with admin privileges, run the following command: USD rosa create ocm-role --admin This command allows you to create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI ( rosa ) create your Operator roles and policies. See "Methods of account-wide role creation" for more information. Example output I: Creating ocm role ? Role prefix: ManagedOpenShift 1 ? Enable admin capabilities for the OCM role (optional): No 2 ? Permissions boundary ARN (optional): 3 ? Role Path (optional): 4 ? Role creation mode: auto 5 I: Creating role using 'arn:aws:iam::<ARN>:user/<UserName>' ? Create the 'ManagedOpenShift-OCM-Role-182' role? Yes 6 I: Created role 'ManagedOpenShift-OCM-Role-182' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' I: Linking OCM role ? OCM Role ARN: arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182 7 ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' role with organization '<AWS ARN>'? Yes 8 I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' with organization account '<AWS ARN>' 1 A prefix value for all of the created AWS resources. In this example, ManagedOpenShift prepends all of the AWS resources. 2 Choose whether you want this role to have the additional admin permissions. Note You do not see this prompt if you used the --admin option. 3 The Amazon Resource Name (ARN) of the policy to set permission boundaries. 4 Specify an IAM path for the user name. 5 Choose the method to create your AWS roles. Using auto , the ROSA CLI generates and links the roles and policies. In the auto mode, you receive some different prompts to create the AWS roles. 6 The auto method asks if you want to create a specific ocm-role using your prefix. 7 Confirm that you want to associate your IAM role with your OpenShift Cluster Manager. 8 Links the created role with your AWS organization. AWS IAM roles link to your AWS account to create and manage the clusters. For more information about linking your IAM roles to your AWS account, see Associating your AWS account . Additional resources AWS Identity and Access Management Data Types Amazon Elastic Compute Cloud Data Types AWS Security Token Service Data Types Methods of account-wide role creation 3.2. Account-wide IAM role and policy reference This section provides details about the account-wide IAM roles and policies that are required for ROSA deployments that use STS, including the Operator policies. It also includes the JSON files that define the policies. The account-wide roles and policies are specific to a Red Hat OpenShift Service on AWS minor release version, for example Red Hat OpenShift Service on AWS 4, and are compatible with earlier versions. You can minimize the required STS resources by reusing the account-wide roles and policies for multiple clusters of the same minor version, regardless of their patch version. 3.2.1. Methods of account-wide role creation You can create account-wide roles by using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , or the OpenShift Cluster Manager guided installation. You can create the roles manually or by using an automatic process that uses predefined names for these roles and policies.
Manual ocm-role resource creation You can use the manual creation method if you have the necessary CLI access to create these roles on your system. You can run this option in your desired CLI tool or from OpenShift Cluster Manager. After you start the manual creation process, the CLI presents a series of commands for you to run that create the roles and link them to the needed policies. Automatic ocm-role resource creation If you created an ocm-role resource with administrative permissions, you can use the automatic creation method from OpenShift Cluster Manager. The ROSA CLI does not require that you have this admin ocm-role IAM resource to automatically create these roles and policies. Selecting this method creates the roles and policies that use the default names. If you use the ROSA guided installation on OpenShift Cluster Manager, you must have created an ocm-role resource with administrative permissions in the first step of the guided cluster installation. Without this role, you cannot use the automatic Operator role and policy creation option, but you can still create the cluster and its roles and policies with the manual process. Note The account number present in the sts_installer_trust_policy.json and sts_support_trust_policy.json samples represents the Red Hat account that is allowed to assume the required roles. Table 3.3. ROSA installer role, policy, and policy files Resource Description ManagedOpenShift-Installer-Role An IAM role used by the ROSA installer. ManagedOpenShift-Installer-Role-Policy An IAM policy that provides the ROSA installer with the permissions required to complete cluster installation tasks. Example 3.1. sts_installer_trust_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::%{aws_account_id}:role/RH-Managed-OpenShift-Installer" ] }, "Action": [ "sts:AssumeRole" ] } ] } Example 3.2.
sts_installer_permission_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups", "ec2:AllocateAddress", "ec2:AssociateAddress", "ec2:AssociateDhcpOptions", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:AttachNetworkInterface", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:CopyImage", "ec2:CreateDhcpOptions", "ec2:CreateInternetGateway", "ec2:CreateNatGateway", "ec2:CreateNetworkInterface", "ec2:CreateRoute", "ec2:CreateRouteTable", "ec2:CreateSecurityGroup", "ec2:CreateSubnet", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateVpc", "ec2:CreateVpcEndpoint", "ec2:DeleteDhcpOptions", "ec2:DeleteInternetGateway", "ec2:DeleteNatGateway", "ec2:DeleteNetworkInterface", "ec2:DeleteRoute", "ec2:DeleteRouteTable", "ec2:DeleteSecurityGroup", "ec2:DeleteSnapshot", "ec2:DeleteSubnet", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DeleteVpc", "ec2:DeleteVpcEndpoints", "ec2:DeregisterImage", "ec2:DescribeAccountAttributes", "ec2:DescribeAddresses", "ec2:DescribeAvailabilityZones", "ec2:DescribeDhcpOptions", "ec2:DescribeImages", "ec2:DescribeInstanceAttribute", "ec2:DescribeInstanceCreditSpecifications", "ec2:DescribeInstances", "ec2:DescribeInstanceStatus", "ec2:DescribeInstanceTypeOfferings", "ec2:DescribeInstanceTypes", "ec2:DescribeInternetGateways", "ec2:DescribeKeyPairs", "ec2:DescribeNatGateways", "ec2:DescribeNetworkAcls", "ec2:DescribeNetworkInterfaces", "ec2:DescribePrefixLists", "ec2:DescribeRegions", "ec2:DescribeReservedInstancesOfferings", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups", "ec2:DescribeSecurityGroupRules", "ec2:DescribeSubnets", "ec2:DescribeTags", "ec2:DescribeVolumes", "ec2:DescribeVpcAttribute", "ec2:DescribeVpcClassicLink", "ec2:DescribeVpcClassicLinkDnsSupport", "ec2:DescribeVpcEndpoints", "ec2:DescribeVpcs", "ec2:DetachInternetGateway", "ec2:DisassociateRouteTable", "ec2:GetConsoleOutput", "ec2:GetEbsDefaultKmsKeyId", "ec2:ModifyInstanceAttribute", "ec2:ModifyNetworkInterfaceAttribute", "ec2:ModifySubnetAttribute", "ec2:ModifyVpcAttribute", "ec2:ReleaseAddress", "ec2:ReplaceRouteTableAssociation", "ec2:RevokeSecurityGroupEgress", "ec2:RevokeSecurityGroupIngress", "ec2:RunInstances", "ec2:StartInstances", "ec2:StopInstances", "ec2:TerminateInstances", "elasticloadbalancing:AddTags", "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", "elasticloadbalancing:AttachLoadBalancerToSubnets", "elasticloadbalancing:ConfigureHealthCheck", "elasticloadbalancing:CreateListener", "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:CreateLoadBalancerListeners", "elasticloadbalancing:CreateTargetGroup", "elasticloadbalancing:DeleteLoadBalancer", "elasticloadbalancing:DeleteTargetGroup", "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", "elasticloadbalancing:DeregisterTargets", "elasticloadbalancing:DescribeAccountLimits", "elasticloadbalancing:DescribeInstanceHealth", "elasticloadbalancing:DescribeListeners", "elasticloadbalancing:DescribeLoadBalancerAttributes", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeTags", "elasticloadbalancing:DescribeTargetGroupAttributes", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:ModifyLoadBalancerAttributes", "elasticloadbalancing:ModifyTargetGroup", "elasticloadbalancing:ModifyTargetGroupAttributes", "elasticloadbalancing:RegisterInstancesWithLoadBalancer", "elasticloadbalancing:RegisterTargets", 
"elasticloadbalancing:SetLoadBalancerPoliciesOfListener", "elasticloadbalancing:SetSecurityGroups", "iam:AddRoleToInstanceProfile", "iam:CreateInstanceProfile", "iam:DeleteInstanceProfile", "iam:GetInstanceProfile", "iam:TagInstanceProfile", "iam:GetRole", "iam:GetRolePolicy", "iam:GetUser", "iam:ListAttachedRolePolicies", "iam:ListInstanceProfiles", "iam:ListInstanceProfilesForRole", "iam:ListRolePolicies", "iam:ListRoles", "iam:ListUserPolicies", "iam:ListUsers", "iam:PassRole", "iam:RemoveRoleFromInstanceProfile", "iam:SimulatePrincipalPolicy", "iam:TagRole", "iam:UntagRole", "route53:ChangeResourceRecordSets", "route53:ChangeTagsForResource", "route53:CreateHostedZone", "route53:DeleteHostedZone", "route53:GetAccountLimit", "route53:GetChange", "route53:GetHostedZone", "route53:ListHostedZones", "route53:ListHostedZonesByName", "route53:ListResourceRecordSets", "route53:ListTagsForResource", "route53:UpdateHostedZoneComment", "s3:CreateBucket", "s3:DeleteBucket", "s3:DeleteObject", "s3:DeleteObjectVersion", "s3:GetAccelerateConfiguration", "s3:GetBucketAcl", "s3:GetBucketCORS", "s3:GetBucketLocation", "s3:GetBucketLogging", "s3:GetBucketObjectLockConfiguration", "s3:GetBucketPolicy", "s3:GetBucketRequestPayment", "s3:GetBucketTagging", "s3:GetBucketVersioning", "s3:GetBucketWebsite", "s3:GetEncryptionConfiguration", "s3:GetLifecycleConfiguration", "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectTagging", "s3:GetObjectVersion", "s3:GetReplicationConfiguration", "s3:ListBucket", "s3:ListBucketVersions", "s3:PutBucketAcl", "s3:PutBucketPolicy", "s3:PutBucketTagging", "s3:PutBucketVersioning", "s3:PutEncryptionConfiguration", "s3:PutObject", "s3:PutObjectAcl", "s3:PutObjectTagging", "servicequotas:GetServiceQuota", "servicequotas:ListAWSDefaultServiceQuotas", "sts:AssumeRole", "sts:AssumeRoleWithWebIdentity", "sts:GetCallerIdentity", "tag:GetResources", "tag:UntagResources", "ec2:CreateVpcEndpointServiceConfiguration", "ec2:DeleteVpcEndpointServiceConfigurations", "ec2:DescribeVpcEndpointServiceConfigurations", "ec2:DescribeVpcEndpointServicePermissions", "ec2:DescribeVpcEndpointServices", "ec2:ModifyVpcEndpointServicePermissions", "kms:DescribeKey", "cloudwatch:GetMetricData" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/red-hat-managed": "true" } } } ] } Table 3.4. ROSA control plane role, policy, and policy files Resource Description ManagedOpenShift-ControlPlane-Role An IAM role used by the ROSA control plane. ManagedOpenShift-ControlPlane-Role-Policy An IAM policy that provides the ROSA control plane with the permissions required to manage its components. Example 3.3. sts_instance_controlplane_trust_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "ec2.amazonaws.com" ] }, "Action": [ "sts:AssumeRole" ] } ] } Example 3.4. 
sts_instance_controlplane_permission_policy.json { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadPermissions", "Effect": "Allow", "Action": [ "ec2:DescribeAvailabilityZones", "ec2:DescribeInstances", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups", "ec2:DescribeSubnets", "ec2:DescribeVpcs", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeLoadBalancerAttributes", "elasticloadbalancing:DescribeListeners", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:DescribeLoadBalancerPolicies" ], "Resource": [ "*" ] }, { "Sid": "KMSDescribeKey", "Effect": "Allow", "Action": [ "kms:DescribeKey" ], "Resource": [ "*" ], "Condition": { "StringEquals": { "aws:ResourceTag/red-hat": "true" } } }, { "Effect": "Allow", "Action": [ "ec2:AttachVolume", "ec2:AuthorizeSecurityGroupIngress", "ec2:CreateSecurityGroup", "ec2:CreateTags", "ec2:CreateVolume", "ec2:DeleteSecurityGroup", "ec2:DeleteVolume", "ec2:DetachVolume", "ec2:ModifyInstanceAttribute", "ec2:ModifyVolume", "ec2:RevokeSecurityGroupIngress", "elasticloadbalancing:AddTags", "elasticloadbalancing:AttachLoadBalancerToSubnets", "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", "elasticloadbalancing:CreateListener", "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:CreateLoadBalancerPolicy", "elasticloadbalancing:CreateLoadBalancerListeners", "elasticloadbalancing:CreateTargetGroup", "elasticloadbalancing:ConfigureHealthCheck", "elasticloadbalancing:DeleteListener", "elasticloadbalancing:DeleteLoadBalancer", "elasticloadbalancing:DeleteLoadBalancerListeners", "elasticloadbalancing:DeleteTargetGroup", "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", "elasticloadbalancing:DeregisterTargets", "elasticloadbalancing:DetachLoadBalancerFromSubnets", "elasticloadbalancing:ModifyListener", "elasticloadbalancing:ModifyLoadBalancerAttributes", "elasticloadbalancing:ModifyTargetGroup", "elasticloadbalancing:ModifyTargetGroupAttributes", "elasticloadbalancing:RegisterInstancesWithLoadBalancer", "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer", "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" ], "Resource": "*" } ] } Table 3.5. ROSA compute node role, policy, and policy files Resource Description ManagedOpenShift-Worker-Role An IAM role used by the ROSA compute instances. ManagedOpenShift-Worker-Role-Policy An IAM policy that provides the ROSA compute instances with the permissions required to manage their components. Example 3.5. sts_instance_worker_trust_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "ec2.amazonaws.com" ] }, "Action": [ "sts:AssumeRole" ] } ] } Example 3.6. sts_instance_worker_permission_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeInstances", "ec2:DescribeRegions" ], "Resource": "*" } ] } Table 3.6. ROSA support role, policy, and policy files Resource Description ManagedOpenShift-Support-Role An IAM role used by the Red Hat Site Reliability Engineering (SRE) support team. ManagedOpenShift-Support-Role-Policy An IAM policy that provides the Red Hat SRE support team with the permissions required to support ROSA clusters. Example 3.7. 
sts_support_trust_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::%{aws_account_id}:role/RH-Technical-Support-Access" ] }, "Action": [ "sts:AssumeRole" ] } ] } Example 3.8. sts_support_permission_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "cloudtrail:DescribeTrails", "cloudtrail:LookupEvents", "cloudwatch:GetMetricData", "cloudwatch:GetMetricStatistics", "cloudwatch:ListMetrics", "ec2-instance-connect:SendSerialConsoleSSHPublicKey", "ec2:CopySnapshot", "ec2:CreateNetworkInsightsPath", "ec2:CreateSnapshot", "ec2:CreateSnapshots", "ec2:CreateTags", "ec2:DeleteNetworkInsightsAnalysis", "ec2:DeleteNetworkInsightsPath", "ec2:DeleteTags", "ec2:DescribeAccountAttributes", "ec2:DescribeAddresses", "ec2:DescribeAddressesAttribute", "ec2:DescribeAggregateIdFormat", "ec2:DescribeAvailabilityZones", "ec2:DescribeByoipCidrs", "ec2:DescribeCapacityReservations", "ec2:DescribeCarrierGateways", "ec2:DescribeClassicLinkInstances", "ec2:DescribeClientVpnAuthorizationRules", "ec2:DescribeClientVpnConnections", "ec2:DescribeClientVpnEndpoints", "ec2:DescribeClientVpnRoutes", "ec2:DescribeClientVpnTargetNetworks", "ec2:DescribeCoipPools", "ec2:DescribeCustomerGateways", "ec2:DescribeDhcpOptions", "ec2:DescribeEgressOnlyInternetGateways", "ec2:DescribeIamInstanceProfileAssociations", "ec2:DescribeIdentityIdFormat", "ec2:DescribeIdFormat", "ec2:DescribeImageAttribute", "ec2:DescribeImages", "ec2:DescribeInstanceAttribute", "ec2:DescribeInstances", "ec2:DescribeInstanceStatus", "ec2:DescribeInstanceTypeOfferings", "ec2:DescribeInstanceTypes", "ec2:DescribeInternetGateways", "ec2:DescribeIpv6Pools", "ec2:DescribeKeyPairs", "ec2:DescribeLaunchTemplates", "ec2:DescribeLocalGatewayRouteTables", "ec2:DescribeLocalGatewayRouteTableVirtualInterfaceGroupAssociations", "ec2:DescribeLocalGatewayRouteTableVpcAssociations", "ec2:DescribeLocalGateways", "ec2:DescribeLocalGatewayVirtualInterfaceGroups", "ec2:DescribeLocalGatewayVirtualInterfaces", "ec2:DescribeManagedPrefixLists", "ec2:DescribeNatGateways", "ec2:DescribeNetworkAcls", "ec2:DescribeNetworkInsightsAnalyses", "ec2:DescribeNetworkInsightsPaths", "ec2:DescribeNetworkInterfaces", "ec2:DescribePlacementGroups", "ec2:DescribePrefixLists", "ec2:DescribePrincipalIdFormat", "ec2:DescribePublicIpv4Pools", "ec2:DescribeRegions", "ec2:DescribeReservedInstances", "ec2:DescribeRouteTables", "ec2:DescribeScheduledInstances", "ec2:DescribeSecurityGroupReferences", "ec2:DescribeSecurityGroupRules", "ec2:DescribeSecurityGroups", "ec2:DescribeSnapshotAttribute", "ec2:DescribeSnapshots", "ec2:DescribeSpotFleetInstances", "ec2:DescribeStaleSecurityGroups", "ec2:DescribeSubnets", "ec2:DescribeTags", "ec2:DescribeTransitGatewayAttachments", "ec2:DescribeTransitGatewayConnectPeers", "ec2:DescribeTransitGatewayConnects", "ec2:DescribeTransitGatewayMulticastDomains", "ec2:DescribeTransitGatewayPeeringAttachments", "ec2:DescribeTransitGatewayRouteTables", "ec2:DescribeTransitGateways", "ec2:DescribeTransitGatewayVpcAttachments", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumes", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:DescribeVpcAttribute", "ec2:DescribeVpcClassicLink", "ec2:DescribeVpcClassicLinkDnsSupport", "ec2:DescribeVpcEndpointConnectionNotifications", "ec2:DescribeVpcEndpointConnections", "ec2:DescribeVpcEndpoints", "ec2:DescribeVpcEndpointServiceConfigurations", "ec2:DescribeVpcEndpointServicePermissions", "ec2:DescribeVpcEndpointServices", 
"ec2:DescribeVpcPeeringConnections", "ec2:DescribeVpcs", "ec2:DescribeVpnConnections", "ec2:DescribeVpnGateways", "ec2:GetAssociatedIpv6PoolCidrs", "ec2:GetConsoleOutput", "ec2:GetManagedPrefixListEntries", "ec2:GetSerialConsoleAccessStatus", "ec2:GetTransitGatewayAttachmentPropagations", "ec2:GetTransitGatewayMulticastDomainAssociations", "ec2:GetTransitGatewayPrefixListReferences", "ec2:GetTransitGatewayRouteTableAssociations", "ec2:GetTransitGatewayRouteTablePropagations", "ec2:ModifyInstanceAttribute", "ec2:RebootInstances", "ec2:RunInstances", "ec2:SearchLocalGatewayRoutes", "ec2:SearchTransitGatewayMulticastGroups", "ec2:SearchTransitGatewayRoutes", "ec2:StartInstances", "ec2:StartNetworkInsightsAnalysis", "ec2:StopInstances", "ec2:TerminateInstances", "elasticloadbalancing:ConfigureHealthCheck", "elasticloadbalancing:DescribeAccountLimits", "elasticloadbalancing:DescribeInstanceHealth", "elasticloadbalancing:DescribeListenerCertificates", "elasticloadbalancing:DescribeListeners", "elasticloadbalancing:DescribeLoadBalancerAttributes", "elasticloadbalancing:DescribeLoadBalancerPolicies", "elasticloadbalancing:DescribeLoadBalancerPolicyTypes", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeRules", "elasticloadbalancing:DescribeSSLPolicies", "elasticloadbalancing:DescribeTags", "elasticloadbalancing:DescribeTargetGroupAttributes", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DescribeTargetHealth", "iam:GetRole", "iam:ListRoles", "kms:CreateGrant", "route53:GetHostedZone", "route53:GetHostedZoneCount", "route53:ListHostedZones", "route53:ListHostedZonesByName", "route53:ListResourceRecordSets", "s3:GetBucketTagging", "s3:GetObjectAcl", "s3:GetObjectTagging", "s3:ListAllMyBuckets", "sts:DecodeAuthorizationMessage", "tiros:CreateQuery", "tiros:GetQueryAnswer", "tiros:GetQueryExplanation" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::managed-velero*", "arn:aws:s3:::*image-registry*" ] } ] } Table 3.7. ROSA OCM role and policy file Resource Description ManagedOpenShift-OCM-Role You use this IAM role to create and maintain ROSA clusters in OpenShift Cluster Manager. Example 3.9. sts_ocm_role_trust_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::%{aws_account_id}:role/RH-Managed-OpenShift-Installer" ] }, "Action": [ "sts:AssumeRole" ], "Condition": {"StringEquals": {"sts:ExternalId": "%{ocm_organization_id}"}} } ] } Table 3.8. ROSA user role and policy file Resource Description ManagedOpenShift-User-<OCM_user>-Role An IAM role used by Red Hat to verify the customer's AWS identity. Example 3.10. sts_user_role_trust_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::%{aws_account_id}:role/RH-Managed-OpenShift-Installer" ] }, "Action": [ "sts:AssumeRole" ] } ] } Example CLI output for policies attached to a role When a policy is attached to a role, the ROSA CLI displays a confirmation output. The output depends on the type of policy. If the policy is a trust policy, the ROSA CLI outputs the role name and the content of the policy. For the target role with policy attached, ROSA CLI outputs the role name and the console URL of the target role. 
Target role with policy attached example output I: Attached trust policy to role 'testrole-Worker-Role(https://console.aws.amazon.com/iam/home?#/roles/testrole-Worker-Role)': ****************** If the attached policy is a trust policy, the ROSA CLI outputs the content of this policy. Trust policy example output I: Attached trust policy to role 'test-Support-Role': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRole"], "Effect": "Allow", "Principal": {"AWS": ["arn:aws:iam::000000000000:role/RH-Technical-Support-00000000"]}}]} If the policy is a permission policy, the ROSA CLI outputs the name and public link of this policy or the ARN, depending on whether the policy is an AWS managed policy or a customer-managed policy. If the attached policy is an AWS managed policy, the ROSA CLI outputs the name and public link of this policy and the role it is attached to. AWS managed policy example output I: Attached policy 'ROSASRESupportPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSASRESupportPolicy)' to role 'test-HCP-ROSA-Support-Role(https://console.aws.amazon.com/iam/home?#/roles/test-HCP-ROSA-Support-Role)' If the attached policy is a customer-managed policy, the ROSA CLI outputs the Amazon Resource Name (ARN) of this policy and the role it is attached to. Customer-managed policy example output I: Attached policy 'arn:aws:iam::000000000000:policy/testrole-Worker-Role-Policy' to role 'testrole-Worker-Role(https://console.aws.amazon.com/iam/home?#/roles/testrole-Worker-Role)' Table 3.9. ROSA Ingress Operator IAM policy and policy file Resource Description ManagedOpenShift-openshift-ingress-operator-cloud-credentials An IAM policy that provides the ROSA Ingress Operator with the permissions required to manage external access to a cluster. Example 3.11. openshift_ingress_operator_cloud_credentials_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "elasticloadbalancing:DescribeLoadBalancers", "route53:ListHostedZones", "route53:ListTagsForResources", "route53:ChangeResourceRecordSets", "tag:GetResources" ], "Resource": "*" } ] } Table 3.10. ROSA back-end storage IAM policy and policy file Resource Description ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credentials An IAM policy required by ROSA to manage back-end storage through the Container Storage Interface (CSI). Example 3.12. openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:AttachVolume", "ec2:CreateSnapshot", "ec2:CreateTags", "ec2:CreateVolume", "ec2:DeleteSnapshot", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DescribeAvailabilityZones", "ec2:DescribeInstances", "ec2:DescribeSnapshots", "ec2:DescribeTags", "ec2:DescribeVolumes", "ec2:DescribeVolumesModifications", "ec2:DetachVolume", "ec2:EnableFastSnapshotRestores", "ec2:ModifyVolume" ], "Resource": "*" } ] } Table 3.11. ROSA Machine Config Operator policy and policy file Resource Description ManagedOpenShift-openshift-machine-api-aws-cloud-credentials An IAM policy that provides the ROSA Machine Config Operator with the permissions required to perform core cluster functionality. Example 3.13.
openshift_machine_api_aws_cloud_credentials_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:CreateTags", "ec2:DescribeAvailabilityZones", "ec2:DescribeDhcpOptions", "ec2:DescribeImages", "ec2:DescribeInstances", "ec2:DescribeInternetGateways", "ec2:DescribeInstanceTypes", "ec2:DescribeSecurityGroups", "ec2:DescribeRegions", "ec2:DescribeSubnets", "ec2:DescribeVpcs", "ec2:RunInstances", "ec2:TerminateInstances", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:RegisterInstancesWithLoadBalancer", "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", "iam:PassRole", "iam:CreateServiceLinkedRole" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:Encrypt", "kms:GenerateDataKey", "kms:GenerateDataKeyWithoutPlainText", "kms:DescribeKey" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "kms:RevokeGrant", "kms:CreateGrant", "kms:ListGrants" ], "Resource": "*", "Condition": { "Bool": { "kms:GrantIsForAWSResource": true } } } ] } Table 3.12. ROSA Cloud Credential Operator policy and policy file Resource Description ManagedOpenShift-openshift-cloud-credential-operator-cloud-credentials An IAM policy that provides the ROSA Cloud Credential Operator with the permissions required to manage cloud provider credentials. Example 3.14. openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iam:GetUser", "iam:GetUserPolicy", "iam:ListAccessKeys" ], "Resource": "*" } ] } Table 3.13. ROSA Image Registry Operator policy and policy file Resource Description ManagedOpenShift-openshift-image-registry-installer-cloud-credentials An IAM policy that provides the ROSA Image Registry Operator with the permissions required to manage the OpenShift image registry storage in AWS S3 for a cluster. Example 3.15. openshift_image_registry_installer_cloud_credentials_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutBucketPublicAccessBlock", "s3:GetBucketPublicAccessBlock", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": "*" } ] } Additional resources Red Hat OpenShift Service on AWS update life cycle 3.2.2. Account-wide IAM role and policy AWS CLI reference This section lists the aws CLI commands that the rosa command generates in the terminal. You can run the command in either manual or automatic mode. Using manual mode for account role creation The manual role creation mode generates the aws commands for you to review and run. The following command starts that process, where <openshift_version> refers to your version of Red Hat OpenShift Service on AWS (ROSA), such as 4 . USD rosa create account-roles --mode manual Note The provided command examples include the ManagedOpenShift prefix. The ManagedOpenShift prefix is the default value, if you do not specify a custom prefix by using the --prefix option. 
Command output aws iam create-role \ --role-name ManagedOpenShift-Installer-Role \ --assume-role-policy-document file://sts_installer_trust_policy.json \ --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=installer aws iam put-role-policy \ --role-name ManagedOpenShift-Installer-Role \ --policy-name ManagedOpenShift-Installer-Role-Policy \ --policy-document file://sts_installer_permission_policy.json aws iam create-role \ --role-name ManagedOpenShift-ControlPlane-Role \ --assume-role-policy-document file://sts_instance_controlplane_trust_policy.json \ --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=instance_controlplane aws iam put-role-policy \ --role-name ManagedOpenShift-ControlPlane-Role \ --policy-name ManagedOpenShift-ControlPlane-Role-Policy \ --policy-document file://sts_instance_controlplane_permission_policy.json aws iam create-role \ --role-name ManagedOpenShift-Worker-Role \ --assume-role-policy-document file://sts_instance_worker_trust_policy.json \ --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=instance_worker aws iam put-role-policy \ --role-name ManagedOpenShift-Worker-Role \ --policy-name ManagedOpenShift-Worker-Role-Policy \ --policy-document file://sts_instance_worker_permission_policy.json aws iam create-role \ --role-name ManagedOpenShift-Support-Role \ --assume-role-policy-document file://sts_support_trust_policy.json \ --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=support aws iam put-role-policy \ --role-name ManagedOpenShift-Support-Role \ --policy-name ManagedOpenShift-Support-Role-Policy \ --policy-document file://sts_support_permission_policy.json aws iam create-policy \ --policy-name ManagedOpenShift-openshift-ingress-operator-cloud-credentials \ --policy-document file://openshift_ingress_operator_cloud_credentials_policy.json \ --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-ingress-operator Key=operator_name,Value=cloud-credentials aws iam create-policy \ --policy-name ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent \ --policy-document file://openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json \ --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-cluster-csi-drivers Key=operator_name,Value=ebs-cloud-credentials aws iam create-policy \ --policy-name ManagedOpenShift-openshift-machine-api-aws-cloud-credentials \ --policy-document file://openshift_machine_api_aws_cloud_credentials_policy.json \ --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-machine-api Key=operator_name,Value=aws-cloud-credentials aws iam create-policy \ --policy-name ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede \ --policy-document file://openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json \ --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-cloud-credential-operator Key=operator_name,Value=cloud-credential-operator-iam-ro-creds 
aws iam create-policy \ --policy-name ManagedOpenShift-openshift-image-registry-installer-cloud-creden \ --policy-document file://openshift_image_registry_installer_cloud_credentials_policy.json \ --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-image-registry Key=operator_name,Value=installer-cloud-credentials Using auto mode for role creation When you add the --mode auto argument, the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , creates your roles and policies. The following command starts that process: USD rosa create account-roles --mode auto Note The provided command examples include the ManagedOpenShift prefix. The ManagedOpenShift prefix is the default value, if you do not specify a custom prefix by using the --prefix option. Command output I: Creating roles using 'arn:aws:iam::<ARN>:user/<UserID>' ? Create the 'ManagedOpenShift-Installer-Role' role? Yes I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-Installer-Role' ? Create the 'ManagedOpenShift-ControlPlane-Role' role? Yes I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-ControlPlane-Role' ? Create the 'ManagedOpenShift-Worker-Role' role? Yes I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-Worker-Role' ? Create the 'ManagedOpenShift-Support-Role' role? Yes I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-Support-Role' ? Create the operator policies? Yes I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-cloud-network-config-controller-cloud' I: To create a cluster with these roles, run the following command: rosa create cluster --sts 3.3. Permission boundaries for the installer role You can apply a policy as a permissions boundary on an installer role. You can use an AWS-managed policy or a customer-managed policy to set the boundary for an Amazon Web Services (AWS) Identity and Access Management (IAM) entity (user or role). The combination of policy and boundary policy limits the maximum permissions for the user or role. ROSA includes a set of three prepared permission boundary policy files, with which you can restrict permissions for the installer role since changing the installer policy itself is not supported. Note This feature is only supported on Red Hat OpenShift Service on AWS (classic architecture) clusters. The permission boundary policy files are as follows: The Core boundary policy file contains the minimum permissions needed for ROSA (classic architecture) installer to install an Red Hat OpenShift Service on AWS cluster. The installer does not have permissions to create a virtual private cloud (VPC) or PrivateLink (PL). A VPC needs to be provided. 
The VPC boundary policy file contains the minimum permissions needed for ROSA (classic architecture) installer to create/manage the VPC. It does not include permissions for PL or core installation. If you need to install a cluster with enough permissions for the installer to install the cluster and create/manage the VPC, but you do not need to set up PL, then use the core and VPC boundary files together with the installer role. The PrivateLink (PL) boundary policy file contains the minimum permissions needed for ROSA (classic architecture) installer to create the AWS PL with a cluster. It does not include permissions for VPC or core installation. Provide a pre-created VPC for all PL clusters during installation. When using the permission boundary policy files, the following combinations apply: No permission boundary policies means that the full installer policy permissions apply to your cluster. Core only sets the most restricted permissions for the installer role. The VPC and PL permissions are not included in the Core only boundary policy. Installer cannot create or manage the VPC or PL. You must have a customer-provided VPC, and PrivateLink (PL) is not available. Core + VPC sets the core and VPC permissions for the installer role. Installer cannot create or manage the PL. Assumes you are not using custom/BYO-VPC. Assumes the installer will create and manage the VPC. Core + PrivateLink (PL) means the installer can provision the PL infrastructure. You must have a customer-provided VPC. This is for a private cluster with PL. This example procedure is applicable for an installer role and policy with the most restriction of permissions, using only the core installer permission boundary policy for ROSA. You can complete this with the AWS console or the AWS CLI. This example uses the AWS CLI and the following policy: Example 3.16. 
sts_installer_core_permission_boundary_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups", "ec2:AllocateAddress", "ec2:AssociateAddress", "ec2:AttachNetworkInterface", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:CopyImage", "ec2:CreateNetworkInterface", "ec2:CreateSecurityGroup", "ec2:CreateTags", "ec2:CreateVolume", "ec2:DeleteNetworkInterface", "ec2:DeleteSecurityGroup", "ec2:DeleteSnapshot", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DeregisterImage", "ec2:DescribeAccountAttributes", "ec2:DescribeAddresses", "ec2:DescribeAvailabilityZones", "ec2:DescribeDhcpOptions", "ec2:DescribeImages", "ec2:DescribeInstanceAttribute", "ec2:DescribeInstanceCreditSpecifications", "ec2:DescribeInstances", "ec2:DescribeInstanceStatus", "ec2:DescribeInstanceTypeOfferings", "ec2:DescribeInstanceTypes", "ec2:DescribeInternetGateways", "ec2:DescribeKeyPairs", "ec2:DescribeNatGateways", "ec2:DescribeNetworkAcls", "ec2:DescribeNetworkInterfaces", "ec2:DescribePrefixLists", "ec2:DescribeRegions", "ec2:DescribeReservedInstancesOfferings", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups", "ec2:DescribeSecurityGroupRules", "ec2:DescribeSubnets", "ec2:DescribeTags", "ec2:DescribeVolumes", "ec2:DescribeVpcAttribute", "ec2:DescribeVpcClassicLink", "ec2:DescribeVpcClassicLinkDnsSupport", "ec2:DescribeVpcEndpoints", "ec2:DescribeVpcs", "ec2:GetConsoleOutput", "ec2:GetEbsDefaultKmsKeyId", "ec2:ModifyInstanceAttribute", "ec2:ModifyNetworkInterfaceAttribute", "ec2:ReleaseAddress", "ec2:RevokeSecurityGroupEgress", "ec2:RevokeSecurityGroupIngress", "ec2:RunInstances", "ec2:StartInstances", "ec2:StopInstances", "ec2:TerminateInstances", "elasticloadbalancing:AddTags", "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", "elasticloadbalancing:AttachLoadBalancerToSubnets", "elasticloadbalancing:ConfigureHealthCheck", "elasticloadbalancing:CreateListener", "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:CreateLoadBalancerListeners", "elasticloadbalancing:CreateTargetGroup", "elasticloadbalancing:DeleteLoadBalancer", "elasticloadbalancing:DeleteTargetGroup", "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", "elasticloadbalancing:DeregisterTargets", "elasticloadbalancing:DescribeInstanceHealth", "elasticloadbalancing:DescribeListeners", "elasticloadbalancing:DescribeLoadBalancerAttributes", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeTags", "elasticloadbalancing:DescribeTargetGroupAttributes", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:ModifyLoadBalancerAttributes", "elasticloadbalancing:ModifyTargetGroup", "elasticloadbalancing:ModifyTargetGroupAttributes", "elasticloadbalancing:RegisterInstancesWithLoadBalancer", "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:SetLoadBalancerPoliciesOfListener", "elasticloadbalancing:SetSecurityGroups", "iam:AddRoleToInstanceProfile", "iam:CreateInstanceProfile", "iam:DeleteInstanceProfile", "iam:GetInstanceProfile", "iam:TagInstanceProfile", "iam:GetRole", "iam:GetRolePolicy", "iam:GetUser", "iam:ListAttachedRolePolicies", "iam:ListInstanceProfiles", "iam:ListInstanceProfilesForRole", "iam:ListRolePolicies", "iam:ListRoles", "iam:ListUserPolicies", "iam:ListUsers", "iam:PassRole", "iam:RemoveRoleFromInstanceProfile", "iam:SimulatePrincipalPolicy", "iam:TagRole", "iam:UntagRole", "route53:ChangeResourceRecordSets", 
"route53:ChangeTagsForResource", "route53:CreateHostedZone", "route53:DeleteHostedZone", "route53:GetAccountLimit", "route53:GetChange", "route53:GetHostedZone", "route53:ListHostedZones", "route53:ListHostedZonesByName", "route53:ListResourceRecordSets", "route53:ListTagsForResource", "route53:UpdateHostedZoneComment", "s3:CreateBucket", "s3:DeleteBucket", "s3:DeleteObject", "s3:GetAccelerateConfiguration", "s3:GetBucketAcl", "s3:GetBucketCORS", "s3:GetBucketLocation", "s3:GetBucketLogging", "s3:GetBucketObjectLockConfiguration", "s3:GetBucketPolicy", "s3:GetBucketRequestPayment", "s3:GetBucketTagging", "s3:GetBucketVersioning", "s3:GetBucketWebsite", "s3:GetEncryptionConfiguration", "s3:GetLifecycleConfiguration", "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectTagging", "s3:GetObjectVersion", "s3:GetReplicationConfiguration", "s3:ListBucket", "s3:ListBucketVersions", "s3:PutBucketAcl", "s3:PutBucketPolicy", "s3:PutBucketTagging", "s3:PutEncryptionConfiguration", "s3:PutObject", "s3:PutObjectAcl", "s3:PutObjectTagging", "servicequotas:GetServiceQuota", "servicequotas:ListAWSDefaultServiceQuotas", "sts:AssumeRole", "sts:AssumeRoleWithWebIdentity", "sts:GetCallerIdentity", "tag:GetResources", "tag:UntagResources", "kms:DescribeKey", "cloudwatch:GetMetricData", "ec2:CreateRoute", "ec2:DeleteRoute", "ec2:CreateVpcEndpoint", "ec2:DeleteVpcEndpoints", "ec2:CreateVpcEndpointServiceConfiguration", "ec2:DeleteVpcEndpointServiceConfigurations", "ec2:DescribeVpcEndpointServiceConfigurations", "ec2:DescribeVpcEndpointServicePermissions", "ec2:DescribeVpcEndpointServices", "ec2:ModifyVpcEndpointServicePermissions" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/red-hat-managed": "true" } } } ] } Important To use the permission boundaries, you will need to prepare the permission boundary policy and add it to your relevant installer role in AWS IAM. While the ROSA ( rosa ) CLI offers a permission boundary function, it applies to all roles and not just the installer role, which means it does not work with the provided permission boundary policies (which are only for the installer role). Prerequisites You have an AWS account. You have the permissions required to administer AWS roles and policies. You have installed and configured the latest AWS ( aws ) and ROSA ( rosa ) CLIs on your workstation. You have already prepared your ROSA account-wide roles, includes the installer role, and the corresponding policies. If these do not exist in your AWS account, see "Creating the account-wide STS roles and policies" in Additional resources . 
Procedure Prepare the policy file by entering the following command in the rosa CLI: USD curl -o ./rosa-installer-core.json https://raw.githubusercontent.com/openshift/managed-cluster-config/master/resources/sts/4.18/sts_installer_core_permission_boundary_policy.json Create the policy in AWS and gather its Amazon Resource Name (ARN) by entering the following command: USD aws iam create-policy \ --policy-name rosa-core-permissions-boundary-policy \ --policy-document file://./rosa-installer-core.json \ --description "ROSA installer core permission boundary policy, the minimum permission set, allows BYO-VPC, disallows PrivateLink" Example output { "Policy": { "PolicyName": "rosa-core-permissions-boundary-policy", "PolicyId": "<Policy ID>", "Arn": "arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy", "Path": "/", "DefaultVersionId": "v1", "AttachmentCount": 0, "PermissionsBoundaryUsageCount": 0, "IsAttachable": true, "CreateDate": "<CreateDate>", "UpdateDate": "<UpdateDate>" } } Add the permission boundary policy to the installer role you want to restrict by entering the following command: USD aws iam put-role-permissions-boundary \ --role-name ManagedOpenShift-Installer-Role \ --permissions-boundary arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy Display the installer role to validate attached policies (including permissions boundary) by entering the following command in the rosa CLI: USD aws iam get-role --role-name ManagedOpenShift-Installer-Role \ --output text | grep PERMISSIONSBOUNDARY Example output PERMISSIONSBOUNDARY arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy Policy For more examples of PL and VPC permission boundary policies see: Example 3.17. sts_installer_privatelink_permission_boundary_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:ModifyVpcEndpointServiceConfiguration", "route53:ListHostedZonesByVPC", "route53:CreateVPCAssociationAuthorization", "route53:AssociateVPCWithHostedZone", "route53:DeleteVPCAssociationAuthorization", "route53:DisassociateVPCFromHostedZone", "route53:ChangeResourceRecordSets" ], "Resource": "*" } ] } Example 3.18. sts_installer_vpc_permission_boundary_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:AssociateDhcpOptions", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:CreateDhcpOptions", "ec2:CreateInternetGateway", "ec2:CreateNatGateway", "ec2:CreateRouteTable", "ec2:CreateSubnet", "ec2:CreateVpc", "ec2:DeleteDhcpOptions", "ec2:DeleteInternetGateway", "ec2:DeleteNatGateway", "ec2:DeleteRouteTable", "ec2:DeleteSubnet", "ec2:DeleteVpc", "ec2:DetachInternetGateway", "ec2:DisassociateRouteTable", "ec2:ModifySubnetAttribute", "ec2:ModifyVpcAttribute", "ec2:ReplaceRouteTableAssociation" ], "Resource": "*" } ] } Additional resources Permissions boundaries for IAM entities (AWS documentation) Creating the account-wide STS roles and policies 3.4. Cluster-specific Operator IAM role reference This section provides details about the Operator IAM roles that are required for Red Hat OpenShift Service on AWS (ROSA) deployments that use STS. The cluster Operators use the Operator roles to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage, cloud provider credentials, and external access to a cluster. When you create the Operator roles, the account-wide Operator policies for the matching cluster version are attached to the roles. 
The Operator policies are tagged with the Operator and version they are compatible with. The correct policy for an Operator role is determined by using the tags. Note If more than one matching policy is available in your account for an Operator role, an interactive list of options is provided when you create the Operator role. Table 3.14. ROSA cluster-specific Operator roles Resource Description <cluster_name>-<hash>-openshift-cluster-csi-drivers-ebs-cloud-credentials An IAM role required by ROSA to manage back-end storage through the Container Storage Interface (CSI). <cluster_name>-<hash>-openshift-machine-api-aws-cloud-credentials An IAM role required by the ROSA Machine Config Operator to perform core cluster functionality. <cluster_name>-<hash>-openshift-cloud-credential-operator-cloud-credentials An IAM role required by the ROSA Cloud Credential Operator to manage cloud provider credentials. <cluster_name>-<hash>-openshift-cloud-network-config-controller-credentials An IAM role required by the cloud network config controller to manage cloud network configuration for a cluster. <cluster_name>-<hash>-openshift-image-registry-installer-cloud-credentials An IAM role required by the ROSA Image Registry Operator to manage the OpenShift image registry storage in AWS S3 for a cluster. <cluster_name>-<hash>-openshift-ingress-operator-cloud-credentials An IAM role required by the ROSA Ingress Operator to manage external access to a cluster. <cluster_name>-<hash>-openshift-cloud-network-config-controller-cloud-credentials An IAM role required by the cloud network config controller to manage cloud network credentials for a cluster. 3.4.1. Operator IAM role AWS CLI reference This section lists the aws CLI commands that are shown in the terminal when you run the following rosa command using manual mode: USD rosa create operator-roles --mode manual --cluster <cluster_name> Note When using manual mode, the aws commands are printed to the terminal for your review. After reviewing the aws commands, you must run them manually. Alternatively, you can specify --mode auto with the rosa create command to run the aws commands immediately.
Command output aws iam create-role \ --role-name <cluster_name>-<hash>-openshift-cluster-csi-drivers-ebs-cloud-credent \ --assume-role-policy-document file://operator_cluster_csi_drivers_ebs_cloud_credentials_policy.json \ --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-cluster-csi-drivers Key=operator_name,Value=ebs-cloud-credentials aws iam attach-role-policy \ --role-name <cluster_name>-<hash>-openshift-cluster-csi-drivers-ebs-cloud-credent \ --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent aws iam create-role \ --role-name <cluster_name>-<hash>-openshift-machine-api-aws-cloud-credentials \ --assume-role-policy-document file://operator_machine_api_aws_cloud_credentials_policy.json \ --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-machine-api Key=operator_name,Value=aws-cloud-credentials aws iam attach-role-policy \ --role-name <cluster_name>-<hash>-openshift-machine-api-aws-cloud-credentials \ --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials aws iam create-role \ --role-name <cluster_name>-<hash>-openshift-cloud-credential-operator-cloud-crede \ --assume-role-policy-document file://operator_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json \ --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-cloud-credential-operator Key=operator_name,Value=cloud-credential-operator-iam-ro-creds aws iam attach-role-policy \ --role-name <cluster_name>-<hash>-openshift-cloud-credential-operator-cloud-crede \ --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede aws iam create-role \ --role-name <cluster_name>-<hash>-openshift-image-registry-installer-cloud-creden \ --assume-role-policy-document file://operator_image_registry_installer_cloud_credentials_policy.json \ --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-image-registry Key=operator_name,Value=installer-cloud-credentials aws iam attach-role-policy \ --role-name <cluster_name>-<hash>-openshift-image-registry-installer-cloud-creden \ --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden aws iam create-role \ --role-name <cluster_name>-<hash>-openshift-ingress-operator-cloud-credentials \ --assume-role-policy-document file://operator_ingress_operator_cloud_credentials_policy.json \ --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-ingress-operator Key=operator_name,Value=cloud-credentials aws iam attach-role-policy \ --role-name <cluster_name>-<hash>-openshift-ingress-operator-cloud-credentials \ --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials Note The command examples provided in the table include Operator roles that use the ManagedOpenShift prefix. 
If you defined a custom prefix when you created the account-wide roles and policies, including the Operator policies, you must reference it by using the --prefix <prefix_name> option when you create the Operator roles. 3.4.2. About custom Operator IAM role prefixes Each Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS) requires cluster-specific Operator IAM roles. By default, the Operator role names are prefixed with the cluster name and a random 4-digit hash. For example, the Cloud Credential Operator IAM role for a cluster named mycluster has the default name mycluster-<hash>-openshift-cloud-credential-operator-cloud-credentials , where <hash> is a random 4-digit string. This default naming convention enables you to easily identify the Operator IAM roles for a cluster in your AWS account. When you create the Operator roles for a cluster, you can optionally specify a custom prefix to use instead of <cluster_name>-<hash> . By using a custom prefix, you can prepend logical identifiers to your Operator role names to meet the requirements of your environment. For example, you might prefix the cluster name and the environment type, such as mycluster-dev . In that example, the Cloud Credential Operator role name with the custom prefix is mycluster-dev-openshift-cloud-credential-operator-cloud-credenti . Note The role names are truncated to 64 characters. Additional resources Creating a cluster with customizations using the CLI Creating a cluster with customizations by using OpenShift Cluster Manager 3.5. OpenID Connect (OIDC) requirements for Operator authentication For ROSA installations that use STS, you must create a cluster-specific OIDC provider that is used by the cluster Operators to authenticate, or create your own OIDC configuration for your own OIDC provider. 3.5.1. Creating an OIDC provider using the CLI You can create an OIDC provider that is hosted in your AWS account with the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . Prerequisites You have installed the latest version of the ROSA CLI. Procedure You can create an OIDC provider by using an unregistered or a registered OIDC configuration. Unregistered OIDC configurations require you to create the OIDC provider through the cluster. Run the following command to create the OIDC provider: USD rosa create oidc-provider --mode manual --cluster <cluster_name> Note When using manual mode, the aws command is printed to the terminal for your review. After reviewing the aws command, you must run it manually. Alternatively, you can specify --mode auto with the rosa create command to run the aws command immediately. Command output aws iam create-open-id-connect-provider \ --url https://oidc.op1.openshiftapps.com/<oidc_config_id> \ 1 --client-id-list openshift sts.<aws_region>.amazonaws.com \ --thumbprint-list <thumbprint> 2 1 The URL used to reach the OpenID Connect (OIDC) identity provider after the cluster is created. 2 The thumbprint is generated automatically when you run the rosa create oidc-provider command. For more information about using thumbprints with AWS Identity and Access Management (IAM) OIDC identity providers, see the AWS documentation . Registered OIDC configurations use an OIDC configuration ID.
Run the following command with your OIDC configuration ID: USD rosa create oidc-provider --oidc-config-id <oidc_config_id> --mode auto -y Command output I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName' I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/241rh9ql5gpu99d7leokhvkp8icnalpf' 3.5.2. Creating an OpenID Connect Configuration When using a cluster hosted by Red Hat, you can create a managed or unmanaged OpenID Connect (OIDC) configuration by using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . A managed OIDC configuration is stored within Red Hat's AWS account, while a generated unmanaged OIDC configuration is stored within your AWS account. The OIDC configuration is registered to be used with OpenShift Cluster Manager. When creating an unmanaged OIDC configuration, the CLI provides the private key for you. Creating an OpenID Connect configuration When using a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration prior to creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager. Prerequisites You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS. You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your installation host. Procedure To create your OIDC configuration alongside the AWS resources, run the following command: USD rosa create oidc-config --mode=auto --yes This command returns the following information. Example output ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes I: Setting up managed OIDC configuration I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice: rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b If you are going to create a Hosted Control Plane cluster please include '--hosted-cp' I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName' ? Create the OIDC provider? Yes I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b' When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value for --mode auto , otherwise you must determine these values based on aws CLI output for --mode manual . Optional: you can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable: USD export OIDC_ID=<oidc_config_id> 1 1 In the example output above, the OIDC configuration ID is 13cdr6b. View the value of the variable by running the following command: USD echo USDOIDC_ID Example output 13cdr6b Verification You can list the possible OIDC configurations available for your clusters that are associated with your user organization. Run the following command: USD rosa list oidc-config Example output ID MANAGED ISSUER URL SECRET ARN 2330dbs0n8m3chkkr25gkkcd8pnj3lk2 true https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2 233hvnrjoqu14jltk6lhbhf2tj11f8un false https://oidc-r7u1.s3.us-east-1.amazonaws.com aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN Parameter options for creating your own OpenID Connect configuration The following options may be added to the rosa create oidc-config command. All of these parameters are optional. 
Running the rosa create oidc-config command without parameters creates an unmanaged OIDC configuration. Note You are required to register the unmanaged OIDC configuration by posting a request to /oidc_configs through OpenShift Cluster Manager. You receive an ID in the response. Use this ID to create a cluster. raw-files Allows you to provide raw files for the private RSA key. This key is named rosa-private-key-oidc-<random_label_of_length_4>.key . You also receive a discovery document, named discovery-document-oidc-<random_label_of_length_4>.json , and a JSON Web Key Set, named jwks-oidc-<random_label_of_length_4>.json . You use these files to set up the endpoint. This endpoint responds to /.well-known/openid-configuration with the discovery document and on keys.json with the JSON Web Key Set. The private key is stored in Amazon Web Services (AWS) Secrets Manager Service (SMS) as plaintext. Example USD rosa create oidc-config --raw-files mode Allows you to specify the mode to create your OIDC configuration. With the manual option, you receive AWS commands that set up the OIDC configuration in an S3 bucket. This option stores the private key in the Secrets Manager. With the manual option, the OIDC Endpoint URL is the URL for the S3 bucket. You must retrieve the Secrets Manager ARN to register the OIDC configuration with OpenShift Cluster Manager. You receive the same OIDC configuration and AWS resources as the manual mode when using the auto option. A significant difference between the two options is that when using the auto option, ROSA calls AWS, so you do not need to take any further actions. The OIDC Endpoint URL is the URL for the S3 bucket. The CLI retrieves the Secrets Manager ARN, registers the OIDC configuration with OpenShift Cluster Manager, and reports the second rosa command that the user can run to continue with the creation of the STS cluster. Example USD rosa create oidc-config --mode=<auto|manual> managed Creates an OIDC configuration that is hosted under Red Hat's AWS account. This command creates a private key that responds directly with an OIDC Config ID for you to use when creating the STS cluster. Example USD rosa create oidc-config --managed Example output W: For a managed OIDC Config only auto mode is supported. However, you may choose the provider creation mode ? OIDC Provider creation mode: auto I: Setting up managed OIDC configuration I: Please run the following command to create a cluster with this oidc config rosa create cluster --sts --oidc-config-id 233jnu62i9aphpucsj9kueqlkr1vcgra I: Creating OIDC provider using 'arn:aws:iam::242819244:user/userName' ? Create the OIDC provider? Yes I: Created OIDC provider with ARN 'arn:aws:iam::242819244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/233jnu62i9aphpucsj9kueqlkr1vcgra' 3.6. Minimum set of effective permissions for service control policies (SCP) Service control policies (SCP) are a type of organization policy that manages permissions within your organization. SCPs ensure that accounts within your organization stay within your defined access control guidelines. These policies are maintained in AWS Organizations and control the services that are available within the attached AWS accounts. SCP management is the responsibility of the customer. Note When using AWS Security Token Service (STS), you must ensure that the service control policy does not block the following resources: ec2:{} iam:{} tag:* Verify that your service control policy (SCP) does not restrict any of these required permissions. 
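As a minimal sketch only, an SCP statement that keeps these STS-required services allowed might look like the following. This interprets the ec2:{} and iam:{} entries above as all actions in those services; the wildcard actions and the statement ID are illustrative assumptions rather than a Red Hat-provided policy, and the full set of services that must remain allowed is listed in the table that follows.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ExampleOnlyAllowRosaRequiredServices",
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "iam:*",
        "tag:*"
      ],
      "Resource": "*"
    }
  ]
}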
Service Actions Effect Required Amazon EC2 All Allow Amazon EC2 Auto Scaling All Allow Amazon S3 All Allow Identity And Access Management All Allow Elastic Load Balancing All Allow Elastic Load Balancing V2 All Allow Amazon CloudWatch All Allow Amazon CloudWatch Events All Allow Amazon CloudWatch Logs All Allow AWS EC2 Instance Connect SendSerialConsoleSSHPublicKey Allow AWS Support All Allow AWS Key Management Service All Allow AWS Security Token Service All Allow AWS Tiro CreateQuery GetQueryAnswer GetQueryExplanation Allow AWS Marketplace Subscribe Unsubscribe View Subscriptions Allow AWS Resource Tagging All Allow AWS Route53 DNS All Allow AWS Service Quotas ListServices GetRequestedServiceQuotaChange GetServiceQuota RequestServiceQuotaIncrease ListServiceQuotas Allow Optional AWS Billing ViewAccount Viewbilling ViewUsage Allow AWS Cost and Usage Report All Allow AWS Cost Explorer Services All Allow Additional resources Service control policies SCP effects on permissions 3.7. Customer-managed policies Red Hat OpenShift Service on AWS (ROSA) users are able to attach customer-managed policies to the IAM roles required to run and maintain ROSA clusters. This capability is not uncommon with AWS IAM roles. The ability to attach these policies to ROSA-specific IAM roles extends a ROSA cluster's permission capabilities; for example, as a way to allow cluster components to access additional AWS resources that are otherwise not part of the ROSA-specific IAM policies. To ensure that any critical customer applications that rely on customer-managed policies are not modified in any way during cluster or role upgrades, ROSA utilizes the ListAttachedRolesPolicies permission to retrieve the list of permission policies from roles and the ListRolePolicies permission to retrieve the list of policies from ROSA-specific roles. This information ensures that customer-managed policies are not impacted during cluster events, and allows Red Hat SREs to monitor both ROSA and customer-managed policies attached to ROSA-specific IAM roles, enhancing their ability to troubleshoot any cluster issues more effectively. Warning Attaching permission boundary policies to IAM roles that restrict ROSA-specific policies is not supported, as these policies could interrupt the functionality of the basic permissions necessary to successfully run and maintain your ROSA cluster. There are prepared permissions boundary policies for the ROSA (classic architecture) installer role. See the Additional resources section for more information. Additional resources Permission boundaries for the installer role Permissions boundaries for IAM entities
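As an illustration of this capability, the following sketch attaches a customer-managed policy to one of the cluster-specific Operator roles by using the aws CLI. The policy name my-app-extra-permissions, its policy document file, and the role name shown are hypothetical placeholders, not values provided by Red Hat.
# Create the customer-managed policy from a policy document that you maintain (hypothetical file and policy name).
aws iam create-policy \
  --policy-name my-app-extra-permissions \
  --policy-document file://my-app-extra-permissions.json
# Attach the customer-managed policy to a ROSA Operator role (hypothetical role name).
aws iam attach-role-policy \
  --role-name <cluster_name>-<hash>-openshift-image-registry-installer-cloud-creden \
  --policy-arn arn:aws:iam::<aws_account_id>:policy/my-app-extra-permissions
Because ROSA uses the iam:ListAttachedRolePolicies and iam:ListRolePolicies permissions to track the policies attached to its roles, a policy attached in this way is not impacted during cluster or role upgrades.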
[ "rosa create ocm-role", "rosa create ocm-role --admin", "I: Creating ocm role ? Role prefix: ManagedOpenShift 1 ? Enable admin capabilities for the OCM role (optional): No 2 ? Permissions boundary ARN (optional): 3 ? Role Path (optional): 4 ? Role creation mode: auto 5 I: Creating role using 'arn:aws:iam::<ARN>:user/<UserName>' ? Create the 'ManagedOpenShift-OCM-Role-182' role? Yes 6 I: Created role 'ManagedOpenShift-OCM-Role-182' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' I: Linking OCM role ? OCM Role ARN: arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182 7 ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' role with organization '<AWS ARN>'? Yes 8 I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' with organization account '<AWS ARN>'", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::%{aws_account_id}:role/RH-Managed-OpenShift-Installer\" ] }, \"Action\": [ \"sts:AssumeRole\" ] } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"autoscaling:DescribeAutoScalingGroups\", \"ec2:AllocateAddress\", \"ec2:AssociateAddress\", \"ec2:AssociateDhcpOptions\", \"ec2:AssociateRouteTable\", \"ec2:AttachInternetGateway\", \"ec2:AttachNetworkInterface\", \"ec2:AuthorizeSecurityGroupEgress\", \"ec2:AuthorizeSecurityGroupIngress\", \"ec2:CopyImage\", \"ec2:CreateDhcpOptions\", \"ec2:CreateInternetGateway\", \"ec2:CreateNatGateway\", \"ec2:CreateNetworkInterface\", \"ec2:CreateRoute\", \"ec2:CreateRouteTable\", \"ec2:CreateSecurityGroup\", \"ec2:CreateSubnet\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateVpc\", \"ec2:CreateVpcEndpoint\", \"ec2:DeleteDhcpOptions\", \"ec2:DeleteInternetGateway\", \"ec2:DeleteNatGateway\", \"ec2:DeleteNetworkInterface\", \"ec2:DeleteRoute\", \"ec2:DeleteRouteTable\", \"ec2:DeleteSecurityGroup\", \"ec2:DeleteSnapshot\", \"ec2:DeleteSubnet\", \"ec2:DeleteTags\", \"ec2:DeleteVolume\", \"ec2:DeleteVpc\", \"ec2:DeleteVpcEndpoints\", \"ec2:DeregisterImage\", \"ec2:DescribeAccountAttributes\", \"ec2:DescribeAddresses\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeDhcpOptions\", \"ec2:DescribeImages\", \"ec2:DescribeInstanceAttribute\", \"ec2:DescribeInstanceCreditSpecifications\", \"ec2:DescribeInstances\", \"ec2:DescribeInstanceStatus\", \"ec2:DescribeInstanceTypeOfferings\", \"ec2:DescribeInstanceTypes\", \"ec2:DescribeInternetGateways\", \"ec2:DescribeKeyPairs\", \"ec2:DescribeNatGateways\", \"ec2:DescribeNetworkAcls\", \"ec2:DescribeNetworkInterfaces\", \"ec2:DescribePrefixLists\", \"ec2:DescribeRegions\", \"ec2:DescribeReservedInstancesOfferings\", \"ec2:DescribeRouteTables\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeSecurityGroupRules\", \"ec2:DescribeSubnets\", \"ec2:DescribeTags\", \"ec2:DescribeVolumes\", \"ec2:DescribeVpcAttribute\", \"ec2:DescribeVpcClassicLink\", \"ec2:DescribeVpcClassicLinkDnsSupport\", \"ec2:DescribeVpcEndpoints\", \"ec2:DescribeVpcs\", \"ec2:DetachInternetGateway\", \"ec2:DisassociateRouteTable\", \"ec2:GetConsoleOutput\", \"ec2:GetEbsDefaultKmsKeyId\", \"ec2:ModifyInstanceAttribute\", \"ec2:ModifyNetworkInterfaceAttribute\", \"ec2:ModifySubnetAttribute\", \"ec2:ModifyVpcAttribute\", \"ec2:ReleaseAddress\", \"ec2:ReplaceRouteTableAssociation\", \"ec2:RevokeSecurityGroupEgress\", \"ec2:RevokeSecurityGroupIngress\", \"ec2:RunInstances\", \"ec2:StartInstances\", \"ec2:StopInstances\", \"ec2:TerminateInstances\", 
\"elasticloadbalancing:AddTags\", \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\", \"elasticloadbalancing:AttachLoadBalancerToSubnets\", \"elasticloadbalancing:ConfigureHealthCheck\", \"elasticloadbalancing:CreateListener\", \"elasticloadbalancing:CreateLoadBalancer\", \"elasticloadbalancing:CreateLoadBalancerListeners\", \"elasticloadbalancing:CreateTargetGroup\", \"elasticloadbalancing:DeleteLoadBalancer\", \"elasticloadbalancing:DeleteTargetGroup\", \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\", \"elasticloadbalancing:DeregisterTargets\", \"elasticloadbalancing:DescribeAccountLimits\", \"elasticloadbalancing:DescribeInstanceHealth\", \"elasticloadbalancing:DescribeListeners\", \"elasticloadbalancing:DescribeLoadBalancerAttributes\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeTags\", \"elasticloadbalancing:DescribeTargetGroupAttributes\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:ModifyLoadBalancerAttributes\", \"elasticloadbalancing:ModifyTargetGroup\", \"elasticloadbalancing:ModifyTargetGroupAttributes\", \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\", \"elasticloadbalancing:SetSecurityGroups\", \"iam:AddRoleToInstanceProfile\", \"iam:CreateInstanceProfile\", \"iam:DeleteInstanceProfile\", \"iam:GetInstanceProfile\", \"iam:TagInstanceProfile\", \"iam:GetRole\", \"iam:GetRolePolicy\", \"iam:GetUser\", \"iam:ListAttachedRolePolicies\", \"iam:ListInstanceProfiles\", \"iam:ListInstanceProfilesForRole\", \"iam:ListRolePolicies\", \"iam:ListRoles\", \"iam:ListUserPolicies\", \"iam:ListUsers\", \"iam:PassRole\", \"iam:RemoveRoleFromInstanceProfile\", \"iam:SimulatePrincipalPolicy\", \"iam:TagRole\", \"iam:UntagRole\", \"route53:ChangeResourceRecordSets\", \"route53:ChangeTagsForResource\", \"route53:CreateHostedZone\", \"route53:DeleteHostedZone\", \"route53:GetAccountLimit\", \"route53:GetChange\", \"route53:GetHostedZone\", \"route53:ListHostedZones\", \"route53:ListHostedZonesByName\", \"route53:ListResourceRecordSets\", \"route53:ListTagsForResource\", \"route53:UpdateHostedZoneComment\", \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:DeleteObject\", \"s3:DeleteObjectVersion\", \"s3:GetAccelerateConfiguration\", \"s3:GetBucketAcl\", \"s3:GetBucketCORS\", \"s3:GetBucketLocation\", \"s3:GetBucketLogging\", \"s3:GetBucketObjectLockConfiguration\", \"s3:GetBucketPolicy\", \"s3:GetBucketRequestPayment\", \"s3:GetBucketTagging\", \"s3:GetBucketVersioning\", \"s3:GetBucketWebsite\", \"s3:GetEncryptionConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetObject\", \"s3:GetObjectAcl\", \"s3:GetObjectTagging\", \"s3:GetObjectVersion\", \"s3:GetReplicationConfiguration\", \"s3:ListBucket\", \"s3:ListBucketVersions\", \"s3:PutBucketAcl\", \"s3:PutBucketPolicy\", \"s3:PutBucketTagging\", \"s3:PutBucketVersioning\", \"s3:PutEncryptionConfiguration\", \"s3:PutObject\", \"s3:PutObjectAcl\", \"s3:PutObjectTagging\", \"servicequotas:GetServiceQuota\", \"servicequotas:ListAWSDefaultServiceQuotas\", \"sts:AssumeRole\", \"sts:AssumeRoleWithWebIdentity\", \"sts:GetCallerIdentity\", \"tag:GetResources\", \"tag:UntagResources\", \"ec2:CreateVpcEndpointServiceConfiguration\", \"ec2:DeleteVpcEndpointServiceConfigurations\", \"ec2:DescribeVpcEndpointServiceConfigurations\", \"ec2:DescribeVpcEndpointServicePermissions\", \"ec2:DescribeVpcEndpointServices\", 
\"ec2:ModifyVpcEndpointServicePermissions\", \"kms:DescribeKey\", \"cloudwatch:GetMetricData\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"secretsmanager:GetSecretValue\" ], \"Resource\": \"*\", \"Condition\": { \"StringEquals\": { \"aws:ResourceTag/red-hat-managed\": \"true\" } } } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": [ \"ec2.amazonaws.com\" ] }, \"Action\": [ \"sts:AssumeRole\" ] } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"ReadPermissions\", \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeInstances\", \"ec2:DescribeRouteTables\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeSubnets\", \"ec2:DescribeVpcs\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeLoadBalancerAttributes\", \"elasticloadbalancing:DescribeListeners\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:DescribeLoadBalancerPolicies\" ], \"Resource\": [ \"*\" ] }, { \"Sid\": \"KMSDescribeKey\", \"Effect\": \"Allow\", \"Action\": [ \"kms:DescribeKey\" ], \"Resource\": [ \"*\" ], \"Condition\": { \"StringEquals\": { \"aws:ResourceTag/red-hat\": \"true\" } } }, { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AttachVolume\", \"ec2:AuthorizeSecurityGroupIngress\", \"ec2:CreateSecurityGroup\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:DeleteSecurityGroup\", \"ec2:DeleteVolume\", \"ec2:DetachVolume\", \"ec2:ModifyInstanceAttribute\", \"ec2:ModifyVolume\", \"ec2:RevokeSecurityGroupIngress\", \"elasticloadbalancing:AddTags\", \"elasticloadbalancing:AttachLoadBalancerToSubnets\", \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\", \"elasticloadbalancing:CreateListener\", \"elasticloadbalancing:CreateLoadBalancer\", \"elasticloadbalancing:CreateLoadBalancerPolicy\", \"elasticloadbalancing:CreateLoadBalancerListeners\", \"elasticloadbalancing:CreateTargetGroup\", \"elasticloadbalancing:ConfigureHealthCheck\", \"elasticloadbalancing:DeleteListener\", \"elasticloadbalancing:DeleteLoadBalancer\", \"elasticloadbalancing:DeleteLoadBalancerListeners\", \"elasticloadbalancing:DeleteTargetGroup\", \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\", \"elasticloadbalancing:DeregisterTargets\", \"elasticloadbalancing:DetachLoadBalancerFromSubnets\", \"elasticloadbalancing:ModifyListener\", \"elasticloadbalancing:ModifyLoadBalancerAttributes\", \"elasticloadbalancing:ModifyTargetGroup\", \"elasticloadbalancing:ModifyTargetGroupAttributes\", \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\", \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": [ \"ec2.amazonaws.com\" ] }, \"Action\": [ \"sts:AssumeRole\" ] } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeInstances\", \"ec2:DescribeRegions\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::%{aws_account_id}:role/RH-Technical-Support-Access\" ] }, \"Action\": [ \"sts:AssumeRole\" ] } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"cloudtrail:DescribeTrails\", 
\"cloudtrail:LookupEvents\", \"cloudwatch:GetMetricData\", \"cloudwatch:GetMetricStatistics\", \"cloudwatch:ListMetrics\", \"ec2-instance-connect:SendSerialConsoleSSHPublicKey\", \"ec2:CopySnapshot\", \"ec2:CreateNetworkInsightsPath\", \"ec2:CreateSnapshot\", \"ec2:CreateSnapshots\", \"ec2:CreateTags\", \"ec2:DeleteNetworkInsightsAnalysis\", \"ec2:DeleteNetworkInsightsPath\", \"ec2:DeleteTags\", \"ec2:DescribeAccountAttributes\", \"ec2:DescribeAddresses\", \"ec2:DescribeAddressesAttribute\", \"ec2:DescribeAggregateIdFormat\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeByoipCidrs\", \"ec2:DescribeCapacityReservations\", \"ec2:DescribeCarrierGateways\", \"ec2:DescribeClassicLinkInstances\", \"ec2:DescribeClientVpnAuthorizationRules\", \"ec2:DescribeClientVpnConnections\", \"ec2:DescribeClientVpnEndpoints\", \"ec2:DescribeClientVpnRoutes\", \"ec2:DescribeClientVpnTargetNetworks\", \"ec2:DescribeCoipPools\", \"ec2:DescribeCustomerGateways\", \"ec2:DescribeDhcpOptions\", \"ec2:DescribeEgressOnlyInternetGateways\", \"ec2:DescribeIamInstanceProfileAssociations\", \"ec2:DescribeIdentityIdFormat\", \"ec2:DescribeIdFormat\", \"ec2:DescribeImageAttribute\", \"ec2:DescribeImages\", \"ec2:DescribeInstanceAttribute\", \"ec2:DescribeInstances\", \"ec2:DescribeInstanceStatus\", \"ec2:DescribeInstanceTypeOfferings\", \"ec2:DescribeInstanceTypes\", \"ec2:DescribeInternetGateways\", \"ec2:DescribeIpv6Pools\", \"ec2:DescribeKeyPairs\", \"ec2:DescribeLaunchTemplates\", \"ec2:DescribeLocalGatewayRouteTables\", \"ec2:DescribeLocalGatewayRouteTableVirtualInterfaceGroupAssociations\", \"ec2:DescribeLocalGatewayRouteTableVpcAssociations\", \"ec2:DescribeLocalGateways\", \"ec2:DescribeLocalGatewayVirtualInterfaceGroups\", \"ec2:DescribeLocalGatewayVirtualInterfaces\", \"ec2:DescribeManagedPrefixLists\", \"ec2:DescribeNatGateways\", \"ec2:DescribeNetworkAcls\", \"ec2:DescribeNetworkInsightsAnalyses\", \"ec2:DescribeNetworkInsightsPaths\", \"ec2:DescribeNetworkInterfaces\", \"ec2:DescribePlacementGroups\", \"ec2:DescribePrefixLists\", \"ec2:DescribePrincipalIdFormat\", \"ec2:DescribePublicIpv4Pools\", \"ec2:DescribeRegions\", \"ec2:DescribeReservedInstances\", \"ec2:DescribeRouteTables\", \"ec2:DescribeScheduledInstances\", \"ec2:DescribeSecurityGroupReferences\", \"ec2:DescribeSecurityGroupRules\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeSnapshotAttribute\", \"ec2:DescribeSnapshots\", \"ec2:DescribeSpotFleetInstances\", \"ec2:DescribeStaleSecurityGroups\", \"ec2:DescribeSubnets\", \"ec2:DescribeTags\", \"ec2:DescribeTransitGatewayAttachments\", \"ec2:DescribeTransitGatewayConnectPeers\", \"ec2:DescribeTransitGatewayConnects\", \"ec2:DescribeTransitGatewayMulticastDomains\", \"ec2:DescribeTransitGatewayPeeringAttachments\", \"ec2:DescribeTransitGatewayRouteTables\", \"ec2:DescribeTransitGateways\", \"ec2:DescribeTransitGatewayVpcAttachments\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:DescribeVpcAttribute\", \"ec2:DescribeVpcClassicLink\", \"ec2:DescribeVpcClassicLinkDnsSupport\", \"ec2:DescribeVpcEndpointConnectionNotifications\", \"ec2:DescribeVpcEndpointConnections\", \"ec2:DescribeVpcEndpoints\", \"ec2:DescribeVpcEndpointServiceConfigurations\", \"ec2:DescribeVpcEndpointServicePermissions\", \"ec2:DescribeVpcEndpointServices\", \"ec2:DescribeVpcPeeringConnections\", \"ec2:DescribeVpcs\", \"ec2:DescribeVpnConnections\", \"ec2:DescribeVpnGateways\", \"ec2:GetAssociatedIpv6PoolCidrs\", \"ec2:GetConsoleOutput\", 
\"ec2:GetManagedPrefixListEntries\", \"ec2:GetSerialConsoleAccessStatus\", \"ec2:GetTransitGatewayAttachmentPropagations\", \"ec2:GetTransitGatewayMulticastDomainAssociations\", \"ec2:GetTransitGatewayPrefixListReferences\", \"ec2:GetTransitGatewayRouteTableAssociations\", \"ec2:GetTransitGatewayRouteTablePropagations\", \"ec2:ModifyInstanceAttribute\", \"ec2:RebootInstances\", \"ec2:RunInstances\", \"ec2:SearchLocalGatewayRoutes\", \"ec2:SearchTransitGatewayMulticastGroups\", \"ec2:SearchTransitGatewayRoutes\", \"ec2:StartInstances\", \"ec2:StartNetworkInsightsAnalysis\", \"ec2:StopInstances\", \"ec2:TerminateInstances\", \"elasticloadbalancing:ConfigureHealthCheck\", \"elasticloadbalancing:DescribeAccountLimits\", \"elasticloadbalancing:DescribeInstanceHealth\", \"elasticloadbalancing:DescribeListenerCertificates\", \"elasticloadbalancing:DescribeListeners\", \"elasticloadbalancing:DescribeLoadBalancerAttributes\", \"elasticloadbalancing:DescribeLoadBalancerPolicies\", \"elasticloadbalancing:DescribeLoadBalancerPolicyTypes\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeRules\", \"elasticloadbalancing:DescribeSSLPolicies\", \"elasticloadbalancing:DescribeTags\", \"elasticloadbalancing:DescribeTargetGroupAttributes\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"iam:GetRole\", \"iam:ListRoles\", \"kms:CreateGrant\", \"route53:GetHostedZone\", \"route53:GetHostedZoneCount\", \"route53:ListHostedZones\", \"route53:ListHostedZonesByName\", \"route53:ListResourceRecordSets\", \"s3:GetBucketTagging\", \"s3:GetObjectAcl\", \"s3:GetObjectTagging\", \"s3:ListAllMyBuckets\", \"sts:DecodeAuthorizationMessage\", \"tiros:CreateQuery\", \"tiros:GetQueryAnswer\", \"tiros:GetQueryExplanation\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\" ], \"Resource\": [ \"arn:aws:s3:::managed-velero*\", \"arn:aws:s3:::*image-registry*\" ] } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::%{aws_account_id}:role/RH-Managed-OpenShift-Installer\" ] }, \"Action\": [ \"sts:AssumeRole\" ], \"Condition\": {\"StringEquals\": {\"sts:ExternalId\": \"%{ocm_organization_id}\"}} } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::%{aws_account_id}:role/RH-Managed-OpenShift-Installer\" ] }, \"Action\": [ \"sts:AssumeRole\" ] } ] }", "I: Attached trust policy to role 'testrole-Worker-Role(https://console.aws.amazon.com/iam/home?#/roles/testrole-Worker-Role)': ******************", "I: Attached trust policy to role 'test-Support-Role': {\"Version\": \"2012-10-17\", \"Statement\": [{\"Action\": [\"sts:AssumeRole\"], \"Effect\": \"Allow\", \"Principal\": {\"AWS\": [\"arn:aws:iam::000000000000:role/RH-Technical-Support-00000000\"]}}]}", "I: Attached policy 'ROSASRESupportPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSASRESupportPolicy)' to role 'test-HCP-ROSA-Support-Role(https://console.aws.amazon.com/iam/home?#/roles/test-HCP-ROSA-Support-Role)'", "I: Attached policy 'arn:aws:iam::000000000000:policy/testrole-Worker-Role-Policy' to role 'testrole-Worker-Role(https://console.aws.amazon.com/iam/home?#/roles/testrole-Worker-Role)'", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"elasticloadbalancing:DescribeLoadBalancers\", \"route53:ListHostedZones\", \"route53:ListTagsForResources\", 
\"route53:ChangeResourceRecordSets\", \"tag:GetResources\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AttachVolume\", \"ec2:CreateSnapshot\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:DeleteSnapshot\", \"ec2:DeleteTags\", \"ec2:DeleteVolume\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeInstances\", \"ec2:DescribeSnapshots\", \"ec2:DescribeTags\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumesModifications\", \"ec2:DetachVolume\", \"ec2:EnableFastSnapshotRestores\", \"ec2:ModifyVolume\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:CreateTags\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeDhcpOptions\", \"ec2:DescribeImages\", \"ec2:DescribeInstances\", \"ec2:DescribeInternetGateways\", \"ec2:DescribeInstanceTypes\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeRegions\", \"ec2:DescribeSubnets\", \"ec2:DescribeVpcs\", \"ec2:RunInstances\", \"ec2:TerminateInstances\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", \"iam:PassRole\", \"iam:CreateServiceLinkedRole\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"kms:Decrypt\", \"kms:Encrypt\", \"kms:GenerateDataKey\", \"kms:GenerateDataKeyWithoutPlainText\", \"kms:DescribeKey\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"kms:RevokeGrant\", \"kms:CreateGrant\", \"kms:ListGrants\" ], \"Resource\": \"*\", \"Condition\": { \"Bool\": { \"kms:GrantIsForAWSResource\": true } } } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"iam:GetUser\", \"iam:GetUserPolicy\", \"iam:ListAccessKeys\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutBucketPublicAccessBlock\", \"s3:GetBucketPublicAccessBlock\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": \"*\" } ] }", "rosa create account-roles --mode manual", "aws iam create-role --role-name ManagedOpenShift-Installer-Role --assume-role-policy-document file://sts_installer_trust_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=installer aws iam put-role-policy --role-name ManagedOpenShift-Installer-Role --policy-name ManagedOpenShift-Installer-Role-Policy --policy-document file://sts_installer_permission_policy.json aws iam create-role --role-name ManagedOpenShift-ControlPlane-Role --assume-role-policy-document file://sts_instance_controlplane_trust_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=instance_controlplane aws iam put-role-policy --role-name ManagedOpenShift-ControlPlane-Role --policy-name ManagedOpenShift-ControlPlane-Role-Policy --policy-document 
file://sts_instance_controlplane_permission_policy.json aws iam create-role --role-name ManagedOpenShift-Worker-Role --assume-role-policy-document file://sts_instance_worker_trust_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=instance_worker aws iam put-role-policy --role-name ManagedOpenShift-Worker-Role --policy-name ManagedOpenShift-Worker-Role-Policy --policy-document file://sts_instance_worker_permission_policy.json aws iam create-role --role-name ManagedOpenShift-Support-Role --assume-role-policy-document file://sts_support_trust_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=support aws iam put-role-policy --role-name ManagedOpenShift-Support-Role --policy-name ManagedOpenShift-Support-Role-Policy --policy-document file://sts_support_permission_policy.json aws iam create-policy --policy-name ManagedOpenShift-openshift-ingress-operator-cloud-credentials --policy-document file://openshift_ingress_operator_cloud_credentials_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-ingress-operator Key=operator_name,Value=cloud-credentials aws iam create-policy --policy-name ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent --policy-document file://openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-cluster-csi-drivers Key=operator_name,Value=ebs-cloud-credentials aws iam create-policy --policy-name ManagedOpenShift-openshift-machine-api-aws-cloud-credentials --policy-document file://openshift_machine_api_aws_cloud_credentials_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-machine-api Key=operator_name,Value=aws-cloud-credentials aws iam create-policy --policy-name ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede --policy-document file://openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-cloud-credential-operator Key=operator_name,Value=cloud-credential-operator-iam-ro-creds aws iam create-policy --policy-name ManagedOpenShift-openshift-image-registry-installer-cloud-creden --policy-document file://openshift_image_registry_installer_cloud_credentials_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-image-registry Key=operator_name,Value=installer-cloud-credentials", "rosa create account-roles --mode auto", "I: Creating roles using 'arn:aws:iam::<ARN>:user/<UserID>' ? Create the 'ManagedOpenShift-Installer-Role' role? Yes I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-Installer-Role' ? Create the 'ManagedOpenShift-ControlPlane-Role' role? Yes I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-ControlPlane-Role' ? Create the 'ManagedOpenShift-Worker-Role' role? 
Yes I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-Worker-Role' ? Create the 'ManagedOpenShift-Support-Role' role? Yes I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-Support-Role' ? Create the operator policies? Yes I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-cloud-network-config-controller-cloud' I: To create a cluster with these roles, run the following command: rosa create cluster --sts", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"autoscaling:DescribeAutoScalingGroups\", \"ec2:AllocateAddress\", \"ec2:AssociateAddress\", \"ec2:AttachNetworkInterface\", \"ec2:AuthorizeSecurityGroupEgress\", \"ec2:AuthorizeSecurityGroupIngress\", \"ec2:CopyImage\", \"ec2:CreateNetworkInterface\", \"ec2:CreateSecurityGroup\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:DeleteNetworkInterface\", \"ec2:DeleteSecurityGroup\", \"ec2:DeleteSnapshot\", \"ec2:DeleteTags\", \"ec2:DeleteVolume\", \"ec2:DeregisterImage\", \"ec2:DescribeAccountAttributes\", \"ec2:DescribeAddresses\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeDhcpOptions\", \"ec2:DescribeImages\", \"ec2:DescribeInstanceAttribute\", \"ec2:DescribeInstanceCreditSpecifications\", \"ec2:DescribeInstances\", \"ec2:DescribeInstanceStatus\", \"ec2:DescribeInstanceTypeOfferings\", \"ec2:DescribeInstanceTypes\", \"ec2:DescribeInternetGateways\", \"ec2:DescribeKeyPairs\", \"ec2:DescribeNatGateways\", \"ec2:DescribeNetworkAcls\", \"ec2:DescribeNetworkInterfaces\", \"ec2:DescribePrefixLists\", \"ec2:DescribeRegions\", \"ec2:DescribeReservedInstancesOfferings\", \"ec2:DescribeRouteTables\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeSecurityGroupRules\", \"ec2:DescribeSubnets\", \"ec2:DescribeTags\", \"ec2:DescribeVolumes\", \"ec2:DescribeVpcAttribute\", \"ec2:DescribeVpcClassicLink\", \"ec2:DescribeVpcClassicLinkDnsSupport\", \"ec2:DescribeVpcEndpoints\", \"ec2:DescribeVpcs\", \"ec2:GetConsoleOutput\", \"ec2:GetEbsDefaultKmsKeyId\", \"ec2:ModifyInstanceAttribute\", \"ec2:ModifyNetworkInterfaceAttribute\", \"ec2:ReleaseAddress\", \"ec2:RevokeSecurityGroupEgress\", \"ec2:RevokeSecurityGroupIngress\", \"ec2:RunInstances\", \"ec2:StartInstances\", \"ec2:StopInstances\", \"ec2:TerminateInstances\", \"elasticloadbalancing:AddTags\", \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\", \"elasticloadbalancing:AttachLoadBalancerToSubnets\", \"elasticloadbalancing:ConfigureHealthCheck\", \"elasticloadbalancing:CreateListener\", \"elasticloadbalancing:CreateLoadBalancer\", \"elasticloadbalancing:CreateLoadBalancerListeners\", \"elasticloadbalancing:CreateTargetGroup\", \"elasticloadbalancing:DeleteLoadBalancer\", \"elasticloadbalancing:DeleteTargetGroup\", \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\", \"elasticloadbalancing:DeregisterTargets\", 
\"elasticloadbalancing:DescribeInstanceHealth\", \"elasticloadbalancing:DescribeListeners\", \"elasticloadbalancing:DescribeLoadBalancerAttributes\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeTags\", \"elasticloadbalancing:DescribeTargetGroupAttributes\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:ModifyLoadBalancerAttributes\", \"elasticloadbalancing:ModifyTargetGroup\", \"elasticloadbalancing:ModifyTargetGroupAttributes\", \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\", \"elasticloadbalancing:SetSecurityGroups\", \"iam:AddRoleToInstanceProfile\", \"iam:CreateInstanceProfile\", \"iam:DeleteInstanceProfile\", \"iam:GetInstanceProfile\", \"iam:TagInstanceProfile\", \"iam:GetRole\", \"iam:GetRolePolicy\", \"iam:GetUser\", \"iam:ListAttachedRolePolicies\", \"iam:ListInstanceProfiles\", \"iam:ListInstanceProfilesForRole\", \"iam:ListRolePolicies\", \"iam:ListRoles\", \"iam:ListUserPolicies\", \"iam:ListUsers\", \"iam:PassRole\", \"iam:RemoveRoleFromInstanceProfile\", \"iam:SimulatePrincipalPolicy\", \"iam:TagRole\", \"iam:UntagRole\", \"route53:ChangeResourceRecordSets\", \"route53:ChangeTagsForResource\", \"route53:CreateHostedZone\", \"route53:DeleteHostedZone\", \"route53:GetAccountLimit\", \"route53:GetChange\", \"route53:GetHostedZone\", \"route53:ListHostedZones\", \"route53:ListHostedZonesByName\", \"route53:ListResourceRecordSets\", \"route53:ListTagsForResource\", \"route53:UpdateHostedZoneComment\", \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:DeleteObject\", \"s3:GetAccelerateConfiguration\", \"s3:GetBucketAcl\", \"s3:GetBucketCORS\", \"s3:GetBucketLocation\", \"s3:GetBucketLogging\", \"s3:GetBucketObjectLockConfiguration\", \"s3:GetBucketPolicy\", \"s3:GetBucketRequestPayment\", \"s3:GetBucketTagging\", \"s3:GetBucketVersioning\", \"s3:GetBucketWebsite\", \"s3:GetEncryptionConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetObject\", \"s3:GetObjectAcl\", \"s3:GetObjectTagging\", \"s3:GetObjectVersion\", \"s3:GetReplicationConfiguration\", \"s3:ListBucket\", \"s3:ListBucketVersions\", \"s3:PutBucketAcl\", \"s3:PutBucketPolicy\", \"s3:PutBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:PutObject\", \"s3:PutObjectAcl\", \"s3:PutObjectTagging\", \"servicequotas:GetServiceQuota\", \"servicequotas:ListAWSDefaultServiceQuotas\", \"sts:AssumeRole\", \"sts:AssumeRoleWithWebIdentity\", \"sts:GetCallerIdentity\", \"tag:GetResources\", \"tag:UntagResources\", \"kms:DescribeKey\", \"cloudwatch:GetMetricData\", \"ec2:CreateRoute\", \"ec2:DeleteRoute\", \"ec2:CreateVpcEndpoint\", \"ec2:DeleteVpcEndpoints\", \"ec2:CreateVpcEndpointServiceConfiguration\", \"ec2:DeleteVpcEndpointServiceConfigurations\", \"ec2:DescribeVpcEndpointServiceConfigurations\", \"ec2:DescribeVpcEndpointServicePermissions\", \"ec2:DescribeVpcEndpointServices\", \"ec2:ModifyVpcEndpointServicePermissions\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"secretsmanager:GetSecretValue\" ], \"Resource\": \"*\", \"Condition\": { \"StringEquals\": { \"aws:ResourceTag/red-hat-managed\": \"true\" } } } ] }", "curl -o ./rosa-installer-core.json https://raw.githubusercontent.com/openshift/managed-cluster-config/master/resources/sts/4.18/sts_installer_core_permission_boundary_policy.json", "aws iam create-policy --policy-name rosa-core-permissions-boundary-policy --policy-document 
file://./rosa-installer-core.json --description \"ROSA installer core permission boundary policy, the minimum permission set, allows BYO-VPC, disallows PrivateLink\"", "{ \"Policy\": { \"PolicyName\": \"rosa-core-permissions-boundary-policy\", \"PolicyId\": \"<Policy ID>\", \"Arn\": \"arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy\", \"Path\": \"/\", \"DefaultVersionId\": \"v1\", \"AttachmentCount\": 0, \"PermissionsBoundaryUsageCount\": 0, \"IsAttachable\": true, \"CreateDate\": \"<CreateDate>\", \"UpdateDate\": \"<UpdateDate>\" } }", "aws iam put-role-permissions-boundary --role-name ManagedOpenShift-Installer-Role --permissions-boundary arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy", "aws iam get-role --role-name ManagedOpenShift-Installer-Role --output text | grep PERMISSIONSBOUNDARY", "PERMISSIONSBOUNDARY arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy Policy", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:ModifyVpcEndpointServiceConfiguration\", \"route53:ListHostedZonesByVPC\", \"route53:CreateVPCAssociationAuthorization\", \"route53:AssociateVPCWithHostedZone\", \"route53:DeleteVPCAssociationAuthorization\", \"route53:DisassociateVPCFromHostedZone\", \"route53:ChangeResourceRecordSets\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AssociateDhcpOptions\", \"ec2:AssociateRouteTable\", \"ec2:AttachInternetGateway\", \"ec2:CreateDhcpOptions\", \"ec2:CreateInternetGateway\", \"ec2:CreateNatGateway\", \"ec2:CreateRouteTable\", \"ec2:CreateSubnet\", \"ec2:CreateVpc\", \"ec2:DeleteDhcpOptions\", \"ec2:DeleteInternetGateway\", \"ec2:DeleteNatGateway\", \"ec2:DeleteRouteTable\", \"ec2:DeleteSubnet\", \"ec2:DeleteVpc\", \"ec2:DetachInternetGateway\", \"ec2:DisassociateRouteTable\", \"ec2:ModifySubnetAttribute\", \"ec2:ModifyVpcAttribute\", \"ec2:ReplaceRouteTableAssociation\" ], \"Resource\": \"*\" } ] }", "rosa create operator-roles --mode manual --cluster <cluster_name>", "aws iam create-role --role-name <cluster_name>-<hash>-openshift-cluster-csi-drivers-ebs-cloud-credent --assume-role-policy-document file://operator_cluster_csi_drivers_ebs_cloud_credentials_policy.json --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-cluster-csi-drivers Key=operator_name,Value=ebs-cloud-credentials aws iam attach-role-policy --role-name <cluster_name>-<hash>-openshift-cluster-csi-drivers-ebs-cloud-credent --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent aws iam create-role --role-name <cluster_name>-<hash>-openshift-machine-api-aws-cloud-credentials --assume-role-policy-document file://operator_machine_api_aws_cloud_credentials_policy.json --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-machine-api Key=operator_name,Value=aws-cloud-credentials aws iam attach-role-policy --role-name <cluster_name>-<hash>-openshift-machine-api-aws-cloud-credentials --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials aws iam create-role --role-name <cluster_name>-<hash>-openshift-cloud-credential-operator-cloud-crede --assume-role-policy-document 
file://operator_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-cloud-credential-operator Key=operator_name,Value=cloud-credential-operator-iam-ro-creds aws iam attach-role-policy --role-name <cluster_name>-<hash>-openshift-cloud-credential-operator-cloud-crede --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede aws iam create-role --role-name <cluster_name>-<hash>-openshift-image-registry-installer-cloud-creden --assume-role-policy-document file://operator_image_registry_installer_cloud_credentials_policy.json --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-image-registry Key=operator_name,Value=installer-cloud-credentials aws iam attach-role-policy --role-name <cluster_name>-<hash>-openshift-image-registry-installer-cloud-creden --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden aws iam create-role --role-name <cluster_name>-<hash>-openshift-ingress-operator-cloud-credentials --assume-role-policy-document file://operator_ingress_operator_cloud_credentials_policy.json --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-ingress-operator Key=operator_name,Value=cloud-credentials aws iam attach-role-policy --role-name <cluster_name>-<hash>-openshift-ingress-operator-cloud-credentials --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials", "rosa create oidc-provider --mode manual --cluster <cluster_name>", "aws iam create-open-id-connect-provider --url https://oidc.op1.openshiftapps.com/<oidc_config_id> \\ 1 --client-id-list openshift sts.<aws_region>.amazonaws.com --thumbprint-list <thumbprint> 2", "rosa create oidc-provider --oidc-config-id <oidc_config_id> --mode auto -y", "I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName' I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/241rh9ql5gpu99d7leokhvkp8icnalpf'", "rosa create oidc-config --mode=auto --yes", "? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes I: Setting up managed OIDC configuration I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice: rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b If you are going to create a Hosted Control Plane cluster please include '--hosted-cp' I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName' ? Create the OIDC provider? 
Yes I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'", "export OIDC_ID=<oidc_config_id> 1", "echo USDOIDC_ID", "13cdr6b", "rosa list oidc-config", "ID MANAGED ISSUER URL SECRET ARN 2330dbs0n8m3chkkr25gkkcd8pnj3lk2 true https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2 233hvnrjoqu14jltk6lhbhf2tj11f8un false https://oidc-r7u1.s3.us-east-1.amazonaws.com aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN", "rosa create oidc-config --raw-files", "rosa create oidc-config --mode=<auto|manual>", "rosa create oidc-config --managed", "W: For a managed OIDC Config only auto mode is supported. However, you may choose the provider creation mode ? OIDC Provider creation mode: auto I: Setting up managed OIDC configuration I: Please run the following command to create a cluster with this oidc config rosa create cluster --sts --oidc-config-id 233jnu62i9aphpucsj9kueqlkr1vcgra I: Creating OIDC provider using 'arn:aws:iam::242819244:user/userName' ? Create the OIDC provider? Yes I: Created OIDC provider with ARN 'arn:aws:iam::242819244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/233jnu62i9aphpucsj9kueqlkr1vcgra'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/introduction_to_rosa/rosa-sts-about-iam-resources
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) AWS clusters in connected or disconnected environments, along with out-of-the-box support for proxy environments. Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter and then follow the deployment process for your environment based on your requirements: Deploy using dynamic storage devices Deploy standalone Multicloud Object Gateway component
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_amazon_web_services/preface-aws
Chapter 2. GraphQL API
Chapter 2. GraphQL API The GraphQL API endpoint, /api/v3/graphql , automatically runs shorter and simpler queries against target JVMs. These queries can run against a target JVM's active and archived recordings. Additionally, the API can run queries against general Cryostat archives. You can customize the queries to automate the following tasks for active or archived recordings: Archive Delete Start Stop When creating a custom query, you must include specific information for the GraphQL API endpoint in your query. A POST request then handles that information and sends it to Cryostat. The following example specifies information for the GraphQL API: The GraphQL API is more powerful than the HTTP REST API, which has limited workload capabilities. For example, the HTTP REST API would require you to create an API request for each copy of a recording that you want to start on each scaled replica that is inside a container on OpenShift. The GraphQL API can achieve this task in one API request, which improves the API's performance and reduces network traffic for your Cryostat instance. The HTTP REST API also requires more user intervention, such as writing custom clients to parse API JSON responses when performing iterative actions on response data. The GraphQL API does not require this. Additional resources See Introduction to GraphQL (GraphQL) 2.1. Custom query creation with the GraphQL API You can use an HTTP client, such as HTTPie , to interact with the GraphQL API for generating custom queries. When using the API to create a query, you must specify values for data types and fields so that the query can accurately locate the data you require. Consider a use case where you could automate a workflow that performs multiple operations in a single query. As an example, an HTTPie client request could send a query to Cryostat so that Cryostat could perform the following tasks: Take snapshot recordings of all your target JVM applications. Copy recording information from each application into the Cryostat archive. Create an automated rule that automatically starts a continuous monitoring recording for any detected target JVM applications. Note Many querying possibilities exist, so ensure you accurately determine values for the data types and fields of a query. Otherwise, you might get query results that do not match your requirements. The following examples demonstrate using the HTTP client to interact with the GraphQL API for generating a simple query and a complex query on Cryostat data. Example of a simple query for determining all known target JVMs that interact with a Cryostat instance The example sets specific values for the targetNodes element, such as name . After you run the query, the query returns a list of any target JVMs that match the specified criteria. Example of a complex query for determining all target applications visible to a Cryostat instance that belong to a particular pod The example sets the following specific values for the environmentNodes element, and the example does not return JVM target applications: <application_pod_name> : ensures the query targets a specific pod that operates in the same namespace as Cryostat. descendantTargets : provides an array of JVM target objects. 
doStartRecording : the GraphQL API starts a JFR recording for each target JVM that the query lists. After you run the query, the query returns information about the JFR recording that you started, such as a list of active application nodes. For Red Hat OpenShift, this query would return Deployment and DeploymentConfig API objects, and pods that interact with Cryostat. The complex query demonstrates how the GraphQL API can perform a single API request that returns any JVM objects that interact with Cryostat and Red Hat OpenShift. The API request then starts a JFR recording on those returned objects.
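To make the POST request flow concrete, the following sketch sends the simple targetNodes query shown in this chapter to the /api/v3/graphql endpoint using Java's built-in java.net.http.HttpClient (Java 11 or later). The base URL, the port, and the absence of an Authorization header are assumptions for illustration only; adapt them to the address and authentication scheme of your own Cryostat deployment.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CryostatGraphQlQuery {

    public static void main(String[] args) throws Exception {
        // Hypothetical base URL; replace with the address of your Cryostat instance.
        String cryostatUrl = "http://localhost:8181";

        // The simple targetNodes query, wrapped in the standard {"query": "..."}
        // JSON envelope that GraphQL endpoints accept over HTTP.
        String body = "{\"query\": \"{ targetNodes { name nodeType labels { key value } alias connectUrl } }\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(cryostatUrl + "/api/v3/graphql"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // Send the request and print the raw JSON response for inspection.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}

An HTTPie command such as the ones shown in the examples that follow performs the same request with less code; this sketch is only meant to show what the HTTP exchange looks like from a custom client.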
[ "POST /api/beta/graphql HTTP/1.1 Accept: application/json, / ;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Content-Length: 171 Content-Type: application/json Host: localhost:8181 User-Agent: HTTPie/3.1.0", "https :8181/api/v1/targets HTTP/1.1 200 OK content-encoding: gzip content-length: 223 content-type: application/json { targetNodes { name nodeType labels { key value } alias connectUrl } }", "https -v :8181/api/v3/graphql query=@graphql/target-nodes-query.graphql POST /api/v3/graphql HTTP/1.1 Accept: application/json, / ;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Content-Length: 171 Content-Type: application/json Host: localhost:8181 User-Agent: HTTPie/2.6.0 { environmentNodes(filter: { name: \"<application_pod_name>\" }) { descendantTargets { target { doStartRecording(recording: { name: \"myrecording\", template: \"Profiling\", templateType: \"TARGET\", duration: 30 }) { name state } } } } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/configuring_advanced_cryostat_configurations/graphql-api_assembly_pluggable--discovery-api
Chapter 4. Setting up an authentication method for RHOSP
Chapter 4. Setting up an authentication method for RHOSP The high availability fence agents and resource agents support three authentication methods for communicating with RHOSP: Authentication with a clouds.yaml configuration file Authentication with an OpenRC environment script Authentication with a username and password through Pacemaker After determining the authentication method to use for the cluster, specify the appropriate authentication parameters when creating a fencing or cluster resource. 4.1. Authenticating with RHOSP by using a clouds.yaml file The procedures in this document that use a clouds.yaml file for authentication use the clouds.yaml file shown in this procedure. Those procedures specify ha-example for the cloud= parameter, as defined in this file. Procedure On each node that will be part of your cluster, create a clouds.yaml file, as in the following example. For information about creating a clouds.yaml file, see the Users and Identity Management Guide . Test whether authentication is successful and you have access to the RHOSP API with the following basic RHOSP command, substituting the name of the cloud you specified in the clouds.yaml file you created for ha-example . If this command does not display a server list, contact your RHOSP administrator. Specify the cloud parameter when creating a cluster resource or a fencing resource. 4.2. Authenticating with RHOSP by using an OpenRC environment script To use an OpenRC environment script to authenticate with RHOSP, perform the following steps. Procedure On each node that will be part of your cluster, configure an OpenRC environment script. For information about creating an OpenRC environment script, see Set environment variables using the OpenStack RC file . Test whether authentication is successful and you have access to the RHOSP API with the following basic RHOSP command. If this command does not display a server list, contact your RHOSP administrator. Specify the openrc parameter when creating a cluster resource or a fencing resource. 4.3. Authenticating with RHOSP by means of a username and password To authenticate with RHOSP by means of a username and password, specify the username , password , and auth_url parameters for a cluster resource or a fencing resource when you create the resource. Additional authentication parameters may be required, depending on the RHOSP configuration. The RHOSP administrator provides the authentication parameters to use.
[ "cat .config/openstack/clouds.yaml clouds: ha-example: auth: auth_url: https://<ip_address>:13000/ project_name: rainbow username: unicorns password: <password> user_domain_name: Default project_domain_name: Default <. . . additional options . . .> region_name: regionOne verify: False", "openstack --os-cloud=ha-example server list", "openstack server list" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/authentication-methods-for-rhosp_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/providing-feedback-on-red-hat-documentation_rhodf
Chapter 104. File Component
Chapter 104. File Component Available as of Camel version 1.0 The File component provides access to file systems, allowing files to be processed by any other Camel Components or messages from other components to be saved to disk. 104.1. URI format file:directoryName[?options] or file://directoryName[?options] Where directoryName represents the underlying file directory. You can append query options to the URI in the following format, ?option=value&option=value&... Only directories Camel supports only endpoints configured with a starting directory. So the directoryName must be a directory. If you want to consume a single file only, you can use the fileName option, e.g. by setting fileName=thefilename . Also, the starting directory must not contain dynamic expressions with USD{ } placeholders. Again use the fileName option to specify the dynamic part of the filename. Warning Avoid reading files currently being written by another application Beware the JDK File IO API is a bit limited in detecting whether another application is currently writing/copying a file. And the implementation can be different depending on OS platform as well. This could lead to Camel thinking the file is not locked by another process and starting to consume it. Therefore you have to do your own investigation into what suits your environment. To help with this Camel provides different readLock options and the doneFileName option that you can use. See also the section Consuming files from folders where others drop files directly . 104.2. URI Options The File component has no options. The File endpoint is configured using URI syntax, file:directoryName , with the following path and query parameters: 104.2.1. Path Parameters (1 parameter): Name Description Default Type directoryName Required The starting directory File 104.2.2. Query Parameters (87 parameters): Name Description Default Type charset (common) This option is used to specify the encoding of the file. You can use this on the consumer, to specify the encodings of the files, which allows Camel to know the charset it should use to load the file content in case the file content is being accessed. Likewise when writing a file, you can use this option to specify which charset to write the file as well. Do mind that when writing the file Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages. String doneFileName (common) Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name, or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name, or you can use dynamic placeholders. The done file is always expected in the same folder as the original file. Only USDfile.name and USDfile.name.noext are supported as dynamic placeholders. String fileName (common) Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it takes precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language.
If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-USDdate:now:yyyyMMdd.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids to temporary store CamelFileName and have to restore it afterwards. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN/ERROR level and ignored. false boolean delete (consumer) If true, the file will be deleted after it is processed successfully. false boolean moveFailed (consumer) Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again. String noop (consumer) If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again. false boolean preMove (consumer) Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order. String preSort (consumer) When pre-sort is enabled then the consumer will sort the file and directory names during polling, that was retrieved from the file system. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter, and accept files to process by Camel. This option is default=false meaning disabled. false boolean recursive (consumer) If a directory, will look for files in all the sub-directories as well. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean directoryMustExist (consumer) Similar to startingDirectoryMustExist but this applies during polling recursive sub directories. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions, that will be logged at WARN/ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the default exchange pattern when creating an exchange. ExchangePattern extendedAttributes (consumer) To define which file attributes of interest. Like posix:permissions,posix:owner,basic:lastAccessTime, it supports basic wildcard like posix:, basic:lastAccessTime String inProgressRepository (consumer) A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used. 
IdempotentRepository localWorkDirectory (consumer) When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory. String onCompletionException Handler (consumer) To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happens during the file on completion process where the consumer does either a commit or rollback. The default implementation will log any exception at WARN level and ignore. ExceptionHandler pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. In other words the error occurred while the polling was gathering information, for instance access to a file network failed so Camel cannot access it to scan for files. The default implementation will log the caused exception at WARN level and ignore it. PollingConsumerPoll Strategy probeContentType (consumer) Whether to enable probing of the content type. If enable then the consumer uses Files#probeContentType(java.nio.file.Path) to determine the content-type of the file, and store that as a header with key Exchange#FILE_CONTENT_TYPE on the Message. false boolean processStrategy (consumer) A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply. GenericFileProcess Strategy startingDirectoryMustExist (consumer) Whether the starting directory must exist. Mind that the autoCreate option is default enabled, which means the starting directory is normally auto created if it doesn't exist. You can disable autoCreate and enable this to ensure the starting directory must exist. Will thrown an exception if the directory doesn't exist. false boolean fileExist (producer) What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. Append - adds content to the existing file. Fail - throws a GenericFileOperationException, indicating that there is already an existing file. Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. Move - option requires to use the moveExisting option to be configured as well. The option eagerDeleteTargetFile can be used to control what to do if an moving the file, and there exists already an existing file, otherwise causing the move operation to fail. The Move option will move any existing files, before writing the target file. TryRename is only applicable if tempFileName option is in use. This allows to try renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers. Override GenericFileExist flatten (producer) Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you eg write the files to another directory they will be written in a single directory. 
Setting this to true on the producer enforces that any file name in CamelFileName header will be stripped for any leading paths. false boolean jailStartingDirectory (producer) Used for jailing (restricting) writing files to the starting directory (and sub) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secured out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders. true boolean moveExisting (producer) Expression (such as File Language) used to compute file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice the file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on current dir as base. String tempFileName (producer) The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. String tempPrefix (producer) This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files. String allowNullBody (producer) Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged. false boolean chmod (producer) Specify the file permissions which is sent by the producer, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it. String chmodDirectory (producer) Specify the directory permissions used when the producer creates missing directories, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it. String eagerDeleteTargetFile (producer) Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. This ensure the target file is only deleted until the very last moment, just before the temp file is being renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If this option copyAndDeleteOnRenameFails false, then an exception will be thrown if an existing file existed, if its true, then the existing file is deleted before the move operation. true boolean forceWrites (producer) Whether to force syncing writes to the file system. You can turn this off if you do not want this level of guarantee, for example if writing to logs / audit logs etc; this would yield better performance. 
true boolean keepLastModified (producer) Will keep the last modified timestamp from the source file (if any). Will use the Exchange.FILE_LAST_MODIFIED header to located the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers. false boolean moveExistingFileStrategy (producer) Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided FileMoveExisting Strategy autoCreate (advanced) Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to. true boolean bufferSize (advanced) Write buffer sized in bytes. 131072 int copyAndDeleteOnRenameFail (advanced) Whether to fallback and do a copy and delete file, in case the file could not be renamed directly. This option is not available for the FTP component. true boolean renameUsingCopy (advanced) Perform rename operations using a copy and delete strategy. This is primarily used in environments where the regular rename operation is unreliable (e.g. across different file systems or networks). This option takes precedence over the copyAndDeleteOnRenameFail parameter that will automatically fall back to the copy and delete strategy, but only after additional delays. false boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean antExclude (filter) Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format. String antFilterCaseSensitive (filter) Sets case sensitive flag on ant filter true boolean antInclude (filter) Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format. String eagerMaxMessagesPerPoll (filter) Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager then the limit is during the scanning of files. Where as false would scan all files, and then perform sorting. Setting this option to false allows for sorting all files first, and then limit the poll. Mind that this requires a higher memory usage as all file details are in memory to perform the sorting. true boolean exclude (filter) Is used to exclude files, if filename matches the regex pattern (matching is case in-senstive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris String filter (filter) Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if filter returns false in its accept() method. GenericFileFilter filterDirectory (filter) Filters the directory based on Simple language. For example to filter on current date, you can use a simple date pattern such as USDdate:now:yyyMMdd String filterFile (filter) Filters the file based on Simple language. 
For example to filter on file size, you can use USDfile:size 5000 String idempotent (filter) Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again. false Boolean idempotentKey (filter) To use a custom idempotent key. By default the absolute path of the file is used. You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=USDfile:name-USDfile:size String idempotentRepository (filter) A pluggable repository org.apache.camel.spi.IdempotentRepository which by default use MemoryMessageIdRepository if none is specified and idempotent is true. IdempotentRepository include (filter) Is used to include files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris String maxDepth (filter) The maximum depth to traverse when recursively processing a directory. 2147483647 int maxMessagesPerPoll (filter) To define a maximum messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disabled it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to allow to scan all files first and then sort afterwards. int minDepth (filter) The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. Using minDepth=2 means the first sub directory. int move (filter) Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done. String exclusiveReadLockStrategy (lock) Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation. GenericFileExclusive ReadLockStrategy readLock (lock) Used by consumer, to only poll the files if it has exclusive read-lock on the file (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. This option provides the build in strategies: none - No read lock is in use markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component changed - Changed is using file length/modification timestamp to detect whether the file is currently being copied or not. Will at least use 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. fileLock - is for using java.nio.channels.FileLock. This option is not avail for the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. 
rename - rename is for using a try to rename the file as a test if we can get exclusive read-lock. idempotent - (only for file component) idempotent is for using a idempotentRepository as the read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. idempotent-changed - (only for file component) idempotent-changed is for using a idempotentRepository and changed as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. idempotent-rename - (only for file component) idempotent-rename is for using a idempotentRepository and rename as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. Notice: The various read locks is not all suited to work in clustered mode, where concurrent consumers on different nodes is competing for the same files on a shared file system. The markerFile using a close to atomic operation to create the empty marker file, but its not guaranteed to work in a cluster. The fileLock may work better but then the file system need to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as Hazelcast Component or Infinispan. none String readLockCheckInterval (lock) Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. 1000 long readLockDeleteOrphanLock Files (lock) Whether or not read lock with marker files should upon startup delete any orphan read lock files, which may have been left on the file system, if Camel was not properly shutdown (such as a JVM crash). If turning this option to false then any orphaned lock file will cause Camel to not attempt to pickup that file, this could also be due another node is concurrently reading files from the same shared directory. true boolean readLockIdempotentRelease Async (lock) Whether the delayed release task should be synchronous or asynchronous. See more details at the readLockIdempotentReleaseDelay option. false boolean readLockIdempotentRelease AsyncPoolSize (lock) The number of threads in the scheduled thread pool when using asynchronous release tasks. Using a default of 1 core threads should be sufficient in almost all use-cases, only set this to a higher value if either updating the idempotent repository is slow, or there are a lot of files to process. This option is not in-use if you use a shared thread pool by configuring the readLockIdempotentReleaseExecutorService option. See more details at the readLockIdempotentReleaseDelay option. int readLockIdempotentRelease Delay (lock) Whether to delay the release task for a period of millis. 
This can be used to delay the release tasks to expand the window when a file is regarded as read-locked, in an active/active cluster scenario with a shared idempotent repository, to ensure other nodes cannot potentially scan and acquire the same file, due to race-conditions. By expanding the time-window of the release tasks helps prevents these situations. Note delaying is only needed if you have configured readLockRemoveOnCommit to true. int readLockIdempotentRelease ExecutorService (lock) To use a custom and shared thread pool for asynchronous release tasks. See more details at the readLockIdempotentReleaseDelay option. ScheduledExecutor Service readLockLoggingLevel (lock) Logging level used when a read lock could not be acquired. By default a WARN is logged. You can change this level, for example to OFF to not have any logging. This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename. DEBUG LoggingLevel readLockMarkerFile (lock) Whether to use marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false. For example if you do not want to write marker files to the file systems by the Camel application. true boolean readLockMinAge (lock) This option is applied only for readLock=changed. It allows to specify a minimum age the file must be before attempting to acquire the read lock. For example use readLockMinAge=300s to require the file is at last 5 minutes old. This can speedup the changed read lock as it will only attempt to acquire files which are at least that given age. 0 long readLockMinLength (lock) This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files. 1 long readLockRemoveOnCommit (lock) This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file is succeeded and a commit happens. By default the file is not removed which ensures that any race-condition do not occur so another active node may attempt to grab the file. Instead the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option. false boolean readLockRemoveOnRollback (lock) This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit). true boolean readLockTimeout (lock) Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At poll Camel, will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. 
The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. 10000 long backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. The default value is 500. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. The default value is 1000. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. This option allows you to share a thread pool among multiple consumers. ScheduledExecutor Service scheduler (scheduler) Allow to plugin a custom org.apache.camel.spi.ScheduledPollConsumerScheduler to use as the scheduler for firing when the polling consumer runs. The default implementation uses the ScheduledExecutorService and there is a Quartz2, and Spring based which supports CRON expressions. Notice: If using a custom scheduler then the options for initialDelay, useFixedDelay, timeUnit, and scheduledExecutorService may not be in use. Use the text quartz2 to refer to use the Quartz2 scheduler; and use the text spring to use the Spring based; and use the text #myScheduler to refer to a custom scheduler by its id in the Registry. See Quartz2 page for an example. none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean shuffle (sort) To shuffle the list of files (sort in random order) false boolean sortBy (sort) Built-in sort by using the File Language. Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date. String sorter (sort) Pluggable sorter as a java.util.Comparator class. 
Comparator Tip Default behavior for file producer By default it will override any existing file, if one exist with the same name. 104.3. Move and Delete operations Any move or delete operations is executed after (post command) the routing has completed; so during processing of the Exchange the file is still located in the inbox folder. Lets illustrate this with an example: from("file://inbox?move=.done").to("bean:handleOrder"); When a file is dropped in the inbox folder, the file consumer notices this and creates a new FileExchange that is routed to the handleOrder bean. The bean then processes the File object. At this point in time the file is still located in the inbox folder. After the bean completes, and thus the route is completed, the file consumer will perform the move operation and move the file to the .done sub-folder. The move and the preMove options are considered as a directory name (though if you use an expression such as File Language , or Simple then the result of the expression evaluation is the file name to be used - eg if you set then that's using the File Language which we use return the file name to be used), which can be either relative or absolute. If relative, the directory is created as a sub-folder from within the folder where the file was consumed. By default, Camel will move consumed files to the .camel sub-folder relative to the directory where the file was consumed. If you want to delete the file after processing, the route should be: from("file://inobox?delete=true").to("bean:handleOrder"); We have introduced a pre move operation to move files before they are processed. This allows you to mark which files have been scanned as they are moved to this sub folder before being processed. from("file://inbox?preMove=inprogress").to("bean:handleOrder"); You can combine the pre move and the regular move: from("file://inbox?preMove=inprogress&move=.done").to("bean:handleOrder"); So in this situation, the file is in the inprogress folder when being processed and after it's processed, it's moved to the .done folder. 104.4. Fine grained control over Move and PreMove option The move and preMove options are Expression-based, so we have the full power of the File Language to do advanced configuration of the directory and name pattern. Camel will, in fact, internally convert the directory name you enter into a File Language expression. So when we enter move=.done Camel will convert this into: USD{ file:parent }/.done/USD{``file:onlyname }. This is only done if Camel detects that you have not provided a USD\{ } in the option value yourself. So when you enter a USD\{ } Camel will not convert it and thus you have the full power. So if we want to move the file into a backup folder with today's date as the pattern, we can do: 104.5. About moveFailed The moveFailed option allows you to move files that could not be processed succesfully to another location such as a error folder of your choice. For example to move the files in an error folder with a timestamp you can use moveFailed=/error/USD{ file:name.noext }-USD{date:now:yyyyMMddHHmmssSSS}.USD{``file:ext }. See more examples at File Language 104.6. Message Headers The following headers are supported by this component: 104.6.1. File producer only Header Description CamelFileName Specifies the name of the file to write (relative to the endpoint directory). This name can be a String ; a String with a File Language or Simple expression; or an Expression object. 
If it's null then Camel will auto-generate a filename based on the message unique ID. CamelFileNameProduced The actual absolute filepath (path + name) for the output file that was written. This header is set by Camel and its purpose is providing end-users with the name of the file that was written. CamelOverruleFileName Camel 2.11: Is used for overruling CamelFileName header and use the value instead (but only once, as the producer will remove this header after writing the file). The value can be only be a String. Notice that if the option fileName has been configured, then this is still being evaluated. 104.6.2. File consumer only Header Description CamelFileName Name of the consumed file as a relative file path with offset from the starting directory configured on the endpoint. CamelFileNameOnly Only the file name (the name with no leading paths). CamelFileAbsolute A boolean option specifying whether the consumed file denotes an absolute path or not. Should normally be false for relative paths. Absolute paths should normally not be used but we added to the move option to allow moving files to absolute paths. But can be used elsewhere as well. CamelFileAbsolutePath The absolute path to the file. For relative files this path holds the relative path instead. CamelFilePath The file path. For relative files this is the starting directory + the relative filename. For absolute files this is the absolute path. CamelFileRelativePath The relative path. CamelFileParent The parent path. CamelFileLength A long value containing the file size. CamelFileLastModified A Long value containing the last modified timestamp of the file. In Camel 2.10.3 and older the type is Date . 104.7. Batch Consumer This component implements the Batch Consumer. 104.8. Exchange Properties, file consumer only As the file consumer implements the BatchConsumer it supports batching the files it polls. By batching we mean that Camel will add the following additional properties to the Exchange, so you know the number of files polled, the current index, and whether the batch is already completed. Property Description CamelBatchSize The total number of files that was polled in this batch. CamelBatchIndex The current index of the batch. Starts from 0. CamelBatchComplete A boolean value indicating the last Exchange in the batch. Is only true for the last entry. This allows you for instance to know how many files exist in this batch and for instance let the Aggregator2 aggregate this number of files. 104.9. Using charset Available as of Camel 2.9.3 The charset option allows for configuring an encoding of the files on both the consumer and producer endpoints. For example if you read utf-8 files, and want to convert the files to iso-8859-1, you can do: from("file:inbox?charset=utf-8") .to("file:outbox?charset=iso-8859-1") You can also use the convertBodyTo in the route. In the example below we have still input files in utf-8 format, but we want to convert the file content to a byte array in iso-8859-1 format. And then let a bean process the data. Before writing the content to the outbox folder using the current charset. from("file:inbox?charset=utf-8") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .to("file:outbox"); If you omit the charset on the consumer endpoint, then Camel does not know the charset of the file, and would by default use "UTF-8". However you can configure a JVM system property to override and use a different default encoding with the key org.apache.camel.default.charset . 
In the example below this could be a problem if the files is not in UTF-8 encoding, which would be the default encoding for read the files. In this example when writing the files, the content has already been converted to a byte array, and thus would write the content directly as is (without any further encodings). from("file:inbox") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .to("file:outbox"); You can also override and control the encoding dynamic when writing files, by setting a property on the exchange with the key Exchange.CHARSET_NAME . For example in the route below we set the property with a value from a message header. from("file:inbox") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .setProperty(Exchange.CHARSET_NAME, header("someCharsetHeader")) .to("file:outbox"); We suggest to keep things simpler, so if you pickup files with the same encoding, and want to write the files in a specific encoding, then favor to use the charset option on the endpoints. Notice that if you have explicit configured a charset option on the endpoint, then that configuration is used, regardless of the Exchange.CHARSET_NAME property. If you have some issues then you can enable DEBUG logging on org.apache.camel.component.file , and Camel logs when it reads/write a file using a specific charset. For example the route below will log the following: from("file:inbox?charset=utf-8") .to("file:outbox?charset=iso-8859-1") And the logs: 104.10. Common gotchas with folder and filenames When Camel is producing files (writing files) there are a few gotchas affecting how to set a filename of your choice. By default, Camel will use the message ID as the filename, and since the message ID is normally a unique generated ID, you will end up with filenames such as: ID-MACHINENAME-2443-1211718892437-1-0 . If such a filename is not desired, then you must provide a filename in the CamelFileName message header. The constant, Exchange.FILE_NAME , can also be used. The sample code below produces files using the message ID as the filename: from("direct:report").to("file:target/reports"); To use report.txt as the filename you have to do: from("direct:report").setHeader(Exchange.FILE_NAME, constant("report.txt")).to( "file:target/reports"); the same as above, but with CamelFileName : from("direct:report").setHeader("CamelFileName", constant("report.txt")).to( "file:target/reports"); And a syntax where we set the filename on the endpoint with the fileName URI option. from("direct:report").to("file:target/reports/?fileName=report.txt"); 104.11. Filename Expression Filename can be set either using the expression option or as a string-based File Language expression in the CamelFileName header. See the File Language for syntax and samples. 104.12. Consuming files from folders where others drop files directly Beware if you consume files from a folder where other applications write files to directly. Take a look at the different readLock options to see what suits your use cases. The best approach is however to write to another folder and after the write move the file in the drop folder. However if you write files directly to the drop folder then the option changed could better detect whether a file is currently being written/copied as it uses a file changed algorithm to see whether the file size / modification changes over a period of time. The other readLock options rely on Java File API that sadly is not always very good at detecting this. 
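As a minimal sketch of how these options can be combined for such a drop folder (the directory name and all interval values here are illustrative assumptions, not recommendations), a consumer using the changed read lock might look like the following standalone camel-core example:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class DropFolderRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Only pick a file up once its length/modification timestamp has stopped
        // changing. readLockMinAge skips files younger than 10 seconds, and
        // readLockCheckInterval controls how often the change check is repeated.
        // readLockTimeout is set to several times the check interval, as the
        // documentation above recommends.
        from("file://dropfolder?readLock=changed"
                + "&readLockCheckInterval=3000"
                + "&readLockMinAge=10000"
                + "&readLockTimeout=20000"
                + "&move=.done")
            .to("log:consumedFile");
    }

    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.addRouteBuilder(new DropFolderRoute());
        main.run(args);
    }
}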
You may also want to look at the doneFileName option, which uses a marker file (done file) to signal when a file is done and ready to be consumed. 104.13. Using done files Available as of Camel 2.6 See also section writing done files below. If you want only to consume files when a done file exists, then you can use the doneFileName option on the endpoint. from("file:bar?doneFileName=done"); Will only consume files from the bar folder, if a done file exists in the same directory as the target files. Camel will automatically delete the done file when it's done consuming the files. From Camel 2.9.3 onwards Camel will not automatically delete the done file if noop=true is configured. However it is more common to have one done file per target file. This means there is a 1:1 correlation. To do this you must use dynamic placeholders in the doneFileName option. Currently Camel supports the following two dynamic tokens: file:name and file:name.noext which must be enclosed in USD\{ }. The consumer only supports the static part of the done file name as either prefix or suffix (not both). from("file:bar?doneFileName=USD{file:name}.done"); In this example only files will be polled if there exists a done file with the name file name .done. For example hello.txt - is the file to be consumed hello.txt.done - is the associated done file You can also use a prefix for the done file, such as: from("file:bar?doneFileName=ready-USD{file:name}"); hello.txt - is the file to be consumed ready-hello.txt - is the associated done file 104.14. Writing done files Available as of Camel 2.6 After you have written a file you may want to write an additional done file as a kind of marker, to indicate to others that the file is finished and has been written. To do that you can use the doneFileName option on the file producer endpoint. .to("file:bar?doneFileName=done"); Will simply create a file named done in the same directory as the target file. However it is more common to have one done file per target file. This means there is a 1:1 correlation. To do this you must use dynamic placeholders in the doneFileName option. Currently Camel supports the following two dynamic tokens: file:name and file:name.noext which must be enclosed in USD\{ }. .to("file:bar?doneFileName=done-USD{file:name}"); Will for example create a file named done-foo.txt if the target file was foo.txt in the same directory as the target file. .to("file:bar?doneFileName=USD{file:name}.done"); Will for example create a file named foo.txt.done if the target file was foo.txt in the same directory as the target file. .to("file:bar?doneFileName=USD{file:name.noext}.done"); Will for example create a file named foo.done if the target file was foo.txt in the same directory as the target file. 104.15. Samples #=== Read from a directory and write to another directory from("file://inputdir/?delete=true").to("file://outputdir") 104.15.1. Read from a directory and write to another directory using a overrule dynamic name from("file://inputdir/?delete=true").to("file://outputdir?overruleFile=copy-of-USD{file:name}") Listen on a directory and create a message for each file dropped there. Copy the contents to the outputdir and delete the file in the inputdir . 104.15.2. Reading recursively from a directory and writing to another from("file://inputdir/?recursive=true&delete=true").to("file://outputdir") Listen on a directory and create a message for each file dropped there. Copy the contents to the outputdir and delete the file in the inputdir . 
Will scan recursively into sub-directories. Will lay out the files in the same directory structure in the outputdir as the inputdir , including any sub-directories. Will result in the following output layout: 104.16. Using flatten If you want to store the files in the outputdir directory in the same directory, disregarding the source directory layout (e.g. to flatten out the path), you just add the flatten=true option on the file producer side: from("file://inputdir/?recursive=true&delete=true").to("file://outputdir?flatten=true") Will result in the following output layout: 104.17. Reading from a directory and the default move operation Camel will by default move any processed file into a .camel subdirectory in the directory the file was consumed from. from("file://inputdir/?recursive=true&delete=true").to("file://outputdir") Affects the layout as follows: before after 104.18. Read from a directory and process the message in java from("file://inputdir/").process(new Processor() { public void process(Exchange exchange) throws Exception { Object body = exchange.getIn().getBody(); // do some business logic with the input body } }); The body will be a File object that points to the file that was just dropped into the inputdir directory. 104.19. Writing to files Camel is of course also able to write files, i.e. produce files. In the sample below we receive some reports on the SEDA queue that we process before they are being written to a directory. 104.19.1. Write to subdirectory using Exchange.FILE_NAME Using a single route, it is possible to write a file to any number of subdirectories. If you have a route setup as such: <route> <from uri="bean:myBean"/> <to uri="file:/rootDirectory"/> </route> You can have myBean set the header Exchange.FILE_NAME to values such as: This allows you to have a single route to write files to multiple destinations. 104.19.2. Writing file through the temporary directory relative to the final destination Sometime you need to temporarily write the files to some directory relative to the destination directory. Such situation usually happens when some external process with limited filtering capabilities is reading from the directory you are writing to. In the example below files will be written to the /var/myapp/filesInProgress directory and after data transfer is done, they will be atomically moved to the` /var/myapp/finalDirectory `directory. from("direct:start"). to("file:///var/myapp/finalDirectory?tempPrefix=/../filesInProgress/"); 104.20. Using expression for filenames In this sample we want to move consumed files to a backup folder using today's date as a sub-folder name: from("file://inbox?move=backup/USD{date:now:yyyyMMdd}/USD{file:name}").to("..."); See File Language for more samples. 104.21. Avoiding reading the same file more than once (idempotent consumer) Camel supports Idempotent Consumer directly within the component so it will skip already processed files. This feature can be enabled by setting the idempotent=true option. from("file://inbox?idempotent=true").to("..."); Camel uses the absolute file name as the idempotent key, to detect duplicate files. From Camel 2.11 onwards you can customize this key by using an expression in the idempotentKey option. 
104.19.2. Writing file through the temporary directory relative to the final destination Sometimes you need to temporarily write the files to some directory relative to the destination directory. Such a situation usually happens when some external process with limited filtering capabilities is reading from the directory you are writing to. In the example below files will be written to the /var/myapp/filesInProgress directory and after the data transfer is done, they will be atomically moved to the /var/myapp/finalDirectory directory. from("direct:start"). to("file:///var/myapp/finalDirectory?tempPrefix=/../filesInProgress/"); 104.20. Using expression for filenames In this sample we want to move consumed files to a backup folder using today's date as a sub-folder name: from("file://inbox?move=backup/USD{date:now:yyyyMMdd}/USD{file:name}").to("..."); See File Language for more samples. 104.21. Avoiding reading the same file more than once (idempotent consumer) Camel supports the Idempotent Consumer directly within the component so it will skip already processed files. This feature can be enabled by setting the idempotent=true option. from("file://inbox?idempotent=true").to("..."); Camel uses the absolute file name as the idempotent key, to detect duplicate files. From Camel 2.11 onwards you can customize this key by using an expression in the idempotentKey option. For example, to use both the name and the file size as the key: <route> <from uri="file://inbox?idempotent=true&amp;idempotentKey=USD{file:name}-USD{file:size}"/> <to uri="bean:processInbox"/> </route> By default Camel uses an in-memory based store for keeping track of consumed files; it uses a least recently used cache holding up to 1000 entries. You can plug in your own implementation of this store by using the idempotentRepository option, using the # sign in the value to indicate that it is referring to a bean in the Registry with the specified id . <!-- define our store as a plain spring bean --> <bean id="myStore" class="com.mycompany.MyIdempotentStore"/> <route> <from uri="file://inbox?idempotent=true&amp;idempotentRepository=#myStore"/> <to uri="bean:processInbox"/> </route> Camel will log at DEBUG level if it skips a file because it has been consumed before: 104.22. Using a file based idempotent repository In this section we will use the file based idempotent repository org.apache.camel.processor.idempotent.FileIdempotentRepository instead of the in-memory based one that is used as default. This repository uses a 1st level cache to avoid reading the file repository. It will only use the file repository to store the content of the 1st level cache. Thereby the repository can survive server restarts. It will load the content of the file into the 1st level cache upon startup. The file structure is very simple as it stores each key on a separate line in the file. By default, the file store has a size limit of 1 MB. When the file grows larger Camel will truncate the file store, rebuilding the content by flushing the 1st level cache into a fresh empty file. We configure our repository using Spring XML, creating our file idempotent repository and defining our file consumer to use our repository with the idempotentRepository option, using the # sign to indicate a Registry lookup; a Java DSL sketch of the same setup is shown below.
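Since the Spring XML for this setup is not reproduced in this guide, the following is a minimal, hypothetical Java DSL sketch of the same idea (the repository id myRepo, the file store path, and the cache and file-store sizes are assumptions): the FileIdempotentRepository is bound in the registry and the consumer references it with #myRepo.

import java.io.File;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;
import org.apache.camel.processor.idempotent.FileIdempotentRepository;

// Minimal sketch: bind a FileIdempotentRepository under the id "myRepo"
// and let the file consumer reference it with idempotentRepository=#myRepo.
public class FileIdempotentExample {
    public static void main(String[] args) throws Exception {
        SimpleRegistry registry = new SimpleRegistry();
        // up to 250 keys in the 1st level cache, 512000 bytes max file store size
        registry.put("myRepo",
                FileIdempotentRepository.fileIdempotentRepository(
                        new File("target/fileidempotent/.filestore.dat"), 250, 512000));

        DefaultCamelContext context = new DefaultCamelContext(registry);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // skip files whose keys are already recorded in the file store,
                // even across application restarts
                from("file://inbox?idempotent=true&idempotentRepository=#myRepo")
                    .to("log:processInbox");
            }
        });
        context.start();
        Thread.sleep(10000); // let the consumer poll for a while
        context.stop();
    }
}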
104.23. Using a JPA based idempotent repository In this section we will use the JPA based idempotent repository instead of the in-memory based one that is used as default. First we need a persistence-unit in META-INF/persistence.xml where we need to use the class org.apache.camel.processor.idempotent.jpa.MessageProcessed as the model. <persistence-unit name="idempotentDb" transaction-type="RESOURCE_LOCAL"> <class>org.apache.camel.processor.idempotent.jpa.MessageProcessed</class> <properties> <property name="openjpa.ConnectionURL" value="jdbc:derby:target/idempotentTest;create=true"/> <property name="openjpa.ConnectionDriverName" value="org.apache.derby.jdbc.EmbeddedDriver"/> <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema"/> <property name="openjpa.Log" value="DefaultLevel=WARN, Tool=INFO"/> <property name="openjpa.Multithreaded" value="true"/> </properties> </persistence-unit> Next, we can create our JPA idempotent repository in the Spring XML file as well: <!-- we define our jpa based idempotent repository we want to use in the file consumer --> <bean id="jpaStore" class="org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository"> <!-- Here we refer to the entityManagerFactory --> <constructor-arg index="0" ref="entityManagerFactory"/> <!-- This 2nd parameter is the name (= a category name). You can have different repositories with different names --> <constructor-arg index="1" value="FileConsumer"/> </bean> Then we just need to refer to the jpaStore bean in the file consumer endpoint using the idempotentRepository option with the # syntax: <route> <from uri="file://inbox?idempotent=true&amp;idempotentRepository=#jpaStore"/> <to uri="bean:processInbox"/> </route> 104.24. Filter using org.apache.camel.component.file.GenericFileFilter Camel supports pluggable filtering strategies. You can then configure the endpoint with such a filter to skip certain files being processed. In the sample we have built our own filter that skips files starting with skip in the filename (an example implementation of the GenericFileFilter interface is sketched after Section 104.27 below). We can then configure our route using the filter attribute to reference our filter (using # notation) that we have defined in the Spring XML file: <!-- define our filter as a plain spring bean --> <bean id="myFilter" class="com.mycompany.MyFileFilter"/> <route> <from uri="file://inbox?filter=#myFilter"/> <to uri="bean:processInbox"/> </route> 104.25. Filtering using ANT path matcher The ANT path matcher is shipped out-of-the-box in the camel-spring jar, so you need to depend on camel-spring if you are using Maven. The reason is that we leverage Spring's AntPathMatcher to do the actual matching. The file paths are matched with the following rules: ? matches one character * matches zero or more characters ** matches zero or more directories in a path Tip New options from Camel 2.10 onwards There are now antInclude and antExclude options to make it easy to specify ANT style include/exclude without having to define the filter. See the URI options above for more information. For example, setting antInclude=**/*.txt on the endpoint URI includes only text files, at any depth. 104.25.1. Sorting using Comparator Camel supports pluggable sorting strategies. This strategy is to use the built-in java.util.Comparator in Java. You can then configure the endpoint with such a comparator and have Camel sort the files before they are processed. In the sample we have built our own comparator that just sorts by file name (a sketch of such a comparator is shown below). We can then configure our route using the sorter option to reference our sorter ( mySorter ) that we have defined in the Spring XML file: <!-- define our sorter as a plain spring bean --> <bean id="mySorter" class="com.mycompany.MyFileSorter"/> <route> <from uri="file://inbox?sorter=#mySorter"/> <to uri="bean:processInbox"/> </route> Tip URI options can reference beans using the # syntax In the Spring DSL route above notice that we can refer to beans in the Registry by prefixing the id with # . So writing sorter=#mySorter will instruct Camel to look in the Registry for a bean with the ID mySorter .
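The com.mycompany.MyFileSorter bean referenced in the Spring XML above is not reproduced in this guide, so here is a minimal, hypothetical sketch of what such a comparator could look like (sorting by file name as described; the case-insensitive comparison is an added assumption).

import java.util.Comparator;

import org.apache.camel.component.file.GenericFile;

// Hypothetical comparator backing the mySorter bean above:
// it simply sorts the polled files by file name.
public class MyFileSorter implements Comparator<GenericFile<?>> {

    @Override
    public int compare(GenericFile<?> o1, GenericFile<?> o2) {
        // compare on the file name only, ignoring the path and the case
        return o1.getFileNameOnly().compareToIgnoreCase(o2.getFileNameOnly());
    }
}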
104.25.2. Sorting using sortBy Camel supports pluggable sorting strategies. This strategy is to use the File Language to configure the sorting. The sortBy option is configured as follows, where each group is separated with a semicolon. In simple situations you just use one group, so a simple example could be: This will sort by file name. You can reverse the order by prefixing reverse: to the group, so the sorting is now Z..A: As we have the full power of File Language we can use some of the other parameters, so if we want to sort by file size we do: You can configure it to ignore the case, using ignoreCase: for string comparison, so if you want to use file name sorting but ignore the case then we do: You can combine ignore case and reverse, however reverse must be specified first: In the sample below we want to sort by last modified file, so we do: And then we want to group by name as a 2nd option so files with the same modification time are sorted by name: Now there is an issue here, can you spot it? Well, the modified timestamp of the file is too fine-grained as it will be in milliseconds, but what if we want to sort by date only and then subgroup by name? Well, as we have the true power of File Language we can use its date command that supports patterns. So this can be solved as: That is pretty powerful. By the way, you can also use reverse per group, so we could reverse the file names: 104.26. Using GenericFileProcessStrategy The processStrategy option can be used to plug in a custom GenericFileProcessStrategy that allows you to implement your own begin , commit and rollback logic. For instance, let's assume a system writes a file in a folder you should consume. But you should not start consuming the file before another ready file has been written as well. By implementing our own GenericFileProcessStrategy we can implement this behavior (a minimal sketch appears at the end of this chapter): In the begin() method we can test whether the special ready file exists. The begin method returns a boolean to indicate if we can consume the file or not. In the abort() method (Camel 2.10) special logic can be executed in case the begin operation returned false , for example to clean up resources etc. In the commit() method we can move the actual file and also delete the ready file. 104.27. Using filter The filter option allows you to implement a custom filter in Java code by implementing the org.apache.camel.component.file.GenericFileFilter interface. This interface has an accept method that returns a boolean. Return true to include the file, and false to skip the file. From Camel 2.10 onwards, there is an isDirectory method on GenericFile that tells whether the file is a directory. This allows you to filter unwanted directories, to avoid traversing down into them. For example, skipping any directory whose name starts with "skip" can be implemented as follows:
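The following is a minimal, hypothetical sketch of such a filter (the class name and generic parameter are assumptions); the same kind of class could also back the com.mycompany.MyFileFilter bean from Section 104.24.

import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileFilter;

// Hypothetical filter: skips any file or directory whose name starts with "skip".
// Rejecting a directory also prevents Camel from traversing into it (Camel 2.10+).
public class MyFileFilter<T> implements GenericFileFilter<T> {

    @Override
    public boolean accept(GenericFile<T> file) {
        if (file.isDirectory()) {
            // returning false here means the directory is never traversed
            return !file.getFileNameOnly().startsWith("skip");
        }
        // regular files: include everything except files starting with "skip"
        return !file.getFileNameOnly().startsWith("skip");
    }
}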
104.28. Using consumer.bridgeErrorHandler Available as of Camel 2.10 If you want to use the Camel Error Handler to deal with any exception occurring in the file consumer, then you can enable the consumer.bridgeErrorHandler option as shown below: // to handle any IOException being thrown onException(IOException.class) .handled(true) .log("IOException occurred due: USD{exception.message}") .transform().simple("Error USD{exception.message}") .to("mock:error"); // this is the file route that pickup files, notice how we bridge the consumer to use the Camel routing error handler // the exclusiveReadLockStrategy is only configured because this is from an unit test, so we use that to simulate exceptions from("file:target/nospace?consumer.bridgeErrorHandler=true") .convertBodyTo(String.class) .to("mock:result"); So all you have to do is enable this option, and the error handler in the route will take it from there. Important Important when using consumer.bridgeErrorHandler When using consumer.bridgeErrorHandler, interceptors and onCompletion do not apply. The Exchange is processed directly by the Camel Error Handler, which does not allow prior actions such as interceptors or onCompletion to take effect. 104.29. Debug logging This component has TRACE level logging that can be helpful if you have problems. 104.30. See Also File Language FTP Polling Consumer
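As a closing illustration of the processStrategy option described in Section 104.26, here is a minimal, hypothetical sketch of a strategy that waits for a ready file (the class name, the <file name>.ready naming convention, and the cleanup choices are assumptions, not part of the original guide).

import java.io.File;

import org.apache.camel.Exchange;
import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileEndpoint;
import org.apache.camel.component.file.GenericFileOperations;
import org.apache.camel.component.file.GenericFileProcessStrategy;

// Hypothetical strategy for the "ready file" scenario: begin() only lets Camel
// consume a file when its <file name>.ready marker exists, and commit()
// removes the marker once the exchange has been processed.
public class ReadyFileProcessStrategy implements GenericFileProcessStrategy<File> {

    private File readyFile(GenericFile<File> file) {
        // assumption: the ready file sits next to the data file, named <name>.ready
        return new File(file.getParent(), file.getFileNameOnly() + ".ready");
    }

    @Override
    public void prepareOnStartup(GenericFileOperations<File> operations,
                                 GenericFileEndpoint<File> endpoint) {
        // nothing to prepare in this sketch
    }

    @Override
    public boolean begin(GenericFileOperations<File> operations, GenericFileEndpoint<File> endpoint,
                         Exchange exchange, GenericFile<File> file) {
        // only consume the file if its ready marker has been written
        return readyFile(file).exists();
    }

    @Override
    public void abort(GenericFileOperations<File> operations, GenericFileEndpoint<File> endpoint,
                      Exchange exchange, GenericFile<File> file) {
        // begin() returned false: nothing was acquired, so nothing to clean up
    }

    @Override
    public void commit(GenericFileOperations<File> operations, GenericFileEndpoint<File> endpoint,
                       Exchange exchange, GenericFile<File> file) {
        // processing succeeded: remove the ready marker; the data file itself is
        // handled by the endpoint's own move/delete configuration
        readyFile(file).delete();
    }

    @Override
    public void rollback(GenericFileOperations<File> operations, GenericFileEndpoint<File> endpoint,
                         Exchange exchange, GenericFile<File> file) {
        // leave both files in place so the poll can be retried later
    }
}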
[ "file:directoryName[?options]", "file://directoryName[?options]", "file:directoryName", "from(\"file://inbox?move=.done\").to(\"bean:handleOrder\");", "move=../backup/copy-of-USD{file:name}", "from(\"file://inobox?delete=true\").to(\"bean:handleOrder\");", "from(\"file://inbox?preMove=inprogress\").to(\"bean:handleOrder\");", "from(\"file://inbox?preMove=inprogress&move=.done\").to(\"bean:handleOrder\");", "move=backup/USD{date:now:yyyyMMdd}/USD{file:name}", "from(\"file:inbox?charset=utf-8\") .to(\"file:outbox?charset=iso-8859-1\")", "from(\"file:inbox?charset=utf-8\") .convertBodyTo(byte[].class, \"iso-8859-1\") .to(\"bean:myBean\") .to(\"file:outbox\");", "from(\"file:inbox\") .convertBodyTo(byte[].class, \"iso-8859-1\") .to(\"bean:myBean\") .to(\"file:outbox\");", "from(\"file:inbox\") .convertBodyTo(byte[].class, \"iso-8859-1\") .to(\"bean:myBean\") .setProperty(Exchange.CHARSET_NAME, header(\"someCharsetHeader\")) .to(\"file:outbox\");", "from(\"file:inbox?charset=utf-8\") .to(\"file:outbox?charset=iso-8859-1\")", "DEBUG GenericFileConverter - Read file /Users/davsclaus/workspace/camel/camel-core/target/charset/input/input.txt with charset utf-8 DEBUG FileOperations - Using Reader to write file: target/charset/output.txt with charset: iso-8859-1", "from(\"direct:report\").to(\"file:target/reports\");", "from(\"direct:report\").setHeader(Exchange.FILE_NAME, constant(\"report.txt\")).to( \"file:target/reports\");", "from(\"direct:report\").setHeader(\"CamelFileName\", constant(\"report.txt\")).to( \"file:target/reports\");", "from(\"direct:report\").to(\"file:target/reports/?fileName=report.txt\");", "from(\"file:bar?doneFileName=done\");", "from(\"file:bar?doneFileName=USD{file:name}.done\");", "from(\"file:bar?doneFileName=ready-USD{file:name}\");", ".to(\"file:bar?doneFileName=done\");", ".to(\"file:bar?doneFileName=done-USD{file:name}\");", ".to(\"file:bar?doneFileName=USD{file:name}.done\");", ".to(\"file:bar?doneFileName=USD{file:name.noext}.done\");", "from(\"file://inputdir/?delete=true\").to(\"file://outputdir\")", "from(\"file://inputdir/?delete=true\").to(\"file://outputdir?overruleFile=copy-of-USD{file:name}\")", "from(\"file://inputdir/?recursive=true&delete=true\").to(\"file://outputdir\")", "inputdir/foo.txt inputdir/sub/bar.txt", "outputdir/foo.txt outputdir/sub/bar.txt", "from(\"file://inputdir/?recursive=true&delete=true\").to(\"file://outputdir?flatten=true\")", "outputdir/foo.txt outputdir/bar.txt", "from(\"file://inputdir/?recursive=true&delete=true\").to(\"file://outputdir\")", "inputdir/foo.txt inputdir/sub/bar.txt", "inputdir/.camel/foo.txt inputdir/sub/.camel/bar.txt outputdir/foo.txt outputdir/sub/bar.txt", "from(\"file://inputdir/\").process(new Processor() { public void process(Exchange exchange) throws Exception { Object body = exchange.getIn().getBody(); // do some business logic with the input body } });", "<route> <from uri=\"bean:myBean\"/> <to uri=\"file:/rootDirectory\"/> </route>", "Exchange.FILE_NAME = hello.txt => /rootDirectory/hello.txt Exchange.FILE_NAME = foo/bye.txt => /rootDirectory/foo/bye.txt", "from(\"direct:start\"). 
to(\"file:///var/myapp/finalDirectory?tempPrefix=/../filesInProgress/\");", "from(\"file://inbox?move=backup/USD{date:now:yyyyMMdd}/USD{file:name}\").to(\"...\");", "from(\"file://inbox?idempotent=true\").to(\"...\");", "<route> <from uri=\"file://inbox?idempotent=true&amp;idempotentKey=USD{file:name}-USD{file:size}\"/> <to uri=\"bean:processInbox\"/> </route>", "<!-- define our store as a plain spring bean --> <bean id=\"myStore\" class=\"com.mycompany.MyIdempotentStore\"/> <route> <from uri=\"file://inbox?idempotent=true&amp;idempotentRepository=#myStore\"/> <to uri=\"bean:processInbox\"/> </route>", "DEBUG FileConsumer is idempotent and the file has been consumed before. Will skip this file: target\\idempotent\\report.txt", "<persistence-unit name=\"idempotentDb\" transaction-type=\"RESOURCE_LOCAL\"> <class>org.apache.camel.processor.idempotent.jpa.MessageProcessed</class> <properties> <property name=\"openjpa.ConnectionURL\" value=\"jdbc:derby:target/idempotentTest;create=true\"/> <property name=\"openjpa.ConnectionDriverName\" value=\"org.apache.derby.jdbc.EmbeddedDriver\"/> <property name=\"openjpa.jdbc.SynchronizeMappings\" value=\"buildSchema\"/> <property name=\"openjpa.Log\" value=\"DefaultLevel=WARN, Tool=INFO\"/> <property name=\"openjpa.Multithreaded\" value=\"true\"/> </properties> </persistence-unit>", "<!-- we define our jpa based idempotent repository we want to use in the file consumer --> <bean id=\"jpaStore\" class=\"org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository\"> <!-- Here we refer to the entityManagerFactory --> <constructor-arg index=\"0\" ref=\"entityManagerFactory\"/> <!-- This 2nd parameter is the name (= a category name). You can have different repositories with different names --> <constructor-arg index=\"1\" value=\"FileConsumer\"/> </bean>", "<route> <from uri=\"file://inbox?idempotent=true&amp;idempotentRepository=#jpaStore\"/> <to uri=\"bean:processInbox\"/> </route>", "<!-- define our filter as a plain spring bean --> <bean id=\"myFilter\" class=\"com.mycompany.MyFileFilter\"/> <route> <from uri=\"file://inbox?filter=#myFilter\"/> <to uri=\"bean:processInbox\"/> </route>", "<!-- define our sorter as a plain spring bean --> <bean id=\"mySorter\" class=\"com.mycompany.MyFileSorter\"/> <route> <from uri=\"file://inbox?sorter=#mySorter\"/> <to uri=\"bean:processInbox\"/> </route>", "sortBy=group 1;group 2;group 3;", "sortBy=file:name", "sortBy=reverse:file:name", "sortBy=file:length", "sortBy=ignoreCase:file:name", "sortBy=reverse:ignoreCase:file:name", "sortBy=file:modified", "sortBy=file:modified;file:name", "sortBy=date:file:yyyyMMdd;file:name", "sortBy=date:file:yyyyMMdd;reverse:file:name", "// to handle any IOException being thrown onException(IOException.class) .handled(true) .log(\"IOException occurred due: USD{exception.message}\") .transform().simple(\"Error USD{exception.message}\") .to(\"mock:error\"); // this is the file route that pickup files, notice how we bridge the consumer to use the Camel routing error handler // the exclusiveReadLockStrategy is only configured because this is from an unit test, so we use that to simulate exceptions from(\"file:target/nospace?consumer.bridgeErrorHandler=true\") .convertBodyTo(String.class) .to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/file-component
8.3.3. Reverting and Repeating Transactions
8.3.3. Reverting and Repeating Transactions Apart from reviewing the transaction history, the yum history command provides the means to revert or repeat a selected transaction. To revert a transaction, type the following at a shell prompt as root : yum history undo id To repeat a particular transaction, as root , run the following command: yum history redo id Both commands also accept the last keyword to undo or repeat the latest transaction. Note that both the yum history undo and yum history redo commands only revert or repeat the steps that were performed during a transaction. If the transaction installed a new package, the yum history undo command will uninstall it, and if the transaction uninstalled a package, the command will install it again. The command also attempts to downgrade all updated packages to their previous version, if these older packages are still available. When managing several identical systems, Yum also allows you to perform a transaction on one of them, store the transaction details in a file, and after a period of testing, repeat the same transaction on the remaining systems as well. To store the transaction details to a file, type the following at a shell prompt as root : yum -q history addon-info id saved_tx > file_name Once you copy this file to the target system, you can repeat the transaction by using the following command as root : yum load-transaction file_name Note, however, that the rpmdb version stored in the file must be identical to the version on the target system. You can verify the rpmdb version by using the yum version nogroups command.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec2-yum-transaction_history-reverting
Chapter 13. Authentication and Interoperability
Chapter 13. Authentication and Interoperability Manual Backup and Restore Functionality This update introduces the ipa-backup and ipa-restore commands to Identity Management (IdM), which allow users to manually back up their IdM data and restore them in case of a hardware failure. For further information, see the ipa-backup (1) and ipa-restore (1) manual pages or the documentation in the Linux Domain Identity, Authentication, and Policy Guide . Support for Migration from WinSync to Trust This update implements the new ID Views mechanism of user configuration. It enables the migration of Identity Management users from a WinSync synchronization-based architecture used by Active Directory to an infrastructure based on Cross-Realm Trusts. For the details of ID Views and the migration procedure, see the documentation in the Windows Integration Guide . One-Time Password Authentication One of the best ways to increase authentication security is to require two factor authentication (2FA). A very popular option is to use one-time passwords (OTP). This technique began in the proprietary space, but over time some open standards emerged (HOTP: RFC 4226, TOTP: RFC 6238). Identity Management in Red Hat Enterprise Linux 7.1 contains the first implementation of the standard OTP mechanism. For further details, see the documentation in the System-Level Authentication Guide . SSSD Integration for the Common Internet File System A plug-in interface provided by SSSD has been added to configure the way in which the cifs-utils utility conducts the ID-mapping process. As a result, an SSSD client can now access a CIFS share with the same functionality as a client running the Winbind service. For further information, see the documentation in the Windows Integration Guide . Certificate Authority Management Tool The ipa-cacert-manage renew command has been added to the Identity management (IdM) client, which makes it possible to renew the IdM Certification Authority (CA) file. This enables users to smoothly install and set up IdM using a certificate signed by an external CA. For details on this feature, see the ipa-cacert-manage (1) manual page. Increased Access Control Granularity It is now possible to regulate read permissions of specific sections in the Identity Management (IdM) server UI. This allows IdM server administrators to limit the accessibility of privileged content only to chosen users. In addition, authenticated users of the IdM server no longer have read permissions to all of its contents by default. These changes improve the overall security of the IdM server data. Limited Domain Access for Unprivileged Users The domains= option has been added to the pam_sss module, which overrides the domains= option in the /etc/sssd/sssd.conf file. In addition, this update adds the pam_trusted_users option, which allows the user to add a list of numerical UIDs or user names that are trusted by the SSSD daemon, and the pam_public_domains option and a list of domains accessible even for untrusted users. The mentioned additions allow the configuration of systems, where regular users are allowed to access the specified applications, but do not have login rights on the system itself. For additional information on this feature, see the documentation in the Linux Domain Identity, Authentication, and Policy Guide . Automatic data provider configuration The ipa-client-install command now by default configures SSSD as the data provider for the sudo service. This behavior can be disabled by using the --no-sudo option. 
In addition, the --nisdomain option has been added to specify the NIS domain name for the Identity Management client installation, and the --no_nisdomain option has been added to avoid setting the NIS domain name. If neither of these options are used, the IPA domain is used instead. Use of AD and LDAP sudo Providers The AD provider is a back end used to connect to an Active Directory server. In Red Hat Enterprise Linux 7.1, using the AD sudo provider together with the LDAP provider is supported as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the domain section of the sssd.conf file. 32-bit Version of krb5-server and krb5-server-ldap Deprecated The 32-bit version of Kerberos 5 Server is no longer distributed, and the following packages are deprecated since Red Hat Enterprise Linux 7.1: krb5-server.i686 , krb5-server.s390 , krb5-server.ppc , krb5-server-ldap.i686 , krb5-server-ldap.s390 , and krb5-server-ldap.ppc . There is no need to distribute the 32-bit version of krb5-server on Red Hat Enterprise Linux 7, which is supported only on the following architectures: AMD64 and Intel 64 systems ( x86_64 ), 64-bit IBM Power Systems servers ( ppc64 ), and IBM System z ( s390x ). SSSD Leverages GPO Policies to Define HBAC SSSD is now able to use GPO objects stored on an AD server for access control. This enhancement mimics the functionality of Windows clients, allowing to use a single set of access control rules to handle both Windows and Unix machines. In effect, Windows administrators can now use GPOs to control access to Linux clients. Apache Modules for IPA A set of Apache modules has been added to Red Hat Enterprise Linux 7.1 as a Technology Preview. The Apache modules can be used by external applications to achieve tighter interaction with Identity Management beyond simple authentication.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-authentication_and_interoperability
Chapter 1. Debezium Overview
Chapter 1. Debezium Overview Red Hat build of Debezium is a distributed platform that captures database operations, creates data change event records for row-level operations, and streams change event records to Apache Kafka topics. Debezium is built on Apache Kafka and is deployed and integrated with Streams for Apache Kafka. Debezium captures row-level changes to a database table and passes corresponding change events to Streams for Apache Kafka. Applications can read these change event streams and access the change events in the order in which they occurred. Debezium is the upstream community project for Red Hat build of Debezium. Debezium has multiple uses, including: Data replication Updating caches and search indexes Simplifying monolithic applications Data integration Enabling streaming queries Debezium provides Apache Kafka Connect connectors for the following common databases: Db2 MariaDB MySQL MongoDB Oracle PostgreSQL SQL Server
null
https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/installing_debezium_on_rhel/debezium-overview-debezium
Chapter 3. Getting Red Hat Quay release notifications
Chapter 3. Getting Red Hat Quay release notifications To keep up with the latest Red Hat Quay releases and other changes related to Red Hat Quay, you can sign up for update notifications on the Red Hat Customer Portal . After signing up for notifications, you will receive notifications letting you know when there is a new Red Hat Quay version, updated documentation, or other Red Hat Quay news. Log into the Red Hat Customer Portal with your Red Hat customer account credentials. Select your user name (upper-right corner) to see Red Hat Account and Customer Portal selections: Select Notifications. Your profile activity page appears. Select the Notifications tab. Select Manage Notifications. Select Follow, then choose Products from the drop-down box. From the drop-down box next to Products, search for and select Red Hat Quay: Select the SAVE NOTIFICATION button. Going forward, you will receive notifications when there are changes to the Red Hat Quay product, such as a new release.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/release-notifications
17.8. Managing Tokens Used by the Subsystems
17.8. Managing Tokens Used by the Subsystems Certificate System manages two groups of tokens: tokens used by the subsystems to perform PKI tasks and tokens issued through the subsystem. These management tasks refer specifically to tokens that are used by the subsystems. For information on managing smart card tokens, see Chapter 6, Using and Configuring the Token Management System: TPS and TKS . 17.8.1. Detecting Tokens To see if a token can be detected by Certificate System to be installed or configured, use the TokenInfo utility. This utility will return all tokens which can be detected by the Certificate System, not only tokens which are installed in the Certificate System. 17.8.2. Viewing Tokens To view a list of the tokens currently installed for a Certificate System instance, use the modutil utility. Open the instance alias directory. For example: Show the information about the installed PKCS #11 modules as well as information on the corresponding tokens using the modutil tool. 17.8.3. Changing a Token's Password The token, internal or external, that stores the key pairs and certificates for the subsystems is protected (encrypted) by a password. To decrypt the key pairs or to gain access to them, enter the token password. This password is set when the token is first accessed, usually during Certificate System installation. It is good security practice to change the password that protects the server's keys and certificates periodically. Changing the password minimizes the risk of someone finding out the password. To change a token's password, use the certutil command-line utility. For information about certutil , see http://www.mozilla.org/projects/security/pki/nss/tools/ . The single sign-on password cache stores token passwords in the password.conf file. This file must be manually updated every time the token password is changed. For more information on managing passwords through the password.conf file, see the Red Hat Certificate System Planning, Installation, and Deployment Guide .
[ "TokenInfo /var/lib/pki/ instance_name /alias Database Path: /var/lib/pki/ instance_name /alias Found external module 'NSS Internal PKCS #11 Module'", "cd /var/lib/pki/ instance_name /alias", "modutil -dbdir . -nocertdb -list" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Managing_Tokens_Used_by_the_Subsystems
function::user_char_warn
function::user_char_warn Name function::user_char_warn - Retrieves a char value stored in user space Synopsis Arguments addr the user space address to retrieve the char from Description Returns the char value from a given user space address. Returns zero when the user space data is not accessible, and warns (but does not abort) about the failure.
[ "user_char_warn:long(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-char-warn
Chapter 1. Overview
Chapter 1. Overview Red Hat OpenShift Data Foundation is software-defined storage that is optimized for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. Red Hat OpenShift Data Foundation is integrated into the latest Red Hat OpenShift Container Platform to address platform services, application portability, and persistence challenges. It provides a highly scalable backend for the next generation of cloud-native applications, built on a technology stack that includes Red Hat Ceph Storage, the Rook.io Operator, and NooBaa's Multicloud Object Gateway technology. Red Hat OpenShift Data Foundation is designed for FIPS. When running on RHEL or RHEL CoreOS booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries submitted to NIST for FIPS Validation on only the x86_64, ppc64le, and s390X architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards . Red Hat OpenShift Data Foundation provides a trusted, enterprise-grade application development environment that simplifies and enhances the user experience across the application lifecycle in a number of ways: Provides block storage for databases. Shared file storage for continuous integration, messaging, and data aggregation. Object storage for cloud-first development, archival, backup, and media storage. Scale applications and data exponentially. Attach and detach persistent data volumes at an accelerated rate. Stretch clusters across multiple data-centers or availability zones. Establish a comprehensive application container registry. Support the next generation of OpenShift workloads such as Data Analytics, Artificial Intelligence, Machine Learning, Deep Learning, and Internet of Things (IoT). Dynamically provision not only application containers, but data service volumes and containers, as well as additional OpenShift Container Platform nodes, Elastic Block Store (EBS) volumes and other infrastructure services. 1.1. About this release Red Hat OpenShift Data Foundation 4.14 ( RHSA-2023:6832 ) is now available. New enhancements, features, and known issues that pertain to OpenShift Data Foundation 4.14 are included in this topic. Red Hat OpenShift Data Foundation 4.14 is supported on the Red Hat OpenShift Container Platform version 4.14. For more information, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For Red Hat OpenShift Data Foundation life cycle information, refer to the layered and dependent products life cycle section in Red Hat OpenShift Container Platform Life Cycle Policy .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/4.14_release_notes/overview
Migrating from version 3 to 4
Migrating from version 3 to 4 OpenShift Container Platform 4.14 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team
[ "oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>", "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "oc run test --image registry.redhat.io/ubi9 --command sleep infinity", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", "gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", 
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`", "AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`", "AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`", "cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')", "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')", "podman login registry.redhat.io", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7:/operator.yml ./", "oc replace --force -f operator.yml", "oc scale -n openshift-migration --replicas=0 deployment/migration-operator", "oc scale -n openshift-migration --replicas=1 deployment/migration-operator", "oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "oc create -f controller.yml", "oc sa get-token migration-controller -n openshift-migration", "oc get pods -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "spec: indirectImageMigration: true indirectVolumeMigration: true", "oc replace -f migplan.yaml -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "oc get pv", "oc get pods --all-namespaces | egrep -v 'Running | Completed'", "oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'", "oc get csr -A | grep pending -i", "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc create token migration-controller -n openshift-migration", 
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry", "az group list", "{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" },", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.8):/crane ./", "oc config view", "crane tunnel-api [--namespace <namespace>] --destination-context <destination-cluster> --source-context <source-cluster>", "crane tunnel-api --namespace my_tunnel --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin --source-context default/192-168-122-171-nip-io:8443/admin", "oc get po -n <namespace>", "NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s", "oc logs -f -n <namespace> <pod_name> -c openvpn", "oc get service -n <namespace>", "oc sa get-token -n openshift-migration migration-controller", "oc create route passthrough --service=docker-registry -n default", "oc create route passthrough --service=image-registry -n openshift-image-registry", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF", "oc sa get-token migration-controller -n openshift-migration | base64 -w 0", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF", "oc describe MigCluster <cluster>", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF", "echo -n \"<key>\" | base64 -w 0 1", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF", "oc describe migstorage <migstorage>", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF", "oc describe migplan <migplan> -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF", "oc watch migmigration <migmigration> -n openshift-migration", "Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required 
Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47", "- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces", "- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"", "- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail", "- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"", "oc edit migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2", "oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1", "name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims", "spec: namespaces: - namespace_2 - namespace_1:namespace_2", "spec: namespaces: - namespace_1:namespace_1", "spec: namespaces: - namespace_1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip", "apiVersion: 
migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false", "oc edit migrationcontroller -n openshift-migration", "mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: 
<image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config", "apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12", "apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: 
<snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11", "oc -n openshift-migration get pods | grep log", "oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "oc get migmigration <migmigration> -o yaml", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. 
reason: Completed status: \"True\" type: SucceededWithWarnings", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>", "Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>", "time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "oc get migmigration -n openshift-migration", "NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s", "oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration", "name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0", "apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15", "podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>", "podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>", "podman pull <registry_url>:<port>/openshift/<image>", "oc get bc --all-namespaces --template='range .items \"BuildConfig:\" .metadata.namespace/.metadata.name => \"\\t\"\"ImageStream(FROM):\" .spec.strategy.sourceStrategy.from.namespace/.spec.strategy.sourceStrategy.from.name \"\\t\"\"ImageStream(TO):\" .spec.output.to.namespace/.spec.output.to.name end'", "podman tag 
<registry_url>:<port>/openshift/<image> \\ 1 <registry_url>:<port>/openshift/<image> 2", "podman push <registry_url>:<port>/openshift/<image> 1", "oc get imagestream -n openshift | grep <image>", "NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 32 seconds ago", "oc describe migmigration <pod> -n openshift-migration", "Some or all transfer pods are not running for more than 10 mins on destination cluster", "oc get namespace <namespace> -o yaml 1", "oc edit namespace <namespace>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"", "echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2", "oc logs <Velero_Pod> -n openshift-migration", "level=error msg=\"Error checking repository for stale locks\" error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"", "level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1", "spec: restic_timeout: 1h 1", "status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2", "oc describe <registry-example-migration-rvwcm> -n openshift-migration", "status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration", "oc describe <migration-example-rvwcm-98t49>", "completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>", "oc logs -f <restic-nr2v5>", "backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration", "spec: restic_supplemental_groups: <group_id> 1", "spec: restic_supplemental_groups: - 5555 - 6666", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF", "oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1", "oc scale deployment <deployment> --replicas=<premigration_replicas>", "apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"", "oc get pod -n <namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/migrating_from_version_3_to_4/index
3.2.2. RT Tunable Parameters
3.2.2. RT Tunable Parameters The RT scheduler works in a similar way to the ceiling enforcement control of the CFS (for more information, refer to Section 3.2.1, "CFS Tunable Parameters" ) but limits CPU access to real-time tasks only. The amount of time for which a real-time task can access the CPU is decided by allocating a run time and a period for each cgroup. All tasks in a cgroup are then allowed to access the CPU for one run time within each period (for example, tasks in a cgroup can be allowed to run for 0.1 seconds in every 1 second). Note that the following parameters take into account all available logical CPUs; the access times specified with these parameters are therefore multiplied by the number of CPUs. cpu.rt_period_us applicable to real-time scheduling tasks only, this parameter specifies a period of time in microseconds (µs, represented here as " us ") for how regularly a cgroup's access to CPU resources is reallocated. cpu.rt_runtime_us applicable to real-time scheduling tasks only, this parameter specifies a period of time in microseconds (µs, represented here as " us ") for the longest continuous period in which the tasks in a cgroup have access to CPU resources. Establishing this limit prevents tasks in one cgroup from monopolizing CPU time. As mentioned above, the access times are multiplied by the number of logical CPUs. For example, setting cpu.rt_runtime_us to 200000 and cpu.rt_period_us to 1000000 translates to the tasks in a cgroup being able to access a single CPU for 0.4 seconds out of every 1 second on systems with two CPUs (0.2 x 2), or 0.8 seconds on systems with four CPUs (0.2 x 4).
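As a minimal sketch of how these parameters are set, the following commands create a child cgroup and apply the values from the example above; they assume the cpu controller is mounted at /cgroup/cpu and use a hypothetical cgroup named rt_tasks and a placeholder <pid>, so adjust the paths and the process ID for your system.
mkdir /cgroup/cpu/rt_tasks
echo 1000000 > /cgroup/cpu/rt_tasks/cpu.rt_period_us   # reallocation period of 1 second
echo 200000 > /cgroup/cpu/rt_tasks/cpu.rt_runtime_us   # 0.2 seconds of run time per period, per logical CPU
echo <pid> > /cgroup/cpu/rt_tasks/tasks                # move a real-time task into the cgroup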
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sect-rt_options
Chapter 12. NodeMetrics [metrics.k8s.io/v1beta1]
Chapter 12. NodeMetrics [metrics.k8s.io/v1beta1] Description NodeMetrics sets resource usage metrics of a node. Type object Required timestamp window usage 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata timestamp Time The timestamp and window fields define the time interval from which metrics were collected, that is, [Timestamp-Window, Timestamp]. usage object (Quantity) The memory usage is the memory working set. window Duration 12.2. API endpoints The following API endpoints are available: /apis/metrics.k8s.io/v1beta1/nodes GET : list objects of kind NodeMetrics /apis/metrics.k8s.io/v1beta1/nodes/{name} GET : read the specified NodeMetrics 12.2.1. /apis/metrics.k8s.io/v1beta1/nodes HTTP method GET Description list objects of kind NodeMetrics Table 12.1. HTTP responses HTTP code Response body 200 - OK NodeMetricsList schema 12.2.2. /apis/metrics.k8s.io/v1beta1/nodes/{name} Table 12.2. Global path parameters Parameter Type Description name string name of the NodeMetrics HTTP method GET Description read the specified NodeMetrics Table 12.3. HTTP responses HTTP code Response body 200 - OK NodeMetrics schema
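As a quick illustration of the endpoints listed above, the following commands read them directly through the OpenShift CLI; the jq filter is optional and assumes jq is installed, and <node_name> is a placeholder for one of your nodes.
# List NodeMetrics for all nodes through the aggregated metrics API.
oc get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq '.items[] | {node: .metadata.name, usage: .usage, window: .window}'
# Read the NodeMetrics object for a single node.
oc get --raw /apis/metrics.k8s.io/v1beta1/nodes/<node_name>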
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring_apis/nodemetrics-metrics-k8s-io-v1beta1
Managing networking infrastructure services
Managing networking infrastructure services Red Hat Enterprise Linux 9 A guide to managing networking infrastructure services in Red Hat Enterprise Linux 9 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_networking_infrastructure_services/index
Chapter 5. Configuring JVM memory usage
Chapter 5. Configuring JVM memory usage Control how Data Grid stores data in JVM memory by: Managing JVM memory usage with eviction that automatically removes data from caches. Adding lifespan and maximum idle times to expire entries and prevent stale data. Configuring Data Grid to store data in off-heap, native memory. 5.1. Default memory configuration By default Data Grid stores cache entries as objects in the JVM heap. Over time, as applications add entries, the size of caches can exceed the amount of memory that is available to the JVM. Likewise, if Data Grid is not the primary data store, then entries become out of date which means your caches contain stale data. XML <distributed-cache> <memory storage="HEAP"/> </distributed-cache> JSON { "distributed-cache": { "memory" : { "storage": "HEAP" } } } YAML distributedCache: memory: storage: "HEAP" 5.2. Eviction and expiration Eviction and expiration are two strategies for cleaning the data container by removing old, unused entries. Although eviction and expiration are similar, they have some important differences. [✓] Eviction lets Data Grid control the size of the data container by removing entries when the container becomes larger than a configured threshold. [✓] Expiration limits the amount of time entries can exist. Data Grid uses a scheduler to periodically remove expired entries. Entries that are expired but not yet removed are immediately removed on access; in this case get() calls for expired entries return "null" values. [✓] Eviction is local to Data Grid nodes. [✓] Expiration takes place across Data Grid clusters. [✓] You can use eviction and expiration together or independently of each other. [✓] You can configure eviction and expiration declaratively in infinispan.xml to apply cache-wide defaults for entries. [✓] You can explicitly define expiration settings for specific entries but you cannot define eviction on a per-entry basis. [✓] You can manually evict entries and manually trigger expiration. 5.3. Eviction with Data Grid caches Eviction lets you control the size of the data container by removing entries from memory in one of two ways: Total number of entries ( max-count ). Maximum amount of memory ( max-size ). Eviction drops one entry from the data container at a time and is local to the node on which it occurs. Important Eviction removes entries from memory but not from persistent cache stores. To ensure that entries remain available after Data Grid evicts them, and to prevent inconsistencies with your data, you should configure persistent storage. When you configure memory , Data Grid approximates the current memory usage of the data container. When entries are added or modified, Data Grid compares the current memory usage of the data container to the maximum size. If the size exceeds the maximum, Data Grid performs eviction. Eviction happens immediately in the thread that adds an entry that exceeds the maximum size. 5.3.1. Eviction strategies When you configure Data Grid eviction you specify: The maximum size of the data container. A strategy for removing entries when the cache reaches the threshold. You can either perform eviction manually or configure Data Grid to do one of the following: Remove old entries to make space for new ones. Throw ContainerFullException and prevent new entries from being created. The exception eviction strategy works only with transactional caches that use 2 phase commits; not with 1 phase commits or synchronization optimizations. 
Refer to the schema reference for more details about the eviction strategies. Note Data Grid includes the Caffeine caching library that implements a variation of the Least Frequently Used (LFU) cache replacement algorithm known as TinyLFU. For off-heap storage, Data Grid uses a custom implementation of the Least Recently Used (LRU) algorithm. Additional resources Caffeine Data Grid configuration schema reference 5.3.2. Configuring maximum count eviction Limit the size of Data Grid caches to a total number of entries. Procedure Open your Data Grid configuration for editing. Specify the total number of entries that caches can contain before Data Grid performs eviction with either the max-count attribute or maxCount() method. Set one of the following as the eviction strategy to control how Data Grid removes entries with the when-full attribute or whenFull() method. REMOVE Data Grid performs eviction. This is the default strategy. MANUAL You perform eviction manually for embedded caches. EXCEPTION Data Grid throws an exception instead of evicting entries. Save and close your Data Grid configuration. Maximum count eviction In the following example, Data Grid removes an entry when the cache contains a total of 500 entries and a new entry is created: XML <distributed-cache> <memory max-count="500" when-full="REMOVE"/> </distributed-cache> JSON { "distributed-cache" : { "memory" : { "max-count" : "500", "when-full" : "REMOVE" } } } YAML distributedCache: memory: maxCount: "500" whenFull: "REMOVE" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.memory().maxCount(500).whenFull(EvictionStrategy.REMOVE); Additional resources Data Grid configuration schema reference org.infinispan.configuration.cache.MemoryConfigurationBuilder 5.3.3. Configuring maximum size eviction Limit the size of Data Grid caches to a maximum amount of memory. Procedure Open your Data Grid configuration for editing. Specify application/x-protostream as the media type for cache encoding. You must specify a binary media type to use maximum size eviction. Configure the maximum amount of memory, in bytes, that caches can use before Data Grid performs eviction with the max-size attribute or maxSize() method. Optionally specify a byte unit of measurement. The default is B (bytes). Refer to the configuration schema for supported units. Set one of the following as the eviction strategy to control how Data Grid removes entries with either the when-full attribute or whenFull() method. REMOVE Data Grid performs eviction. This is the default strategy. MANUAL You perform eviction manually for embedded caches. EXCEPTION Data Grid throws an exception instead of evicting entries. Save and close your Data Grid configuration. 
Maximum size eviction In the following example, Data Grid removes an entry when the size of the cache reaches 1.5 GB (gigabytes) and a new entry is created: XML <distributed-cache> <encoding media-type="application/x-protostream"/> <memory max-size="1.5GB" when-full="REMOVE"/> </distributed-cache> JSON { "distributed-cache" : { "encoding" : { "media-type" : "application/x-protostream" }, "memory" : { "max-size" : "1.5GB", "when-full" : "REMOVE" } } } YAML distributedCache: encoding: mediaType: "application/x-protostream" memory: maxSize: "1.5GB" whenFull: "REMOVE" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.encoding().mediaType("application/x-protostream") .memory() .maxSize("1.5GB") .whenFull(EvictionStrategy.REMOVE); Additional resources Data Grid configuration schema reference org.infinispan.configuration.cache.EncodingConfiguration org.infinispan.configuration.cache.MemoryConfigurationBuilder Cache Encoding and Marshalling 5.3.4. Manual eviction If you choose the manual eviction strategy, Data Grid does not perform eviction. You must do so manually with the evict() method. You should use manual eviction with embedded caches only. For remote caches, you should always configure Data Grid with the REMOVE or EXCEPTION eviction strategy. Note This configuration prevents a warning message when you enable passivation but do not configure eviction. XML <distributed-cache> <memory max-count="500" when-full="MANUAL"/> </distributed-cache> JSON { "distributed-cache" : { "memory" : { "max-count" : "500", "when-full" : "MANUAL" } } } YAML distributedCache: memory: maxCount: "500" whenFull: "MANUAL" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.memory().maxCount(500).whenFull(EvictionStrategy.MANUAL); 5.3.5. Passivation with eviction Passivation persists data to cache stores when Data Grid evicts entries. You should always enable eviction if you enable passivation, as in the following examples: XML <distributed-cache> <persistence passivation="true"> <!-- Persistent storage configuration. --> </persistence> <memory max-count="100"/> </distributed-cache> JSON { "distributed-cache": { "memory" : { "max-count" : "100" }, "persistence" : { "passivation" : true } } } YAML distributedCache: memory: maxCount: "100" persistence: passivation: "true" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.memory().maxCount(100); builder.persistence().passivation(true); //Persistent storage configuration 5.4. Expiration with lifespan and maximum idle Expiration configures Data Grid to remove entries from caches when they reach one of the following time limits: Lifespan Sets the maximum amount of time that entries can exist. Maximum idle Specifies how long entries can remain idle. If operations do not occur for entries, they become idle. Important Maximum idle expiration does not currently support caches with persistent storage. Note If you use expiration and eviction with the EXCEPTION eviction strategy, entries that are expired, but not yet removed from the cache, count towards the size of the data container. 5.4.1. How expiration works When you configure expiration, Data Grid stores keys with metadata that determines when entries expire. Lifespan uses a creation timestamp and the value for the lifespan configuration property. Maximum idle uses a last used timestamp and the value for the max-idle configuration property.
Data Grid checks if lifespan or maximum idle metadata is set and then compares the values with the current time. If (creation + lifespan < currentTime) or (lastUsed + maxIdle < currentTime) then Data Grid detects that the entry is expired. Expiration occurs whenever entries are accessed or found by the expiration reaper. For example, k1 reaches the maximum idle time and a client makes a Cache.get(k1) request. In this case, Data Grid detects that the entry is expired and removes it from the data container. The Cache.get(k1) request returns null . Data Grid also expires entries from cache stores, but only with lifespan expiration. Maximum idle expiration does not work with cache stores. In the case of cache loaders, Data Grid cannot expire entries because loaders can only read from external storage. Note Data Grid adds expiration metadata as long primitive data types to cache entries. This can increase the size of keys by as much as 32 bytes. 5.4.2. Expiration reaper Data Grid uses a reaper thread that runs periodically to detect and remove expired entries. The expiration reaper ensures that expired entries that are no longer accessed are removed. The Data Grid ExpirationManager interface handles the expiration reaper and exposes the processExpiration() method. In some cases, you can disable the expiration reaper and manually expire entries by calling processExpiration() ; for instance, if you are using local cache mode with a custom application where a maintenance thread runs periodically. Important If you use clustered cache modes, you should never disable the expiration reaper. Data Grid always uses the expiration reaper when using cache stores. In this case you cannot disable it. Additional resources org.infinispan.configuration.cache.ExpirationConfigurationBuilder org.infinispan.expiration.ExpirationManager 5.4.3. Maximum idle and clustered caches Because maximum idle expiration relies on the last access time for cache entries, it has some limitations with clustered cache modes. With lifespan expiration, the creation time for cache entries provides a value that is consistent across clustered caches. For example, the creation time for k1 is always the same on all nodes. For maximum idle expiration with clustered caches, last access time for entries is not always the same on all nodes. To ensure that entries have the same relative access times across clusters, Data Grid sends touch commands to all owners when keys are accessed. The touch commands that Data Grid send have the following considerations: Cache.get() requests do not return until all touch commands complete. This synchronous behavior increases latency of client requests. The touch command also updates the "recently accessed" metadata for cache entries on all owners, which Data Grid uses for eviction. Additional information Maximum idle expiration does not work with invalidation mode. Iteration across a clustered cache can return expired entries that have exceeded the maximum idle time limit. This behavior ensures performance because no remote invocations are performed during the iteration. Also note that iteration does not refresh any expired entries. 5.4.4. Configuring lifespan and maximum idle times for caches Set lifespan and maximum idle times for all entries in a cache. Procedure Open your Data Grid configuration for editing. Specify the amount of time, in milliseconds, that entries can stay in the cache with the lifespan attribute or lifespan() method. 
Specify the amount of time, in milliseconds, that entries can remain idle after last access with the max-idle attribute or maxIdle() method. Save and close your Data Grid configuration. Expiration for Data Grid caches In the following example, Data Grid expires all cache entries after 5 seconds or 1 second after the last access time, whichever happens first: XML <replicated-cache> <expiration lifespan="5000" max-idle="1000" /> </replicated-cache> JSON { "replicated-cache" : { "expiration" : { "lifespan" : "5000", "max-idle" : "1000" } } } YAML replicatedCache: expiration: lifespan: "5000" maxIdle: "1000" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.expiration().lifespan(5000, TimeUnit.MILLISECONDS) .maxIdle(1000, TimeUnit.MILLISECONDS); 5.4.5. Configuring lifespan and maximum idle times per entry Specify lifespan and maximum idle times for individual entries. When you add lifespan and maximum idle times to entries, those values take priority over expiration configuration for caches. Note When you explicitly define lifespan and maximum idle time values for cache entries, Data Grid replicates those values across the cluster along with the cache entries. Likewise, Data Grid writes expiration values along with the entries to persistent storage. Procedure For remote caches, you can add lifespan and maximum idle times to entries interactively with the Data Grid Console. With the Data Grid Command Line Interface (CLI), use the --max-idle= and --ttl= arguments with the put command. For both remote and embedded caches, you can add lifespan and maximum idle times with cache.put() invocations. //Lifespan of 5 seconds. //Maximum idle time of 1 second. cache.put("hello", "world", 5, TimeUnit.SECONDS, 1, TimeUnit.SECONDS); //Lifespan is disabled with a value of -1. //Maximum idle time of 1 second. cache.put("hello", "world", -1, TimeUnit.SECONDS, 1, TimeUnit.SECONDS); Additional resources org.infinispan.configuration.cache.ExpirationConfigurationBuilder org.infinispan.expiration.ExpirationManager 5.5. JVM heap and off-heap memory Data Grid stores cache entries in JVM heap memory by default. You can configure Data Grid to use off-heap storage, which means that your data occupies native memory outside the managed JVM memory space. The following diagram is a simplified illustration of the memory space for a JVM process where Data Grid is running: Figure 5.1. JVM memory space JVM heap memory The heap is divided into young and old generations that help keep referenced Java objects and other application data in memory. The GC process reclaims space from unreachable objects, running more frequently on the young generation memory pool. When Data Grid stores cache entries in JVM heap memory, GC runs can take longer to complete as you start adding data to your caches. Because GC is an intensive process, longer and more frequent runs can degrade application performance. Off-heap memory Off-heap memory is native available system memory outside JVM memory management. The JVM memory space diagram shows the Metaspace memory pool that holds class metadata and is allocated from native memory. The diagram also represents a section of native memory that holds Data Grid cache entries. Off-heap memory: Uses less memory per entry. Improves overall JVM performance by avoiding Garbage Collector (GC) runs. One disadvantage, however, is that JVM heap dumps do not show entries stored in off-heap memory. 5.5.1. 
Off-heap data storage When you add entries to off-heap caches, Data Grid dynamically allocates native memory to your data. Data Grid hashes the serialized byte[] for each key into buckets that are similar to a standard Java HashMap . Buckets include address pointers that Data Grid uses to locate entries that you store in off-heap memory. Important Even though Data Grid stores cache entries in native memory, run-time operations require JVM heap representations of those objects. For instance, cache.get() operations read objects into heap memory before returning. Likewise, state transfer operations hold subsets of objects in heap memory while they take place. Object equality Data Grid determines equality of Java objects in off-heap storage using the serialized byte[] representation of each object instead of the object instance. Data consistency Data Grid uses an array of locks to protect off-heap address spaces. The number of locks is twice the number of cores and then rounded to the nearest power of two. This ensures that there is an even distribution of ReadWriteLock instances to prevent write operations from blocking read operations. 5.5.2. Configuring off-heap memory Configure Data Grid to store cache entries in native memory outside the JVM heap space. Procedure Open your Data Grid configuration for editing. Set OFF_HEAP as the value for the storage attribute or storage() method. Set a boundary for the size of the cache by configuring eviction. Save and close your Data Grid configuration. Off-heap storage Data Grid stores cache entries as bytes in native memory. Eviction happens when there are 100 entries in the data container and Data Grid gets a request to create a new entry: XML <replicated-cache> <memory storage="OFF_HEAP" max-count="500"/> </replicated-cache> JSON { "replicated-cache" : { "memory" : { "storage" : "OFF_HEAP", "max-count" : "500" } } } YAML replicatedCache: memory: storage: "OFF_HEAP" maxCount: "500" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.memory().storage(StorageType.OFF_HEAP).maxCount(500); Additional resources Data Grid configuration schema reference org.infinispan.configuration.cache.MemoryConfigurationBuilder
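Section 5.4.5 notes that the Data Grid CLI accepts --ttl= and --max-idle= arguments with the put command. The following session is a rough sketch only: the cache name and key are made up, and the assumption that both arguments take values in seconds should be verified with help put for your CLI version.
# Connect to a running Data Grid Server, then store an entry with a lifespan and a maximum idle time.
connect
put --cache=mycache --ttl=5 --max-idle=1 hello world
# Reading the entry after it expires returns no value.
get --cache=mycache hello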
[ "<distributed-cache> <memory storage=\"HEAP\"/> </distributed-cache>", "{ \"distributed-cache\": { \"memory\" : { \"storage\": \"HEAP\" } } }", "distributedCache: memory: storage: \"HEAP\"", "<distributed-cache> <memory max-count=\"500\" when-full=\"REMOVE\"/> </distributed-cache>", "{ \"distributed-cache\" : { \"memory\" : { \"max-count\" : \"500\", \"when-full\" : \"REMOVE\" } } }", "distributedCache: memory: maxCount: \"500\" whenFull: \"REMOVE\"", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.memory().maxCount(500).whenFull(EvictionStrategy.REMOVE);", "<distributed-cache> <encoding media-type=\"application/x-protostream\"/> <memory max-size=\"1.5GB\" when-full=\"REMOVE\"/> </distributed-cache>", "{ \"distributed-cache\" : { \"encoding\" : { \"media-type\" : \"application/x-protostream\" }, \"memory\" : { \"max-size\" : \"1.5GB\", \"when-full\" : \"REMOVE\" } } }", "distributedCache: encoding: mediaType: \"application/x-protostream\" memory: maxSize: \"1.5GB\" whenFull: \"REMOVE\"", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.encoding().mediaType(\"application/x-protostream\") .memory() .maxSize(\"1.5GB\") .whenFull(EvictionStrategy.REMOVE);", "<distributed-cache> <memory max-count=\"500\" when-full=\"MANUAL\"/> </distributed-cache>", "{ \"distributed-cache\" : { \"memory\" : { \"max-count\" : \"500\", \"when-full\" : \"MANUAL\" } } }", "distributedCache: memory: maxCount: \"500\" whenFull: \"MANUAL\"", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.encoding().mediaType(\"application/x-protostream\") .memory() .maxSize(\"1.5GB\") .whenFull(EvictionStrategy.REMOVE);", "<distributed-cache> <persistence passivation=\"true\"> <!-- Persistent storage configuration. --> </persistence> <memory max-count=\"100\"/> </distributed-cache>", "{ \"distributed-cache\": { \"memory\" : { \"max-count\" : \"100\" }, \"persistence\" : { \"passivation\" : true } } }", "distributedCache: memory: maxCount: \"100\" persistence: passivation: \"true\"", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.memory().maxCount(100); builder.persistence().passivation(true); //Persistent storage configuration", "<replicated-cache> <expiration lifespan=\"5000\" max-idle=\"1000\" /> </replicated-cache>", "{ \"replicated-cache\" : { \"expiration\" : { \"lifespan\" : \"5000\", \"max-idle\" : \"1000\" } } }", "replicatedCache: expiration: lifespan: \"5000\" maxIdle: \"1000\"", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.expiration().lifespan(5000, TimeUnit.MILLISECONDS) .maxIdle(1000, TimeUnit.MILLISECONDS);", "//Lifespan of 5 seconds. //Maximum idle time of 1 second. cache.put(\"hello\", \"world\", 5, TimeUnit.SECONDS, 1, TimeUnit.SECONDS); //Lifespan is disabled with a value of -1. //Maximum idle time of 1 second. cache.put(\"hello\", \"world\", -1, TimeUnit.SECONDS, 1, TimeUnit.SECONDS);", "<replicated-cache> <memory storage=\"OFF_HEAP\" max-count=\"500\"/> </replicated-cache>", "{ \"replicated-cache\" : { \"memory\" : { \"storage\" : \"OFF_HEAP\", \"max-count\" : \"500\" } } }", "replicatedCache: memory: storage: \"OFF_HEAP\" maxCount: \"500\"", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.memory().storage(StorageType.OFF_HEAP).maxCount(500);" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/configuring_data_grid_caches/configuring-memory-usage
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/providing-feedback
Chapter 14. Allowing JavaScript-based access to the API server from additional hosts
Chapter 14. Allowing JavaScript-based access to the API server from additional hosts 14.1. Allowing JavaScript-based access to the API server from additional hosts The default OpenShift Container Platform configuration only allows the web console to send requests to the API server. If you need to access the API server or OAuth server from a JavaScript application using a different hostname, you can configure additional hostnames to allow. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver.config.openshift.io cluster Add the additionalCORSAllowedOrigins field under the spec section and specify one or more additional hostnames: apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-07-11T17:35:37Z" generation: 1 name: cluster resourceVersion: "907" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\.subdomain\.domain\.com(:|\z) 1 1 The hostname is specified as a Golang regular expression that matches against CORS headers from HTTP requests against the API server and OAuth server. Note This example uses the following syntax: The (?i) makes it case-insensitive. The // pins to the beginning of the domain and matches the double slash following http: or https: . The \. escapes dots in the domain name. The (:|\z) matches the end of the domain name (\z) or a port separator (:) . Save the file to apply the changes.
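As a rough way to confirm that an added origin is honored, send a request that carries an Origin header matching the new entry and check the response for CORS headers; the hostname below corresponds to the example regular expression, and using oc whoami --show-server to discover the API URL is an assumption about your environment.
API_URL="$(oc whoami --show-server)"
# A matching origin should be reflected in an Access-Control-Allow-Origin response header.
curl -k -I -H "Origin: https://my.subdomain.domain.com" "${API_URL}/version"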
[ "oc edit apiserver.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-07-11T17:35:37Z\" generation: 1 name: cluster resourceVersion: \"907\" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\\.subdomain\\.domain\\.com(:|\\z) 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/allowing-javascript-based-access-api-server
Chapter 3. Installing a cluster on IBM Power in a restricted network
Chapter 3. Installing a cluster on IBM Power in a restricted network In OpenShift Container Platform version 4.15, you can install a cluster on IBM Power(R) infrastructure that you provision in a restricted network. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry for installation in a restricted network and obtained the imageContentSources data for your version of OpenShift Container Platform. Before you begin the installation process, you must move or remove any existing installation files. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are performed on a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 3.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 3.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 3.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 3.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 3.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 3.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.4.3. 
Minimum IBM Power requirements You can install OpenShift Container Platform version 4.15 on the following IBM(R) hardware: IBM Power(R)9 or IBM Power(R)10 processor-based systems Note Support for RHCOS functionality for all IBM Power(R)8 models, IBM Power(R) AC922, IBM Power(R) IC922, and IBM Power(R) LC922 is deprecated in OpenShift Container Platform 4.15. Red Hat recommends that you use later hardware models. Hardware requirements Six logical partitions (LPARs) across multiple PowerVM servers Operating system requirements One instance of an IBM Power(R)9 or Power10 processor-based system On your IBM Power(R) instance, set up: Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One LPAR for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM(R) vNIC Storage / main memory 100 GB / 16 GB for OpenShift Container Platform control plane machines 100 GB / 8 GB for OpenShift Container Platform compute machines 100 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 3.4.4. Recommended IBM Power system requirements Hardware requirements Six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power(R)9 or IBM Power(R)10 processor-based system On your IBM Power(R) instance, set up: Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One LPAR for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Virtualized by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM(R) vNIC Storage / main memory 120 GB / 32 GB for OpenShift Container Platform control plane machines 120 GB / 32 GB for OpenShift Container Platform compute machines 120 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 3.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 3.4.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 
During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 3.4.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.4.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 3.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.4. 
Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 3.4.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSRs) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 3.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 3.4.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 3.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 
8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 3.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 3.4.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. 
X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. After /readyz starts returning an error, or returns to a healthy state, the endpoint must be removed from, or added back to, the pool within that time frame. Probing every 5 or 10 seconds, with two successful requests to mark an endpoint healthy and three failed requests to mark it unhealthy, is a well-tested configuration. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.4.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.3.
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
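For example, if a Red Hat Enterprise Linux (RHEL) host in the data path, such as the load balancer host, runs firewalld, you might open the required ports with firewall-cmd. This is a minimal sketch that assumes the default firewalld zone applies; adjust the zone and the port list to match your topology and the port tables in the Networking requirements for user-provisioned infrastructure section:
USD sudo firewall-cmd --permanent --add-port=6443/tcp   # Kubernetes API server
USD sudo firewall-cmd --permanent --add-port=22623/tcp  # Machine config server
USD sudo firewall-cmd --permanent --add-port=443/tcp    # HTTPS application ingress
USD sudo firewall-cmd --permanent --add-port=80/tcp     # HTTP application ingress
USD sudo firewall-cmd --reload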
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 3.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 3.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
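For example, you can check whether the required identity is already loaded before you add it. This assumes that an agent is reachable through the SSH_AUTH_SOCK environment variable:
USD ssh-add -l
If the command lists the fingerprint of your key, no further action is needed. If it reports that the agent has no identities, or that it cannot connect to the agent, continue with the following steps.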
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Power(R) 3.8.1. Sample install-config.yaml file for IBM Power You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 
2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Simultaneous multithreading (SMT) is not supported. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Power(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Provide the contents of the certificate file that you used for your mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . 3.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.8.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. 
When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 3.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. 
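For example, if the defaults described in these tables do not suit your environment, you can set defaultNetwork fields by adding a manifest after you run the openshift-install create manifests command that is described later in this document. The following is a sketch only; the file name cluster-network-03-config.yml is a common convention rather than a requirement, and the mtu and genevePort values shown are illustrative:
USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 1400        # override the detected MTU only if it is not appropriate for your network
      genevePort: 6081 # default Geneve port
EOF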
Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 3.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. 
The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 3.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 3.18. 
ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 3.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 3.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program (without an architecture postfix) runs on ppc64le only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. 
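For example, if you have not yet unpacked the installation program on your provisioning host, you can extract it and confirm its version before you begin. The archive file name shown here is illustrative; use the ppc64le archive that matches the OpenShift Container Platform release that you are installing:
USD tar -xvf openshift-install-linux.tar.gz
USD ./openshift-install version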
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 3.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Power(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Follow either the steps to use an ISO image or network PXE booting to install RHCOS on the machines. 3.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. 
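For example, if the HTTP server is an Apache httpd instance, you might copy the Ignition config files into its web root and make them readable by the web server. The host name and web root shown here are assumptions; substitute the values for your environment:
USD scp <installation_directory>/*.ign webserver.example.com:/var/www/html/
USD ssh webserver.example.com 'chmod 644 /var/www/html/*.ign'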
Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. 
<digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.11.1.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 3.11.1.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. 
The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. 
In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. 
The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none 3.11.2. Installing RHCOS by using PXE booting You can use PXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
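A hedged shorthand for the validation step above, assuming the three standard Ignition file names and the same <HTTP_server> placeholder; the -f flag makes curl fail on HTTP errors so missing files are reported clearly:
for ignition_file in bootstrap master worker; do
  # -s silent, -k skip TLS verification, -f fail on HTTP errors
  if curl -skf "http://<HTTP_server>/${ignition_file}.ign" -o /dev/null; then
    echo "${ignition_file}.ign is reachable"
  else
    echo "${ignition_file}.ign is NOT reachable"
  fi
done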
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE installation for the RHCOS images and begin the installation. Modify the following example menu entry for your environment and verify that the image and Ignition files are properly accessible: 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 
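Putting the callouts together, a filled-in menu entry might look like the following sketch. The HTTP server address 192.168.1.2, the eno1 interface, and the ppc64le artifact names are assumptions for illustration only; substitute the values for your environment:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
  KERNEL http://192.168.1.2/rhcos-<version>-live-kernel-ppc64le
  APPEND initrd=http://192.168.1.2/rhcos-<version>-live-initramfs.ppc64le.img coreos.live.rootfs_url=http://192.168.1.2/rhcos-<version>-live-rootfs.ppc64le.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://192.168.1.2/bootstrap.ign ip=eno1:dhcp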
You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.11.3. Enabling multipathing with kernel arguments on RHCOS In OpenShift Container Platform version 4.15, during installation, you can enable multipathing for provisioned nodes. RHCOS supports multipathing on the primary disk. Multipathing provides added benefits of stronger resilience to hardware failure to achieve higher host availability. During the initial cluster creation, you might want to add kernel arguments to all master or worker nodes. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. Create a machine config file. 
For example, create a 99-master-kargs-mpath.yaml that instructs the cluster to add the master label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' To enable multipathing on worker nodes: Create a machine config file. For example, create a 99-worker-kargs-mpath.yaml that instructs the cluster to add the worker label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' You can now continue on to create the cluster. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . In case of MPIO failure, use the bootlist command to update the boot device list with alternate logical device names. The command displays a boot list and it designates the possible boot devices for when the system is booted in normal mode. To display a boot list and specify the possible boot devices if the system is booted in normal mode, enter the following command: USD bootlist -m normal -o sda To update the boot list for normal mode and add alternate device names, enter the following command: USD bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde If the original boot disk path is down, the node reboots from the alternate device registered in the normal boot device list. 3.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. 
Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 3.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. 
Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 3.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. 3.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 3.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.15.2.1. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change managementState Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 3.15.2.2. 
Configuring registry storage for IBM Power As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Power(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run oc edit configs.imageregistry/cluster and change the line managementState: Removed to managementState: Managed . 3.15.2.3. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
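Returning briefly to the registry storage configuration in the previous section: if you create the claim yourself rather than leaving the claim field blank, a persistent volume claim along the lines of the following hedged sketch would meet the stated requirements. The name and namespace follow the text above; the storageClassName is an assumption and must match a storage class available in your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage          # name referenced by the Image Registry Operator
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany                      # required for two or more registry replicas
  resources:
    requests:
      storage: 100Gi                     # capacity requirement noted above
  storageClassName: ocs-storagecluster-cephfs   # assumption: an OpenShift Data Foundation file storage class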
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. Additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. 3.17. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster .
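Complementing the pod check above, a hedged final verification (standard oc invocations, not specific to this document) waits for every cluster Operator to become available and lists any pods that are neither running nor completed:
# Wait up to 15 minutes for all cluster Operators to report Available=True
oc wait clusteroperators --all --for=condition=Available=True --timeout=15m
# List pods that are not Running and not Succeeded
oc get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded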
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" 
\"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 
worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 
4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_power/installing-restricted-networks-ibm-power
9.3. Payment Card Industry Data Security Standard (PCI DSS)
9.3. Payment Card Industry Data Security Standard (PCI DSS) From https://www.pcisecuritystandards.org/ : The PCI Security Standards Council is an open global forum, launched in 2006, that is responsible for the development, management, education, and awareness of the PCI Security Standards, including the Data Security Standard (DSS). You can download the PCI DSS standard from https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-payment_card_industry_data_security_standard
Appendix B. Applying custom configuration to Red Hat Satellite
Appendix B. Applying custom configuration to Red Hat Satellite When you install and configure Satellite for the first time using satellite-installer , you can specify that the DNS and DHCP configuration files are not to be managed by Puppet using the installer flags --foreman-proxy-dns-managed=false and --foreman-proxy-dhcp-managed=false . If these flags are not specified during the initial installer run, rerunning the installer, for example for an upgrade, overwrites all manual changes. If changes are overwritten, you must run the restore procedure to restore the manual changes. For more information, see Restoring Manual Changes Overwritten by a Puppet Run . To view all installer flags available for custom configuration, run satellite-installer --scenario satellite --full-help . Some Puppet classes are not exposed to the Satellite installer. To manage them manually and prevent the installer from overwriting their values, specify the configuration values by adding entries to the configuration file /etc/foreman-installer/custom-hiera.yaml . This configuration file is in YAML format, with one entry per line in the format <puppet class>::<parameter name>: <value> . Configuration values specified in this file persist across installer reruns. Common examples include: For Apache, to set the ServerTokens directive to only return the Product name: To turn off the Apache server signature entirely: The Puppet modules for the Satellite installer are stored under /usr/share/foreman-installer/modules . Check the .pp files (for example: moduleName/manifests/example.pp) to look up the classes, parameters, and values. Alternatively, use the grep command to do keyword searches. Setting some values might have unintended consequences that affect the performance or functionality of Red Hat Satellite. Consider the impact of the changes before you apply them, and test the changes in a non-production environment first. If you do not have a non-production Satellite environment, run the Satellite installer with the --noop and --verbose options. If your changes cause problems, remove the offending lines from custom-hiera.yaml and rerun the Satellite installer. If you have any specific questions about whether a particular value is safe to alter, contact Red Hat support.
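As an illustrative sketch of the lookup-and-test workflow described above (the apache module path and the server_tokens keyword are only examples; the --noop and --verbose options are the ones named in this appendix), you might run:
grep -rn "server_tokens" /usr/share/foreman-installer/modules/apache/manifests/
satellite-installer --scenario satellite --noop --verbose
The first command locates the Puppet class and parameter name to reference in custom-hiera.yaml ; the second previews the effect of your entries without applying them.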
[ "apache::server_tokens: Prod", "apache::server_signature: Off" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_connected_network_environment/applying-custom-configuration_satellite
Chapter 3. Red Hat build of OpenJDK features
Chapter 3. Red Hat build of OpenJDK features The latest Red Hat build of OpenJDK 17 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from earlier Red Hat build of OpenJDK 17 releases. Red Hat build of OpenJDK enhancements Red Hat build of OpenJDK 17 provides enhancements to features originally created in earlier releases of Red Hat build of OpenJDK. Fix for jpackage tool producing an inaccurate list of required packages on Debian Linux In earlier releases, on Debian Linux platforms, the jpackage tool could produce an inaccurate list of required packages from shared libraries with symbolic links in their paths. Red Hat build of OpenJDK 17.0.13 resolves this issue to eliminate installation failures that could result from missing shared libraries. See JDK-8295111 (JDK Bug System) . TLS_ECDH_* cipher suites are disabled by default The TLS Elliptic-curve Diffie-Hellman (TLS_ECDH) cipher suites do not preserve forward secrecy and they are rarely used. Red Hat build of OpenJDK 17.0.13 disables the TLS_ECDH cipher suites by adding the ECDH option to the jdk.tls.disabledAlgorithms security property in the java.security configuration file. If you attempt to use the TLS_ECDH cipher suites, Red Hat build of OpenJDK now throws an SSLHandshakeException error. If you want to continue using the TLS_ECDH cipher suites, you can remove ECDH from the jdk.tls.disabledAlgorithms security property either by modifying the java.security configuration file or by using the java.security.properties system property. Note Continued use of the TLS_ECDH cipher suites is at your own risk. ECDH cipher suites that use RC4 were disabled in an earlier release. This change does not affect the TLS_ECDHE cipher suites, which remain enabled by default. See JDK-8279164 (JDK Bug System) . Distrust of TLS server certificates issued after 11 November 2024 and anchored by Entrust root CAs In accordance with similar plans that Google and Mozilla recently announced, Red Hat build of OpenJDK 17.0.13 distrusts TLS certificates that are issued after 11 November 2024 and anchored by Entrust root certificate authorities (CAs). This change in behavior includes any certificates that are branded as AffirmTrust, which are managed by Entrust. Red Hat build of OpenJDK will continue to trust certificates that are issued on or before 11 November 2024 until these certificates expire. If a server's certificate chain is anchored by an affected certificate, any attempts to negotiate a TLS session now fail with an exception to indicate that the trust anchor is not trusted. For example: You can check whether this change affects a certificate in a JDK keystore by using the following keytool command: keytool -v -list -alias <your_server_alias> -keystore <your_keystore_filename> If this change affects any certificate in the chain, update this certificate or contact the organization that is responsible for managing the certificate. If you want to continue using TLS server certificates that are anchored by Entrust root certificates, you can remove ENTRUST_TLS from the jdk.security.caDistrustPolicies security property either by modifying the java.security configuration file or by using the java.security.properties system property. Note Continued use of the distrusted TLS server certificates is at your own risk. 
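A minimal sketch of the java.security.properties override mentioned above, assuming a hypothetical override file path and application name; copy the existing jdk.security.caDistrustPolicies line from your installation's java.security file and remove only ENTRUST_TLS, keeping any other policies that are listed there:
cat > /tmp/override.security <<'EOF'
jdk.security.caDistrustPolicies=<existing_policies_without_ENTRUST_TLS>
EOF
java -Djava.security.properties=/tmp/override.security -jar <your_application>.jar
The same approach applies to re-enabling the TLS_ECDH cipher suites by redefining jdk.tls.disabledAlgorithms without the ECDH entry.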
These restrictions apply to the following Entrust root certificates that Red Hat build of OpenJDK includes: Certificate 1 Alias name: entrustevca [jdk] Distinguished name: CN=Entrust Root Certification Authority, OU=(c) 2006 Entrust, Inc., OU=www.entrust.net/CPS is incorporated by reference, O=Entrust, Inc., C=US SHA256: 73:C1:76:43:4F:1B:C6:D5:AD:F4:5B:0E:76:E7:27:28:7C:8D:E5:76:16:C1:E6:E6:14:1A:2B:2C:BC:7D:8E:4C Certificate 2 Alias name: entrustrootcaec1 [jdk] Distinguished name: CN=Entrust Root Certification Authority - EC1, OU=(c) 2012 Entrust, Inc. - for authorized use only, OU=See www.entrust.net/legal-terms, O=Entrust, Inc., C=US SHA256: 02:ED:0E:B2:8C:14:DA:45:16:5C:56:67:91:70:0D:64:51:D7:FB:56:F0:B2:AB:1D:3B:8E:B0:70:E5:6E:DF:F5 Certificate 3 Alias name: entrustrootcag2 [jdk] Distinguished name: CN=Entrust Root Certification Authority - G2, OU=(c) 2009 Entrust, Inc. - for authorized use only, OU=See www.entrust.net/legal-terms, O=Entrust, Inc., C=US SHA256: 43:DF:57:74:B0:3E:7F:EF:5F:E4:0D:93:1A:7B:ED:F1:BB:2E:6B:42:73:8C:4E:6D:38:41:10:3D:3A:A7:F3:39 Certificate 4 Alias name: entrustrootcag4 [jdk] Distinguished name: CN=Entrust Root Certification Authority - G4, OU=(c) 2015 Entrust, Inc. - for authorized use only, OU=See www.entrust.net/legal-terms, O=Entrust, Inc., C=US SHA256: DB:35:17:D1:F6:73:2A:2D:5A:B9:7C:53:3E:C7:07:79:EE:32:70:A6:2F:B4:AC:42:38:37:24:60:E6:F0:1E:88 Certificate 5 Alias name: entrust2048ca [jdk] Distinguished name: CN=Entrust.net Certification Authority (2048), OU=(c) 1999 Entrust.net Limited, OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.), O=Entrust.net SHA256: 6D:C4:71:72:E0:1C:BC:B0:BF:62:58:0D:89:5F:E2:B8:AC:9A:D4:F8:73:80:1E:0C:10:B9:C8:37:D2:1E:B1:77 Certificate 6 Alias name: affirmtrustcommercialca [jdk] Distinguished name: CN=AffirmTrust Commercial, O=AffirmTrust, C=US SHA256: 03:76:AB:1D:54:C5:F9:80:3C:E4:B2:E2:01:A0:EE:7E:EF:7B:57:B6:36:E8:A9:3C:9B:8D:48:60:C9:6F:5F:A7 Certificate 7 Alias name: affirmtrustnetworkingca [jdk] Distinguished name: CN=AffirmTrust Networking, O=AffirmTrust, C=US SHA256: 0A:81:EC:5A:92:97:77:F1:45:90:4A:F3:8D:5D:50:9F:66:B5:E2:C5:8F:CD:B5:31:05:8B:0E:17:F3:F0:B4:1B Certificate 8 Alias name: affirmtrustpremiumca [jdk] Distinguished name: CN=AffirmTrust Premium, O=AffirmTrust, C=US SHA256: 70:A7:3F:7F:37:6B:60:07:42:48:90:45:34:B1:14:82:D5:BF:0E:69:8E:CC:49:8D:F5:25:77:EB:F2:E9:3B:9A Certificate 9 Alias name: affirmtrustpremiumeccca [jdk] Distinguished name: CN=AffirmTrust Premium ECC, O=AffirmTrust, C=US SHA256: BD:71:FD:F6:DA:97:E4:CF:62:D1:64:7A:DD:25:81:B0:7D:79:AD:F8:39:7E:B4:EC:BA:9C:5E:84:88:82:14:23 See JDK-8337664 (JDK Bug System) and JDK-8341059 (JDK Bug System) . Reduced verbose locale output in -XshowSettings launcher option In earlier releases, the -XshowSettings launcher option printed a long list of available locales, which obscured other settings. In Red Hat build of OpenJDK 17.0.13, the -XshowSettings launcher option no longer prints the list of available locales by default. If you want to view all settings that relate to the available locales, you can use the -XshowSettings:locale option. See JDK-8310201 (JDK Bug System) .
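For example, to inspect the locale-related settings without starting an application, you can combine the option with -version (a minimal, assumption-free invocation):
java -XshowSettings:locale -version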
SSL.com root certificates added In Red Hat build of OpenJDK 17.0.13, the cacerts truststore includes two SSL.com TLS root certificates: Certificate 1 Name: SSL.com Alias name: ssltlsrootecc2022 Distinguished name: CN=SSL.com TLS ECC Root CA 2022, O=SSL Corporation, C=US Certificate 2 Name: SSL.com Alias name: ssltlsrootrsa2022 Distinguished name: CN=SSL.com TLS RSA Root CA 2022, O=SSL Corporation, C=US See JDK-8341057 (JDK Bug System) . Additional timestamp and thread options for java.security.debug system property Red Hat build of OpenJDK 17.0.13 adds the following options to the java.security.debug property, which can be applied to any specified component: The +timestamp option prints a timestamp with each debug statement. The +thread option prints thread and caller information for each debug statement. For example, -Djava.security.debug=all+timestamp+thread enables debug information for all components with both timestamps and thread information. Alternatively, -Djava.security.debug=properties+timestamp enables debug information only for security properties and includes a timestamp. You can use -Djava.security.debug=help to display a complete list of supported components and options. See JDK-8051959 (JDK Bug System) . Relaxation of Java Abstract Window Toolkit (AWT) Robot specification Red Hat build of OpenJDK 17.0.13 is based on the latest maintenance release of the Java 17 specification. This release relaxes the specification of the following three methods in the java.awt.Robot class: mouseMove(int,int) getPixelColor(int,int) createScreenCapture(Rectangle) This relaxation of the specification allows these methods to fail when the desktop environment does not permit moving the mouse pointer or capturing screen content. See JDK-8307779 (JDK Bug System) . Changes to com.sun.jndi.ldap.object.trustSerialData system property The JDK implementation of the LDAP provider no longer supports the deserialization of Java objects by default. In Red Hat build of OpenJDK 17.0.13, the com.sun.jndi.ldap.object.trustSerialData system property is set to false by default. This release also increases the scope of the com.sun.jndi.ldap.object.trustSerialData property to cover the reconstruction of RMI remote objects from the javaRemoteLocation LDAP attribute. These changes mean that transparent deserialization of Java objects now requires an explicit opt-in. From Red Hat build of OpenJDK 17.0.13 onward, if you want to allow applications to reconstruct Java objects and RMI stubs from LDAP attributes, you must explicitly set the com.sun.jndi.ldap.object.trustSerialData property to true . See JDK-8290367 (JDK Bug System) and JDK-8332643 (JDK Bug System). HTTP client enhancements Red Hat build of OpenJDK 17.0.13 limits the maximum header field size that the HTTP client accepts within the JDK for all supported versions of the HTTP protocol. The header field size is computed as the sum of the size of the uncompressed header name, the size of the uncompressed header value, and an overhead of 32 bytes for each field section line. If a peer sends a field section that exceeds this limit, a java.net.ProtocolException is raised. Red Hat build of OpenJDK 17.0.13 introduces a jdk.http.maxHeaderSize system property that you can use to change the maximum header field size (in bytes). Alternatively, you can disable the maximum header field size by setting the jdk.http.maxHeaderSize property to zero or a negative value. The jdk.http.maxHeaderSize property is set to 393,216 bytes (that is, 384KB) by default. 
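A hedged example of opting in to both behaviors on the command line; the 524288-byte (512 KB) limit and the application JAR name are illustrative values, not recommendations:
java -Djdk.http.maxHeaderSize=524288 -Dcom.sun.jndi.ldap.object.trustSerialData=true -jar <your_application>.jar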
See JDK-8328286 (JDK Bug System). ClassLoadingMXBean and MemoryMXBean APIs have isVerbose() methods consistent with their setVerbose() methods The setVerbose(boolean enabled) method for the ClassLoadingMXBean API displays the following behavior: If enabled is true , the setVerbose method sets class+load* logging on standard output (stdout) at the Info level. If enabled is false , the setVerbose method disables class+load* logging on stdout. In earlier releases, the isVerbose() method for the ClassLoadingMXBean API checked if class+load logging was enabled at the Info level on any type of log output, not just stdout. In this situation, if you enabled logging to a file by using the java -Xlog option, the isVerbose() method returned true even if setVerbose(false) was called, which resulted in counterintuitive behavior. The isVerbose() method for the MemoryMXBean API also displayed similar counterintuitive behavior. From Red Hat build of OpenJDK 17.0.13 onward, the ClassLoadingMXBean.isVerbose() and MemoryMXBean.isVerbose() methods display the following behavior: ClassLoadingMXBean::isVerbose() returns true only if class+load* logging is enabled at the Info level (or higher) specifically on standard output (stdout). MemoryMXBean::isVerbose() returns true only if garbage collector logging is enabled at the Info level (or higher) on stdout. See JDK-8338139 (JDK Bug System) . Key encapsulation mechanism API Red Hat build of OpenJDK 17.0.13 introduces a javax.crypto API for key encapsulation mechanisms (KEMs). KEM is an encryption technique for securing symmetric keys by using public key cryptography. The javax.crypto API facilitates the use of public key cryptography and helps to improve security when handling secrets and messages. See JDK-8297878 (JDK Bug System) . Changes to naming convention for Windows build artifacts This release introduces naming changes for some files that are distributed as part of Red Hat build of OpenJDK 17 for Windows Server platforms. These file naming changes affect both the .zip archive and .msi installers that Red Hat provides for the JDK, JRE and debuginfo packages for Red Hat build of OpenJDK 17. The aim of this change is to adopt a common naming convention that is consistent across the different versions of OpenJDK that Red Hat currently supports. This means that Red Hat build of OpenJDK 17 is now aligned with the naming convention that Red Hat has already adopted for Red Hat build of OpenJDK 21. These changes do not affect the files for Linux portable builds. The following list provides an example of how these naming changes affect files in Red Hat build of OpenJDK 17: MSI installer for JDK package Old naming format for Red Hat build of OpenJDK 17.0.12 or earlier: java-17-openjdk- <version> . <build> .win.x86_64.msi New naming format for Red Hat build of OpenJDK 17.0.13 or later: java-17-openjdk- <version> . <build> .win.jdk.x86_64.msi .zip archive for JDK package Old naming format for Red Hat build of OpenJDK 17.0.12 or earlier: java-17-openjdk- <version> . <build> .win.x86_64.zip New naming format for Red Hat build of OpenJDK 17.0.13 or later: java-17-openjdk- <version> . <build> .win.jdk.x86_64.zip MSI installer for JRE package Old naming format for Red Hat build of OpenJDK 17.0.12 or earlier: java-17-openjdk- <version> . <build> .jre.win.x86_64.msi New naming format for Red Hat build of OpenJDK 17.0.13 or later: java-17-openjdk- <version> . 
<build> .win.jre.x86_64.msi .zip archive for JRE package Old naming format for Red Hat build of OpenJDK 17.0.12 or earlier: java-17-openjdk- <version> . <build> .jre.win.x86_64.zip New naming format for Red Hat build of OpenJDK 17.0.13 or later: java-17-openjdk- <version> . <build> .win.jre.x86_64.zip .zip archive for debuginfo package Old naming format for Red Hat build of OpenJDK 17.0.12 or earlier: java-17-openjdk- <version> . <build> .win.x86_64.debuginfo.zip New naming format for Red Hat build of OpenJDK 17.0.13 or later: java-17-openjdk- <version> . <build> .win.debuginfo.x86_64.zip
[ "TLS server certificate issued after 2024-11-11 and anchored by a distrusted legacy Entrust root CA: CN=Entrust.net CertificationAuthority (2048), OU=(c) 1999 Entrust.net Limited,OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.),O=Entrust.net" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.13/rn_openjdk-17013-features_openjdk
Migrating to Red Hat build of Apache Camel for Spring Boot
Migrating to Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel 4.8 Migrating to Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel Documentation Team Red Hat build of Apache Camel Support Team http://access.redhat.com/support
[ "camel.springboot.stream-caching-spool-enabled=true", "<circuitBreaker> <resilience4jConfiguration> <timeoutEnabled>true</timeoutEnabled> <timeoutDuration>2000</timeoutDuration> </resilience4jConfiguration> </circuitBreaker>", "<circuitBreaker> <resilience4jConfiguration timeoutEnabled=\"true\" timeoutDuration=\"2000\"/> </circuitBreaker>", "<route id=\"myRoute\"> <description>Something that this route do</description> <from uri=\"kafka:cheese\"/> </route>", "<route id=\"myRoute\" description=\"Something that this route do\"> <from uri=\"kafka:cheese\"/> </route>", "<dependency> <groupId>org.apache.camel.springbboot</groupId> <artifactId>camel-xml-io-dsl-starter</artifactId> <version>{CamelSBProjectVersion}</version> </dependency>", "camel.health.producers-enabled = true", "- route: from: uri: \"direct:info\" steps: - log: \"message\"", "- route: from: uri: \"direct:info\" steps: - log: \"message\"", "\"bean:myBean?method=foo(com.foo.MyOrder.class, true)\"", "\"bean:myBean?method=bar(String.class, int.class)\"", "<camelContext id=\"SampleCamel\" xmlns=\"http://camel.apache.org/schema/spring\"> <restConfiguration port=\"8080\" bindingMode=\"off\" scheme=\"http\" contextPath=\"myapi\" apiContextPath=\"myapi/swagger\"> </restConfiguration>", "@SpringBootApplication(exclude = RestConfigurationDefinitionAutoConfiguration.class) // load the spring xml file from classpath @ImportResource(\"classpath:my-camel.xml\") public class SampleCamelApplication {", "from(\"platform-http:myservice\") .to(\"...\")", "<dependency> <groupId>javax.xml.bind</groupId> <artifactId>jaxb-api</artifactId> <version>2.3.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-core</artifactId> <version>2.3.0.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-impl</artifactId> <version>2.3.2</version> </dependency>", "2015-01-12 13:23:23,656 [- ShutdownTask] INFO DefaultShutdownStrategy - There are 1 inflight exchanges: InflightExchange: [exchangeId=ID-test-air-62213-1421065401253-0-3, fromRouteId=route1, routeId=route1, nodeId=delay1, elapsed=2007, duration=2017]", "context.getShutdownStrategegy().setLogInflightExchangesOnTimeout(false);", "telegram:bots/myTokenHere", "telegram:bots?authorizationToken=myTokenHere", "ManagedCamelContext managed = camelContext.getExtension(ManagedCamelContext.class);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html-single/migrating_to_red_hat_build_of_apache_camel_for_spring_boot/index
Securing Applications and Services Guide
Securing Applications and Services Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/securing_applications_and_services_guide/index
RPM installation
RPM installation Red Hat Ansible Automation Platform 2.5 Install the RPM version of Ansible Automation Platform Red Hat Customer Content Services
[ "dnf install firewalld", "firewall-cmd --permanent --add-service=<service>", "firewall-cmd --reload", "psql -h <db.example.com> -U superuser -p 5432 -d postgres <Password for user superuser>:", "-h hostname --host=hostname", "-d dbname --dbname=dbname", "-U username --username=username", "Automation controller database variables awx_install_pg_host=data.example.com awx_install_pg_port=<port_number> awx_install_pg_database=<database_name> awx_install_pg_username=<username> awx_install_pg_password=<password> # This is not required if you enable mTLS authentication to the database pg_sslmode=prefer # Set to verify-ca or verify-full to enable mTLS authentication to the database Event-Driven Ansible database variables automationedacontroller_install_pg_host=data.example.com automationedacontroller_install_pg_port=<port_number> automationedacontroller_install_pg_database=<database_name> automationedacontroller_install_pg_username=<username> automationedacontroller_install_pg_password=<password> # This is not required if you enable mTLS authentication to the database automationedacontroller_pg_sslmode=prefer # Set to verify-full to enable mTLS authentication to the database Automation hub database variables automationhub_pg_host=data.example.com automationhub_pg_port=<port_number> automationhub_pg_database=<database_name> automationhub_pg_username=<username> automationhub_pg_password=<password> # This is not required if you enable mTLS authentication to the database automationhub_pg_sslmode=prefer # Set to verify-ca or verify-full to enable mTLS authentication to the database Platform gateway database variables automationgateway_install_pg_host=data.example.com automationgateway_install_pg_port=<port_number> automationgateway_install_pg_database=<database_name> automationgateway_install_pg_username=<username> automationgateway_install_pg_password=<password> # This is not required if you enable mTLS authentication to the database automationgateway_pg_sslmode=prefer # Set to verify-ca or verify-full to enable mTLS authentication to the database", "Automation controller database variables awx_install_pg_host=data.example.com awx_install_pg_port=<port_number> awx_install_pg_database=<database_name> awx_install_pg_username=<username> pg_sslmode=verify-full # This can be either verify-ca or verify-full pgclient_sslcert=/path/to/cert # Path to the certificate file pgclient_sslkey=/path/to/key # Path to the key file Event-Driven Ansible database variables automationedacontroller_install_pg_host=data.example.com automationedacontroller_install_pg_port=<port_number> automationedacontroller_install_pg_database=<database_name> automationedacontroller_install_pg_username=<username> automationedacontroller_pg_sslmode=verify-full # EDA does not support verify-ca automationedacontroller_pgclient_sslcert=/path/to/cert # Path to the certificate file automationedacontroller_pgclient_sslkey=/path/to/key # Path to the key file Automation hub database variables automationhub_pg_host=data.example.com automationhub_pg_port=<port_number> automationhub_pg_database=<database_name> automationhub_pg_username=<username> automationhub_pg_sslmode=verify-full # This can be either verify-ca or verify-full automationhub_pgclient_sslcert=/path/to/cert # Path to the certificate file automationhub_pgclient_sslkey=/path/to/key # Path to the key file Platform gateway database variables automationgateway_install_pg_host=data.example.com automationgateway_install_pg_port=<port_number> 
automationgateway_install_pg_database=<database_name> automationgateway_install_pg_username=<username> automationgateway_pg_sslmode=verify-full # This can be either verify-ca or verify-full automationgateway_pgclient_sslcert=/path/to/cert # Path to the certificate file automationgateway_pgclient_sslkey=/path/to/key # Path to the key file", "psql -d <automation hub database> -c \"SELECT * FROM pg_available_extensions WHERE name='hstore'\"", "name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row)", "name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows)", "dnf install postgresql-contrib", "psql -d <automation hub database> -c \"CREATE EXTENSION hstore;\"", "name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row)", "yum -y install fio", "fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randwrite --group_reporting=1 > /tmp/fio_benchmark_write_iops.log 2>> /tmp/fio_write_iops_error.log", "fio --name=read_iops --directory=/tmp --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread --group_reporting=1 > /tmp/fio_benchmark_read_iops.log 2>> /tmp/fio_read_iops_error.log", "cat /tmp/fio_benchmark_read_iops.log read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 [...] iops : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360 [...]", "cd /opt/ansible-automation-platform/installer/", "cd ansible-automation-platform-setup-bundle-<latest-version>", "cd ansible-automation-platform-setup-<latest-version>", "[automationcontroller] controller.example.com [automationgateway] gateway.example.com [database] data.example.com [all:vars] admin_password='<password>' redis_mode=standalone pg_host='data.example.com' pg_port=5432 pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' Automation Gateway configuration automationgateway_admin_password='' automationgateway_pg_host='data.example.com' automationgateway_pg_port=5432 automationgateway_pg_database='automationgateway' automationgateway_pg_username='automationgateway' automationgateway_pg_password='' automationgateway_pg_sslmode='prefer' The main automation gateway URL that clients will connect to (e.g. https://<load balancer host>). If not specified, the first node in the [automationgateway] group will be used when needed. automationgateway_main_url = '' Certificate and key to install in Automation Gateway automationgateway_ssl_cert=/path/to/automationgateway.cert automationgateway_ssl_key=/path/to/automationgateway.key SSL-related variables If set, this will install a custom CA certificate to the system trust store. 
custom_ca_cert=/path/to/ca.crt Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key", "[automationcontroller] controller.example.com [automationhub] automationhub.example.com [automationgateway] gateway.example.com [database] data.example.com [all:vars] admin_password='<password>' redis_mode=standalone pg_host='data.example.com' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.example.com' automationhub_pg_port=5432 automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' The default install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key Automation Gateway configuration automationgateway_admin_password='' automationgateway_pg_host='' automationgateway_pg_port=5432 automationgateway_pg_database='automationgateway' automationgateway_pg_username='automationgateway' automationgateway_pg_password='' automationgateway_pg_sslmode='prefer' The main automation gateway URL that clients will connect to (e.g. https://<load balancer host>). If not specified, the first node in the [automationgateway] group will be used when needed. automationgateway_main_url = '' Certificate and key to install in Automation Gateway automationgateway_ssl_cert=/path/to/automationgateway.cert automationgateway_ssl_key=/path/to/automationgateway.key Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). 
postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key", "[automationcontroller] controller.example.com [automationhub] automationhub.example.com [automationedacontroller] automationedacontroller.example.com [automationgateway] gateway.example.com [database] data.example.com [all:vars] admin_password='<password>' redis_mode=standalone pg_host='data.example.com' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' Automation hub configuration automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.example.com' automationhub_pg_port=5432 automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' Automation Event-Driven Ansible controller configuration automationedacontroller_admin_password='<eda-password>' automationedacontroller_pg_host='data.example.com' automationedacontroller_pg_port=5432 automationedacontroller_pg_database='automationedacontroller' automationedacontroller_pg_username='automationedacontroller' automationedacontroller_pg_password='<password>' Keystore file to install in SSO node sso_custom_keystore_file='/path/to/sso.jks' This install will deploy SSO with sso_use_https=True Keystore password is required for https enabled SSO sso_keystore_password='' This install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key Automation Gateway configuration automationgateway_admin_password='' automationgateway_pg_host='' automationgateway_pg_port=5432 automationgateway_pg_database='automationgateway' automationgateway_pg_username='automationgateway' automationgateway_pg_password='' automationgateway_pg_sslmode='prefer' The main automation gateway URL that clients will connect to (e.g. https://<load balancer host>). If not specified, the first node in the [automationgateway] group will be used when needed. automationgateway_main_url = '' Certificate and key to install in Automation Gateway automationgateway_ssl_cert=/path/to/automationgateway.cert automationgateway_ssl_key=/path/to/automationgateway.key Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key Boolean flag used to verify Automation Controller's web certificates when making calls from Automation Event-Driven Ansible controller. 
automationedacontroller_controller_verify_ssl = true # Certificate and key to install in Automation Event-Driven Ansible controller node automationedacontroller_ssl_cert=/path/to/automationeda.crt automationedacontroller_ssl_key=/path/to/automationeda.key", "automationhub_pg_host='192.0.2.10' automationhub_pg_port=5432", "[database] 192.0.2.10", "[automationhub] automationhub1.testing.ansible.com ansible_user=cloud-user automationhub2.testing.ansible.com ansible_user=cloud-user automationhub3.testing.ansible.com ansible_user=cloud-user", "USE_X_FORWARDED_PORT = True USE_X_FORWARDED_HOST = True", "mkdir /var/lib/pulp/", "srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context=\"system_u:object_r:var_lib_t:s0\" 0 0 srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context=\"system_u:object_r:httpd_sys_content_rw_t:s0\" 0 0", "systemctl daemon-reload", "mount /var/lib/pulp", "mkdir /var/lib/pulp/pulpcore_static", "mount -a", "setup.sh -- -b --become-user root", "systemctl stop pulpcore.service", "systemctl edit pulpcore.service", "[Unit] After=network.target var-lib-pulp.mount", "systemctl enable remote-fs.target", "systemctl reboot", "chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem", "systemctl stop pulpcore.service", "umount /var/lib/pulp/pulpcore_static", "umount /var/lib/pulp/", "srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context=\"system_u:object_r:pulpcore_var_lib_t:s0\" 0 0", "mount -a", "{\"file\": \"filename\", \"signature\": \"filename.asc\"}", "#!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH=\"USD1.asc\" ADMIN_ID=\"USDPULP_SIGNING_KEY_FINGERPRINT\" PASSWORD=\"password\" Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID --armor --output USDSIGNATURE_PATH USDFILE_PATH Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\\\"file\\\": \\\"USDFILE_PATH\\\", \\\"signature\\\": \\\"USDSIGNATURE_PATH\\\"} else exit USDSTATUS fi", "[all:vars] . . . 
automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh", "automationedacontroller_additional_settings: SAFE_PLUGINS: - ansible.eda.webhook - ansible.eda.alertmanager", "https://access.redhat.com/terms-based-registry/token/<name-of-your-token>", "[all:vars] admin_password='<password>' pg_host='data.example.com' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' redis_mode=cluster", "sudo ./setup.sh", "ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh", "./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_archive_compression=true' 'use_db_compression=true' @credentials.yml -b", "[automationedacontroller] 3.88.116.111 routable_hostname=automationedacontroller-api.example.com eda_node_type=api worker node 3.88.116.112 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker", "[automationedacontroller] 3.88.116.111 routable_hostname=automationedacontroller-api.example.com eda_node_type=api 3.88.116.112 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker two more worker nodes 3.88.116.113 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker 3.88.116.114 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker", "subscription-manager repos --enable rhel-9-for-x86_64-baseos-rpms --enable rhel-9-for-x86_64-appstream-rpms", "dnf install yum-utils reposync -m --download-metadata --gpgcheck -p /path/to/download", "sudo dnf install httpd", "/etc/httpd/conf.d/repository.conf DocumentRoot '/path/to/repos' <LocationMatch \"^/+USD\"> Options -Indexes ErrorDocument 403 /.noindex.html </LocationMatch> <Directory '/path/to/repos'> Options All Indexes FollowSymLinks AllowOverride None Require all granted </Directory>", "sudo chown -R apache /path/to/repos", "sudo semanage fcontext -a -t httpd_sys_content_t \"/path/to/repos(/.*)?\" sudo restorecon -ir /path/to/repos", "sudo systemctl enable --now httpd.service", "sudo firewall-cmd --zone=public --add-service=http -add-service=https --permanent sudo firewall-cmd --reload", "[Local-BaseOS] name=Local BaseOS baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [Local-AppStream] name=Local AppStream baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd", "mkdir /media/rheldvd && mount -o loop rhrhel-8.6-x86_64-dvd.iso /media/rheldvd", "[dvd-BaseOS] name=DVD for RHEL - BaseOS baseurl=file:///media/rheldvd/BaseOS enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [dvd-AppStream] name=DVD for RHEL - AppStream baseurl=file:///media/rheldvd/AppStream enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release", "Curl error (6): Couldn't resolve host name for https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host: www.redhat.com]", "tar xvf 
ansible-automation-platform-setup-bundle-2.5-1.tar.gz cd ansible-automation-platform-setup-bundle-2.5-1" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/rpm_installation/index
Chapter 7. Security Profiles Operator
Chapter 7. Security Profiles Operator 7.1. Security Profiles Operator overview OpenShift Container Platform Security Profiles Operator (SPO) provides a way to define secure computing ( seccomp ) profiles and SELinux profiles as custom resources, synchronizing profiles to every node in a given namespace. For the latest updates, see the release notes . The SPO can distribute custom resources to each node while a reconciliation loop ensures that the profiles stay up-to-date. See Understanding the Security Profiles Operator . The SPO manages SELinux policies and seccomp profiles for namespaced workloads. For more information, see Enabling the Security Profiles Operator . You can create seccomp and SELinux profiles, bind policies to pods, record workloads, and synchronize all worker nodes in a namespace. Use Advanced Security Profiles Operator tasks to enable the log enricher, configure webhooks and metrics, or restrict profiles to a single namespace. Troubleshoot the Security Profiles Operator as needed, or engage Red Hat support . You can Uninstall the Security Profiles Operator by removing the profiles before removing the Operator. 7.2. Security Profiles Operator release notes The Security Profiles Operator provides a way to define secure computing ( seccomp ) and SELinux profiles as custom resources, synchronizing profiles to every node in a given namespace. These release notes track the development of the Security Profiles Operator in OpenShift Container Platform. For an overview of the Security Profiles Operator, see Security Profiles Operator Overview . 7.2.1. Security Profiles Operator 0.8.6 The following advisory is available for the Security Profiles Operator 0.8.6: RHBA-2024:10380 - OpenShift Security Profiles Operator update This update includes upgraded dependencies in underlying base images. 7.2.2. Security Profiles Operator 0.8.5 The following advisory is available for the Security Profiles Operator 0.8.5: RHBA-2024:5016 - OpenShift Security Profiles Operator bug fix update 7.2.2.1. Bug fixes When attempting to install the Security Profiles Operator from the web console, the option to enable Operator-recommended cluster monitoring was unavailable for the namespace. With this update, you can now enable Operator-recommended cluster monitoring in the namespace. ( OCPBUGS-37794 ) Previously, the Security Profiles Operator would intermittently not be visible in the OperatorHub, which limited the ability to install the Operator from the web console. With this update, the Security Profiles Operator is present in the OperatorHub. 7.2.3. Security Profiles Operator 0.8.4 The following advisory is available for the Security Profiles Operator 0.8.4: RHBA-2024:4781 - OpenShift Security Profiles Operator bug fix update This update addresses CVEs in underlying dependencies. 7.2.3.1. New features and enhancements You can now specify a default security profile in the image attribute of a ProfileBinding object by setting a wildcard. For more information, see Binding workloads to profiles with ProfileBindings (SELinux) and Binding workloads to profiles with ProfileBindings (Seccomp) . 7.2.4. Security Profiles Operator 0.8.2 The following advisory is available for the Security Profiles Operator 0.8.2: RHBA-2023:5958 - OpenShift Security Profiles Operator bug fix update 7.2.4.1. Bug fixes Previously, SELinuxProfile objects did not inherit custom attributes from the same namespace.
With this update, the issue has now been resolved and SELinuxProfile object attributes are inherited from the same namespace as expected. ( OCPBUGS-17164 ) Previously, RawSELinuxProfiles would hang during the creation process and would not reach an Installed state. With this update, the issue has been resolved and RawSELinuxProfiles are created successfully. ( OCPBUGS-19744 ) Previously, patching the enableLogEnricher to true would cause the seccompProfile log-enricher-trace pods to be stuck in a Pending state. With this update, log-enricher-trace pods reach an Installed state as expected. ( OCPBUGS-22182 ) Previously, the Security Profiles Operator generated high cardinality metrics, causing Prometheus pods to use high amounts of memory. With this update, the following metrics will no longer apply in the Security Profiles Operator namespace: rest_client_request_duration_seconds rest_client_request_size_bytes rest_client_response_size_bytes ( OCPBUGS-22406 ) 7.2.5. Security Profiles Operator 0.8.0 The following advisory is available for the Security Profiles Operator 0.8.0: RHBA-2023:4689 - OpenShift Security Profiles Operator bug fix update 7.2.5.1. Bug fixes Previously, while trying to install Security Profiles Operator in a disconnected cluster, the secure hashes provided were incorrect due to a SHA relabeling issue. With this update, the SHAs provided work consistently with disconnected environments. ( OCPBUGS-14404 ) 7.2.6. Security Profiles Operator 0.7.1 The following advisory is available for the Security Profiles Operator 0.7.1: RHSA-2023:2029 - OpenShift Security Profiles Operator bug fix update 7.2.6.1. New features and enhancements Security Profiles Operator (SPO) now automatically selects the appropriate selinuxd image for RHEL 8- and 9-based RHCOS systems. Important Users that mirror images for disconnected environments must mirror both selinuxd images provided by the Security Profiles Operator. You can now enable memory optimization inside of an spod daemon. For more information, see Enabling memory optimization in the spod daemon . Note SPO memory optimization is not enabled by default. The daemon resource requirements are now configurable. For more information, see Customizing daemon resource requirements . The priority class name is now configurable in the spod configuration. For more information, see Setting a custom priority class name for the spod daemon pod . 7.2.6.2. Deprecated and removed features The default nginx-1.19.1 seccomp profile is now removed from the Security Profiles Operator deployment. 7.2.6.3. Bug fixes Previously, a Security Profiles Operator (SPO) SELinux policy did not inherit low-level policy definitions from the container template. If you selected another template, such as net_container, the policy would not work because it required low-level policy definitions that only existed in the container template. This issue occurred when the SPO SELinux policy attempted to translate SELinux policies from the SPO custom format to the Common Intermediate Language (CIL) format. With this update, the container template appends to any SELinux policies that require translation from SPO to CIL. Additionally, the SPO SELinux policy can inherit low-level policy definitions from any supported policy template. ( OCPBUGS-12879 ) Known issue When uninstalling the Security Profiles Operator, the MutatingWebhookConfiguration object is not deleted and must be manually removed.
As a workaround, delete the MutatingWebhookConfiguration object after uninstalling the Security Profiles Operator. These steps are defined in Uninstalling the Security Profiles Operator . ( OCPBUGS-4687 ) 7.2.7. Security Profiles Operator 0.5.2 The following advisory is available for the Security Profiles Operator 0.5.2: RHBA-2023:0788 - OpenShift Security Profiles Operator bug fix update This update addresses a CVE in an underlying dependency. Known issue When uninstalling the Security Profiles Operator, the MutatingWebhookConfiguration object is not deleted and must be manually removed. As a workaround, delete the MutatingWebhookConfiguration object after uninstalling the Security Profiles Operator. These steps are defined in Uninstalling the Security Profiles Operator . ( OCPBUGS-4687 ) 7.2.8. Security Profiles Operator 0.5.0 The following advisory is available for the Security Profiles Operator 0.5.0: RHBA-2022:8762 - OpenShift Security Profiles Operator bug fix update Known issue When uninstalling the Security Profiles Operator, the MutatingWebhookConfiguration object is not deleted and must be manually removed. As a workaround, delete the MutatingWebhookConfiguration object after uninstalling the Security Profiles Operator. These steps are defined in Uninstalling the Security Profiles Operator . ( OCPBUGS-4687 ) 7.3. Security Profiles Operator support 7.3.1. Security Profiles Operator lifecycle The Security Profiles Operator is a "Rolling Stream" Operator, meaning updates are available asynchronously of OpenShift Container Platform releases. For more information, see OpenShift Operator Life Cycles on the Red Hat Customer Portal. 7.3.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 7.4. Understanding the Security Profiles Operator OpenShift Container Platform administrators can use the Security Profiles Operator to define increased security measures in clusters. Important The Security Profiles Operator supports only Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Red Hat Enterprise Linux (RHEL) nodes are not supported. 7.4.1. About Security Profiles Security profiles can increase security at the container level in your cluster. Seccomp security profiles list the syscalls a process can make. Permissions are broader than SELinux, enabling users to restrict operations system-wide, such as write . SELinux security profiles provide a label-based system that restricts the access and usage of processes, applications, or files in a system. All files in an environment have labels that define permissions. SELinux profiles can define access within a given structure, such as directories. 7.5. 
Enabling the Security Profiles Operator Before you can use the Security Profiles Operator, you must ensure the Operator is deployed in the cluster. Important The Security Profiles Operator supports only Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Red Hat Enterprise Linux (RHEL) nodes are not supported. Important The Security Profiles Operator only supports x86_64 architecture. 7.5.1. Installing the Security Profiles Operator Prerequisites You must have admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Security Profiles Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-security-profiles namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Security Profiles Operator is installed in the openshift-security-profiles namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-security-profiles project that are reporting issues. 7.5.2. Installing the Security Profiles Operator using the CLI Prerequisites You must have admin privileges. Procedure Define a Namespace object: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-security-profiles labels: openshift.io/cluster-monitoring: "true" Create the Namespace object: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: security-profiles-operator namespace: openshift-security-profiles Create the OperatorGroup object: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: security-profiles-operator-sub namespace: openshift-security-profiles spec: channel: release-alpha-rhel-8 installPlanApproval: Automatic name: security-profiles-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object: USD oc create -f subscription-object.yaml Note If you are setting the global scheduler feature and enable defaultNodeSelector , you must create the namespace manually and update the annotations of the openshift-security-profiles namespace, or the namespace where the Security Profiles Operator was installed, with openshift.io/node-selector: "" . This removes the default node selector and prevents deployment failures. Verification Verify the installation succeeded by inspecting the following CSV file: USD oc get csv -n openshift-security-profiles Verify that the Security Profiles Operator is operational by running the following command: USD oc get deploy -n openshift-security-profiles 7.5.3. Configuring logging verbosity The Security Profiles Operator supports the default logging verbosity of 0 and an enhanced verbosity of 1 . 
Procedure To enable enhanced logging verbosity, patch the spod configuration and adjust the value by running the following command: USD oc -n openshift-security-profiles patch spod \ spod --type=merge -p '{"spec":{"verbosity":1}}' Example output securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched 7.6. Managing seccomp profiles Create and manage seccomp profiles and bind them to workloads. Important The Security Profiles Operator supports only Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Red Hat Enterprise Linux (RHEL) nodes are not supported. 7.6.1. Creating seccomp profiles Use the SeccompProfile object to create profiles. SeccompProfile objects can restrict syscalls within a container, limiting the access of your application. Procedure Create a project by running the following command: USD oc new-project my-namespace Create the SeccompProfile object: apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: profile1 spec: defaultAction: SCMP_ACT_LOG The seccomp profile will be saved in /var/lib/kubelet/seccomp/operator/<namespace>/<name>.json . An init container creates the root directory of the Security Profiles Operator to run the Operator without root group or user ID privileges. A symbolic link is created from the rootless profile storage /var/lib/openshift-security-profiles to the default seccomp root path inside of the kubelet root /var/lib/kubelet/seccomp/operator . 7.6.2. Applying seccomp profiles to a pod Create a pod to apply one of the created profiles. Procedure Create a pod object that defines a securityContext : apiVersion: v1 kind: Pod metadata: name: test-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 View the profile path of the seccompProfile.localhostProfile attribute by running the following command: USD oc -n my-namespace get seccompprofile profile1 --output wide Example output NAME STATUS AGE SECCOMPPROFILE.LOCALHOSTPROFILE profile1 Installed 14s operator/my-namespace/profile1.json View the path to the localhost profile by running the following command: USD oc get sp profile1 --output=jsonpath='{.status.localhostProfile}' Example output operator/my-namespace/profile1.json Apply the localhostProfile output to the patch file: spec: template: spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json Apply the profile to any other workload, such as a Deployment object, by running the following command: USD oc -n my-namespace patch deployment myapp --patch-file patch.yaml --type=merge Example output deployment.apps/myapp patched Verification Confirm the profile was applied correctly by running the following command: USD oc -n my-namespace get deployment myapp --output=jsonpath='{.spec.template.spec.securityContext}' | jq . Example output { "seccompProfile": { "localhostProfile": "operator/my-namespace/profile1.json", "type": "localhost" } } 7.6.2.1. Binding workloads to profiles with ProfileBindings You can use the ProfileBinding resource to bind a security profile to the SecurityContext of a container. 
Procedure To bind a pod that uses a quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 image to the example SeccompProfile profile, create a ProfileBinding object in the same namespace with the pod and the SeccompProfile objects: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SeccompProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3 1 The kind: variable refers to the kind of the profile. 2 The name: variable refers to the name of the profile. 3 You can enable a default security profile by using a wildcard in the image attribute: image: "*" Important Using the image: "*" wildcard attribute binds all new pods with a default security profile in a given namespace. Label the namespace with enable-binding=true by running the following command: USD oc label ns my-namespace spo.x-k8s.io/enable-binding=true Define a pod named test-pod.yaml : apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 Create the pod: USD oc create -f test-pod.yaml Note If the pod already exists, you must re-create the pod for the binding to work properly. Verification Confirm the pod inherits the ProfileBinding by running the following command: USD oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seccompProfile}' Example output {"localhostProfile":"operator/my-namespace/profile.json","type":"Localhost"} 7.6.3. Recording profiles from workloads The Security Profiles Operator can record system calls with ProfileRecording objects, making it easier to create baseline profiles for applications. When using the log enricher for recording seccomp profiles, verify the log enricher feature is enabled. See Additional resources for more information. Note A container with privileged: true security context restraints prevents log-based recording. Privileged containers are not subject to seccomp policies, and log-based recording makes use of a special seccomp profile to record events. 
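Because a container running with privileged: true cannot be recorded this way, it can help to check the target workload's security context before creating a ProfileRecording. The following is a small sketch; the pod name and namespace are placeholders:

# Print each container name together with its privileged flag, if set.
oc get pod <pod_name> -n <namespace> -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.securityContext.privileged}{"\n"}{end}'

Any container that prints true cannot be captured by log-based recording until it is reconfigured to run unprivileged.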
Procedure Create a project by running the following command: USD oc new-project my-namespace Label the namespace with enable-recording=true by running the following command: USD oc label ns my-namespace spo.x-k8s.io/enable-recording=true Create a ProfileRecording object containing a recorder: logs variable: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SeccompProfile recorder: logs podSelector: matchLabels: app: my-app Create a workload to record: apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1 Confirm the pod is in a Running state by entering the following command: USD oc -n my-namespace get pods Example output NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s Confirm the enricher indicates that it receives audit logs for those containers: USD oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher Example output I0523 14:19:08.747313 430694 enricher.go:445] log-enricher "msg"="audit" "container"="redis" "executable"="/usr/local/bin/redis-server" "namespace"="my-namespace" "node"="xiyuan-23-5g2q9-worker-eastus2-6rpgf" "pid"=656802 "pod"="my-pod" "syscallID"=0 "syscallName"="read" "timestamp"="1684851548.745:207179" "type"="seccomp" Verification Remove the pod: USD oc -n my-namepace delete pod my-pod Confirm the Security Profiles Operator reconciles the two seccomp profiles: USD oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace Example output for seccompprofile NAME STATUS AGE test-recording-nginx Installed 2m48s test-recording-redis Installed 2m48s 7.6.3.1. Merging per-container profile instances By default, each container instance records into a separate profile. The Security Profiles Operator can merge the per-container profiles into a single profile. Merging profiles is useful when deploying applications using ReplicaSet or Deployment objects. Procedure Edit a ProfileRecording object to include a mergeStrategy: containers variable: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SeccompProfile CRD # after reconciliation. 
name: test-recording namespace: my-namespace spec: kind: SeccompProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record Label the namespace by running the following command: USD oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true Create the workload with the following YAML: apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 To record the individual profiles, delete the deployment by running the following command: USD oc delete deployment nginx-deploy -n my-namespace To merge the profiles, delete the profile recording by running the following command: USD oc delete profilerecording test-recording -n my-namespace To start the merge operation and generate the results profile, run the following command: USD oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace Example output for seccompprofiles NAME STATUS AGE test-recording-nginx-record Installed 55s To view the permissions used by any of the containers, run the following command: USD oc get seccompprofiles test-recording-nginx-record -o yaml Additional resources Managing security context constraints Managing SCCs in OpenShift Using the log enricher About security profiles 7.7. Managing SELinux profiles Create and manage SELinux profiles and bind them to workloads. Important The Security Profiles Operator supports only Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Red Hat Enterprise Linux (RHEL) nodes are not supported. 7.7.1. Creating SELinux profiles Use the SelinuxProfile object to create profiles. The SelinuxProfile object has several features that allow for better security hardening and readability: Restricts the profiles to inherit from to the current namespace or a system-wide profile. Because there are typically many profiles installed on the system, but only a subset should be used by cluster workloads, the inheritable system profiles are listed in the spod instance in spec.selinuxOptions.allowedSystemProfiles . Performs basic validation of the permissions, classes and labels. Adds a new keyword @self that describes the process using the policy. This allows reusing a policy between workloads and namespaces easily, as the usage of the policy is based on the name and namespace. Adds features for better security hardening and readability compared to writing a profile directly in the SELinux CIL language. 
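Because only the system profiles listed in the spod instance can be inherited from, you can check which base profiles are currently allowed before writing a SelinuxProfile object. A minimal sketch, assuming the default spod instance name:

# List the inheritable system profiles configured for the Operator daemon.
oc -n openshift-security-profiles get spod spod -o jsonpath='{.spec.selinuxOptions.allowedSystemProfiles}{"\n"}'

The container profile inherited in the following example would normally need to appear in this list.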
Procedure Create a project by running the following command: USD oc new-project nginx-deploy Create a policy that can be used with a non-privileged workload by creating the following SelinuxProfile object: apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: allow: '@self': tcp_socket: - listen http_cache_port_t: tcp_socket: - name_bind node_t: tcp_socket: - node_bind inherit: - kind: System name: container Wait for selinuxd to install the policy by running the following command: USD oc wait --for=condition=ready -n nginx-deploy selinuxprofile nginx-secure Example output selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure condition met The policies are placed into an emptyDir in the container owned by the Security Profiles Operator. The policies are saved in Common Intermediate Language (CIL) format in /etc/selinux.d/<name>_<namespace>.cil . Access the pod by running the following command: USD oc -n openshift-security-profiles rsh -c selinuxd ds/spod Verification View the file contents with cat by running the following command: USD cat /etc/selinux.d/nginx-secure_nginx-deploy.cil Example output (block nginx-secure_nginx-deploy (blockinherit container) (allow process nginx-secure_nginx-deploy.process ( tcp_socket ( listen ))) (allow process http_cache_port_t ( tcp_socket ( name_bind ))) (allow process node_t ( tcp_socket ( node_bind ))) ) Verify that a policy has been installed by running the following command: USD semodule -l | grep nginx-secure Example output nginx-secure_nginx-deploy 7.7.2. Applying SELinux profiles to a pod Create a pod to apply one of the created profiles. For SELinux profiles, the namespace must be labelled to allow privileged workloads. Procedure Apply the scc.podSecurityLabelSync=false label to the nginx-deploy namespace by running the following command: USD oc label ns nginx-deploy security.openshift.io/scc.podSecurityLabelSync=false Apply the privileged label to the nginx-deploy namespace by running the following command: USD oc label ns nginx-deploy --overwrite=true pod-security.kubernetes.io/enforce=privileged Obtain the SELinux profile usage string by running the following command: USD oc get selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure -n nginx-deploy -ojsonpath='{.status.usage}' Example output nginx-secure_nginx-deploy.process Apply the output string in the workload manifest in the .spec.containers[].securityContext.seLinuxOptions attribute: apiVersion: v1 kind: Pod metadata: name: nginx-secure namespace: nginx-deploy spec: containers: - image: nginxinc/nginx-unprivileged:1.21 name: nginx securityContext: seLinuxOptions: # NOTE: This uses an appropriate SELinux type type: nginx-secure_nginx-deploy.process Important The SELinux type must exist before creating the workload. 7.7.2.1. Applying SELinux log policies To log policy violations or AVC denials, set the SElinuxProfile profile to permissive . Important This procedure defines logging policies. It does not set enforcement policies. Procedure Add permissive: true to an SElinuxProfile : apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: permissive: true 7.7.2.2. Binding workloads to profiles with ProfileBindings You can use the ProfileBinding resource to bind a security profile to the SecurityContext of a container. 
Procedure To bind a pod that uses a quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 image to the example SelinuxProfile profile, create a ProfileBinding object in the same namespace with the pod and the SelinuxProfile objects: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SelinuxProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3 1 The kind: variable refers to the kind of the profile. 2 The name: variable refers to the name of the profile. 3 You can enable a default security profile by using a wildcard in the image attribute: image: "*" Important Using the image: "*" wildcard attribute binds all new pods with a default security profile in a given namespace. Label the namespace with enable-binding=true by running the following command: USD oc label ns my-namespace spo.x-k8s.io/enable-binding=true Define a pod named test-pod.yaml : apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 Create the pod: USD oc create -f test-pod.yaml Note If the pod already exists, you must re-create the pod for the binding to work properly. Verification Confirm the pod inherits the ProfileBinding by running the following command: USD oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seLinuxOptions.type}' Example output profile_nginx-binding.process 7.7.2.3. Replicating controllers and SecurityContextConstraints When you deploy SELinux policies for replicating controllers, such as deployments or daemon sets, note that the Pod objects spawned by the controllers are not running with the identity of the user who creates the workload. Unless a ServiceAccount is selected, the pods might revert to using a restricted SecurityContextConstraints (SCC) which does not allow use of custom security policies. Procedure Create a project by running the following command: USD oc new-project nginx-secure Create the following RoleBinding object to allow SELinux policies to be used in the nginx-secure namespace: kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: spo-nginx namespace: nginx-secure subjects: - kind: ServiceAccount name: spo-deploy-test roleRef: kind: Role name: spo-nginx apiGroup: rbac.authorization.k8s.io Create the Role object: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: spo-nginx namespace: nginx-secure rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints resourceNames: - privileged verbs: - use Create the ServiceAccount object: apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: spo-deploy-test namespace: nginx-secure Create the Deployment object: apiVersion: apps/v1 kind: Deployment metadata: name: selinux-test namespace: nginx-secure metadata: labels: app: selinux-test spec: replicas: 3 selector: matchLabels: app: selinux-test template: metadata: labels: app: selinux-test spec: serviceAccountName: spo-deploy-test securityContext: seLinuxOptions: type: nginx-secure_nginx-secure.process 1 containers: - name: nginx-unpriv image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 1 The .seLinuxOptions.type must exist before the Deployment is created. Note The SELinux type is not specified in the workload and is handled by the SCC. 
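After the Deployment is created, you can spot-check that the spawned pods carry the expected SELinux type. A short sketch using the namespace and labels from the example above:

# Print each pod name together with the SELinux type set at the pod level.
oc -n nginx-secure get pods -l app=selinux-test -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.securityContext.seLinuxOptions.type}{"\n"}{end}'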
When the pods are created by the deployment and the ReplicaSet , the pods will run with the appropriate profile. Ensure that your SCC is usable by only the correct service account. Refer to Additional resources for more information. 7.7.3. Recording profiles from workloads The Security Profiles Operator can record system calls with ProfileRecording objects, making it easier to create baseline profiles for applications. When using the log enricher for recording SELinux profiles, verify the log enricher feature is enabled. See Additional resources for more information. Note A container with privileged: true security context restraints prevents log-based recording. Privileged containers are not subject to SELinux policies, and log-based recording makes use of a special SELinux profile to record events. Procedure Create a project by running the following command: USD oc new-project my-namespace Label the namespace with enable-recording=true by running the following command: USD oc label ns my-namespace spo.x-k8s.io/enable-recording=true Create a ProfileRecording object containing a recorder: logs variable: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SelinuxProfile recorder: logs podSelector: matchLabels: app: my-app Create a workload to record: apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1 Confirm the pod is in a Running state by entering the following command: USD oc -n my-namespace get pods Example output NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s Confirm the enricher indicates that it receives audit logs for those containers: USD oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher Example output I0517 13:55:36.383187 348295 enricher.go:376] log-enricher "msg"="audit" "container"="redis" "namespace"="my-namespace" "node"="ip-10-0-189-53.us-east-2.compute.internal" "perm"="name_bind" "pod"="my-pod" "profile"="test-recording_redis_6kmrb_1684331729" "scontext"="system_u:system_r:selinuxrecording.process:s0:c4,c27" "tclass"="tcp_socket" "tcontext"="system_u:object_r:redis_port_t:s0" "timestamp"="1684331735.105:273965" "type"="selinux" Verification Remove the pod: USD oc -n my-namepace delete pod my-pod Confirm the Security Profiles Operator reconciles the two SELinux profiles: USD oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace Example output for selinuxprofile NAME USAGE STATE test-recording-nginx test-recording-nginx_my-namespace.process Installed test-recording-redis test-recording-redis_my-namespace.process Installed 7.7.3.1. Merging per-container profile instances By default, each container instance records into a separate profile. The Security Profiles Operator can merge the per-container profiles into a single profile. Merging profiles is useful when deploying applications using ReplicaSet or Deployment objects. Procedure Edit a ProfileRecording object to include a mergeStrategy: containers variable: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SelinuxProfile CRD # after reconciliation. 
name: test-recording namespace: my-namespace spec: kind: SelinuxProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record Label the namespace by running the following command: USD oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true Create the workload with the following YAML: apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 To record the individual profiles, delete the deployment by running the following command: USD oc delete deployment nginx-deploy -n my-namespace To merge the profiles, delete the profile recording by running the following command: USD oc delete profilerecording test-recording -n my-namespace To start the merge operation and generate the results profile, run the following command: USD oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace Example output for selinuxprofiles NAME USAGE STATE test-recording-nginx-record test-recording-nginx-record_my-namespace.process Installed To view the permissions used by any of the containers, run the following command: USD oc get selinuxprofiles test-recording-nginx-record -o yaml 7.7.3.2. About seLinuxContext: RunAsAny Recording of SELinux policies is implemented with a webhook that injects a special SELinux type to the pods being recorded. The SELinux type makes the pod run in permissive mode, logging all the AVC denials into audit.log . By default, a workload is not allowed to run with a custom SELinux policy, but uses an auto-generated type. To record a workload, the workload must use a service account that has permissions to use an SCC that allows the webhook to inject the permissive SELinux type. The privileged SCC contains seLinuxContext: RunAsAny . In addition, the namespace must be labeled with pod-security.kubernetes.io/enforce: privileged if your cluster enables the Pod Security Admission because only the privileged Pod Security Standard allows using a custom SELinux policy. Additional resources Managing security context constraints Managing SCCs in OpenShift Using the log enricher About security profiles 7.8. Advanced Security Profiles Operator tasks Use advanced tasks to enable metrics, configure webhooks, or restrict syscalls. 7.8.1. Restrict the allowed syscalls in seccomp profiles The Security Profiles Operator does not restrict syscalls in seccomp profiles by default. You can define the list of allowed syscalls in the spod configuration. Procedure To define the list of allowedSyscalls , adjust the spec parameter by running the following command: USD oc -n openshift-security-profiles patch spod spod --type merge \ -p '{"spec":{"allowedSyscalls": ["exit", "exit_group", "futex", "nanosleep"]}}' Important The Operator will install only the seccomp profiles, which have a subset of syscalls defined into the allowed list. All profiles not complying with this ruleset are rejected. When the list of allowed syscalls is modified in the spod configuration, the Operator will identify the already installed profiles which are non-compliant and remove them automatically. 7.8.2. 
Base syscalls for a container runtime You can use the baseProfileName attribute to establish the minimum required syscalls for a given runtime to start a container. Procedure Edit the SeccompProfile kind object and add baseProfileName: runc-v1.0.0 to the spec field: apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: example-name spec: defaultAction: SCMP_ACT_ERRNO baseProfileName: runc-v1.0.0 syscalls: - action: SCMP_ACT_ALLOW names: - exit_group 7.8.3. Enabling memory optimization in the spod daemon The controller running inside of spod daemon process watches all pods available in the cluster when profile recording is enabled. This can lead to very high memory usage in large clusters, resulting in the spod daemon running out of memory or crashing. To prevent crashes, the spod daemon can be configured to only load the pods labeled for profile recording into the cache memory. + Note SPO memory optimization is not enabled by default. Procedure Enable memory optimization by running the following command: USD oc -n openshift-security-profiles patch spod spod --type=merge -p '{"spec":{"enableMemoryOptimization":true}}' To record a security profile for a pod, the pod must be labeled with spo.x-k8s.io/enable-recording: "true" : apiVersion: v1 kind: Pod metadata: name: my-recording-pod labels: spo.x-k8s.io/enable-recording: "true" 7.8.4. Customizing daemon resource requirements The default resource requirements of the daemon container can be adjusted by using the field daemonResourceRequirements from the spod configuration. Procedure To specify the memory and cpu requests and limits of the daemon container, run the following command: USD oc -n openshift-security-profiles patch spod spod --type merge -p \ '{"spec":{"daemonResourceRequirements": { \ "requests": {"memory": "256Mi", "cpu": "250m"}, \ "limits": {"memory": "512Mi", "cpu": "500m"}}}}' 7.8.5. Setting a custom priority class name for the spod daemon pod The default priority class name of the spod daemon pod is set to system-node-critical . A custom priority class name can be configured in the spod configuration by setting a value in the priorityClassName field. Procedure Configure the priority class name by running the following command: USD oc -n openshift-security-profiles patch spod spod --type=merge -p '{"spec":{"priorityClassName":"my-priority-class"}}' Example output securityprofilesoperatordaemon.openshift-security-profiles.x-k8s.io/spod patched 7.8.6. Using metrics The openshift-security-profiles namespace provides metrics endpoints, which are secured by the kube-rbac-proxy container. All metrics are exposed by the metrics service within the openshift-security-profiles namespace. The Security Profiles Operator includes a cluster role and corresponding binding spo-metrics-client to retrieve the metrics from within the cluster. 
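If a workload outside the openshift-security-profiles namespace needs to scrape these endpoints, its ServiceAccount can be granted the same cluster role. The following ClusterRoleBinding is only a sketch; the binding name, ServiceAccount name, and namespace are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spo-metrics-client-custom    # placeholder binding name
subjects:
- kind: ServiceAccount
  name: my-metrics-reader            # placeholder ServiceAccount
  namespace: my-monitoring           # placeholder namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: spo-metrics-client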
There are two metrics paths available: metrics.openshift-security-profiles/metrics : for controller runtime metrics metrics.openshift-security-profiles/metrics-spod : for the Operator daemon metrics Procedure To view the status of the metrics service, run the following command: USD oc get svc/metrics -n openshift-security-profiles Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metrics ClusterIP 10.0.0.228 <none> 443/TCP 43s To retrieve the metrics, query the service endpoint using the default ServiceAccount token in the openshift-security-profiles namespace by running the following command: USD oc run --rm -i --restart=Never --image=registry.fedoraproject.org/fedora-minimal:latest \ -n openshift-security-profiles metrics-test -- bash -c \ 'curl -ks -H "Authorization: Bearer USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://metrics.openshift-security-profiles/metrics-spod' Example output # HELP security_profiles_operator_seccomp_profile_total Counter about seccomp profile operations. # TYPE security_profiles_operator_seccomp_profile_total counter security_profiles_operator_seccomp_profile_total{operation="delete"} 1 security_profiles_operator_seccomp_profile_total{operation="update"} 2 To retrieve metrics from a different namespace, link the ServiceAccount to the spo-metrics-client ClusterRoleBinding by running the following command: USD oc get clusterrolebinding spo-metrics-client -o wide Example output NAME ROLE AGE USERS GROUPS SERVICEACCOUNTS spo-metrics-client ClusterRole/spo-metrics-client 35m openshift-security-profiles/default 7.8.6.1. controller-runtime metrics The controller-runtime metrics and the DaemonSet endpoint metrics-spod provide a set of default metrics. Additional metrics are provided by the daemon, which are always prefixed with security_profiles_operator_ . Table 7.1. Available controller-runtime metrics Metric key Possible labels Type Purpose seccomp_profile_total operation={delete,update} Counter Amount of seccomp profile operations. seccomp_profile_audit_total node , namespace , pod , container , executable , syscall Counter Amount of seccomp profile audit operations. Requires the log enricher to be enabled. seccomp_profile_bpf_total node , mount_namespace , profile Counter Amount of seccomp profile bpf operations. Requires the bpf recorder to be enabled. seccomp_profile_error_total reason={ SeccompNotSupportedOnNode, InvalidSeccompProfile, CannotSaveSeccompProfile, CannotRemoveSeccompProfile, CannotUpdateSeccompProfile, CannotUpdateNodeStatus } Counter Amount of seccomp profile errors. selinux_profile_total operation={delete,update} Counter Amount of SELinux profile operations. selinux_profile_audit_total node , namespace , pod , container , executable , scontext , tcontext Counter Amount of SELinux profile audit operations. Requires the log enricher to be enabled. selinux_profile_error_total reason={ CannotSaveSelinuxPolicy, CannotUpdatePolicyStatus, CannotRemoveSelinuxPolicy, CannotContactSelinuxd, CannotWritePolicyFile, CannotGetPolicyStatus } Counter Amount of SELinux profile errors. 7.8.7. Using the log enricher The Security Profiles Operator contains a log enrichment feature, which is disabled by default. The log enricher container runs with privileged permissions to read the audit logs from the local node. The log enricher runs within the host PID namespace, hostPID . Important The log enricher must have permissions to read the host processes. 
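Before enabling the enricher, you can check whether it is already turned on for the spod instance. A minimal sketch:

# Prints true when the log enricher is already enabled; empty or false otherwise.
oc -n openshift-security-profiles get spod spod -o jsonpath='{.spec.enableLogEnricher}{"\n"}'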
Procedure Patch the spod configuration to enable the log enricher by running the following command: USD oc -n openshift-security-profiles patch spod spod \ --type=merge -p '{"spec":{"enableLogEnricher":true}}' Example output securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched Note The Security Profiles Operator will re-deploy the spod daemon set automatically. View the audit logs by running the following command: USD oc -n openshift-security-profiles logs -f ds/spod log-enricher Example output I0623 12:51:04.257814 1854764 deleg.go:130] setup "msg"="starting component: log-enricher" "buildDate"="1980-01-01T00:00:00Z" "compiler"="gc" "gitCommit"="unknown" "gitTreeState"="clean" "goVersion"="go1.16.2" "platform"="linux/amd64" "version"="0.4.0-dev" I0623 12:51:04.257890 1854764 enricher.go:44] log-enricher "msg"="Starting log-enricher on node: 127.0.0.1" I0623 12:51:04.257898 1854764 enricher.go:46] log-enricher "msg"="Connecting to local GRPC server" I0623 12:51:04.258061 1854764 enricher.go:69] log-enricher "msg"="Reading from file /var/log/audit/audit.log" 2021/06/23 12:51:04 Seeked /var/log/audit/audit.log - &{Offset:0 Whence:2} 7.8.7.1. Using the log enricher to trace an application You can use the Security Profiles Operator log enricher to trace an application. Procedure To trace an application, create a SeccompProfile logging profile: apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: name: log namespace: default spec: defaultAction: SCMP_ACT_LOG Create a pod object to use the profile: apiVersion: v1 kind: Pod metadata: name: log-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/default/log.json containers: - name: log-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 Examine the log enricher output by running the following command: USD oc -n openshift-security-profiles logs -f ds/spod log-enricher Example 7.1. Example output ... I0623 12:59:11.479869 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=3 "syscallName"="close" "timestamp"="1624453150.205:1061" "type"="seccomp" I0623 12:59:11.487323 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=157 "syscallName"="prctl" "timestamp"="1624453150.205:1062" "type"="seccomp" I0623 12:59:11.492157 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=157 "syscallName"="prctl" "timestamp"="1624453150.205:1063" "type"="seccomp" ... 
I0623 12:59:20.258523 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=12 "syscallName"="brk" "timestamp"="1624453150.235:2873" "type"="seccomp" I0623 12:59:20.263349 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=21 "syscallName"="access" "timestamp"="1624453150.235:2874" "type"="seccomp" I0623 12:59:20.354091 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=257 "syscallName"="openat" "timestamp"="1624453150.235:2875" "type"="seccomp" I0623 12:59:20.358844 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=5 "syscallName"="fstat" "timestamp"="1624453150.235:2876" "type"="seccomp" I0623 12:59:20.363510 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=9 "syscallName"="mmap" "timestamp"="1624453150.235:2877" "type"="seccomp" I0623 12:59:20.454127 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=3 "syscallName"="close" "timestamp"="1624453150.235:2878" "type"="seccomp" I0623 12:59:20.458654 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=257 "syscallName"="openat" "timestamp"="1624453150.235:2879" "type"="seccomp" ... 7.8.8. Configuring webhooks Profile binding and profile recording objects can use webhooks. Profile binding and recording object configurations are MutatingWebhookConfiguration CRs, managed by the Security Profiles Operator. To change the webhook configuration, the spod CR exposes a webhookOptions field that allows modification of the failurePolicy , namespaceSelector , and objectSelector variables. This allows you to set the webhooks to "soft-fail" or restrict them to a subset of a namespaces so that even if the webhooks failed, other namespaces or resources are not affected. Procedure Set the recording.spo.io webhook configuration to record only pods labeled with spo-record=true by creating the following patch file: spec: webhookOptions: - name: recording.spo.io objectSelector: matchExpressions: - key: spo-record operator: In values: - "true" Patch the spod/spod instance by running the following command: USD oc -n openshift-security-profiles patch spod \ spod -p USD(cat /tmp/spod-wh.patch) --type=merge To view the resulting MutatingWebhookConfiguration object, run the following command: USD oc get MutatingWebhookConfiguration \ spo-mutating-webhook-configuration -oyaml 7.9. Troubleshooting the Security Profiles Operator Troubleshoot the Security Profiles Operator to diagnose a problem or provide information in a bug report. 7.9.1. Inspecting seccomp profiles Corrupted seccomp profiles can disrupt your workloads. 
Ensure that the user cannot abuse the system by not allowing other workloads to map any part of the path /var/lib/kubelet/seccomp/operator . Procedure Confirm that the profile is reconciled by running the following command: USD oc -n openshift-security-profiles logs openshift-security-profiles-<id> Example 7.2. Example output I1019 19:34:14.942464 1 main.go:90] setup "msg"="starting openshift-security-profiles" "buildDate"="2020-10-19T19:31:24Z" "compiler"="gc" "gitCommit"="a3ef0e1ea6405092268c18f240b62015c247dd9d" "gitTreeState"="dirty" "goVersion"="go1.15.1" "platform"="linux/amd64" "version"="0.2.0-dev" I1019 19:34:15.348389 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"=":8080" I1019 19:34:15.349076 1 main.go:126] setup "msg"="starting manager" I1019 19:34:15.349449 1 internal.go:391] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics" I1019 19:34:15.350201 1 controller.go:142] controller "msg"="Starting EventSource" "controller"="profile" "reconcilerGroup"="security-profiles-operator.x-k8s.io" "reconcilerKind"="SeccompProfile" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"defaultAction":""}}} I1019 19:34:15.450674 1 controller.go:149] controller "msg"="Starting Controller" "controller"="profile" "reconcilerGroup"="security-profiles-operator.x-k8s.io" "reconcilerKind"="SeccompProfile" I1019 19:34:15.450757 1 controller.go:176] controller "msg"="Starting workers" "controller"="profile" "reconcilerGroup"="security-profiles-operator.x-k8s.io" "reconcilerKind"="SeccompProfile" "worker count"=1 I1019 19:34:15.453102 1 profile.go:148] profile "msg"="Reconciled profile from SeccompProfile" "namespace"="openshift-security-profiles" "profile"="nginx-1.19.1" "name"="nginx-1.19.1" "resource version"="728" I1019 19:34:15.453618 1 profile.go:148] profile "msg"="Reconciled profile from SeccompProfile" "namespace"="openshift-security-profiles" "profile"="openshift-security-profiles" "name"="openshift-security-profiles" "resource version"="729" Confirm that the seccomp profiles are saved into the correct path by running the following command: USD oc exec -t -n openshift-security-profiles openshift-security-profiles-<id> \ -- ls /var/lib/kubelet/seccomp/operator/my-namespace/my-workload Example output profile-block.json profile-complain.json 7.10. Uninstalling the Security Profiles Operator You can remove the Security Profiles Operator from your cluster by using the OpenShift Container Platform web console. 7.10.1. Uninstall the Security Profiles Operator using the web console To remove the Security Profiles Operator, you must first delete the seccomp and SELinux profiles. After the profiles are removed, you can then remove the Operator and its namespace by deleting the openshift-security-profiles project. Prerequisites Access to an OpenShift Container Platform cluster that uses an account with cluster-admin permissions. The Security Profiles Operator is installed. Procedure To remove the Security Profiles Operator by using the OpenShift Container Platform web console: Navigate to the Operators Installed Operators page. Delete all seccomp profiles, SELinux profiles, and webhook configurations. Switch to the Administration Operators Installed Operators page. Click the Options menu on the Security Profiles Operator entry and select Uninstall Operator . Switch to the Home Projects page. Search for security profiles . Click the Options menu to the openshift-security-profiles project, and select Delete Project . 
Confirm the deletion by typing openshift-security-profiles in the dialog box, and click Delete . Delete the MutatingWebhookConfiguration object by running the following command: USD oc delete MutatingWebhookConfiguration spo-mutating-webhook-configuration
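As a final check after the uninstall, you can confirm that the project, webhook configuration, and any remaining custom resource definitions are gone. A hedged sketch; the exact CRD names can vary between Operator versions:

# Each command should come back empty or NotFound; any CRDs still listed were
# left behind by the uninstall and can be reviewed separately.
oc get namespace openshift-security-profiles
oc get mutatingwebhookconfigurations | grep -i spo
oc get crd | grep -i security-profiles-operator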
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-security-profiles labels: openshift.io/cluster-monitoring: \"true\"", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: security-profiles-operator namespace: openshift-security-profiles", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: security-profiles-operator-sub namespace: openshift-security-profiles spec: channel: release-alpha-rhel-8 installPlanApproval: Automatic name: security-profiles-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f subscription-object.yaml", "oc get csv -n openshift-security-profiles", "oc get deploy -n openshift-security-profiles", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"verbosity\":1}}'", "securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched", "oc new-project my-namespace", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: profile1 spec: defaultAction: SCMP_ACT_LOG", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc -n my-namespace get seccompprofile profile1 --output wide", "NAME STATUS AGE SECCOMPPROFILE.LOCALHOSTPROFILE profile1 Installed 14s operator/my-namespace/profile1.json", "oc get sp profile1 --output=jsonpath='{.status.localhostProfile}'", "operator/my-namespace/profile1.json", "spec: template: spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json", "oc -n my-namespace patch deployment myapp --patch-file patch.yaml --type=merge", "deployment.apps/myapp patched", "oc -n my-namespace get deployment myapp --output=jsonpath='{.spec.template.spec.securityContext}' | jq .", "{ \"seccompProfile\": { \"localhostProfile\": \"operator/my-namespace/profile1.json\", \"type\": \"localhost\" } }", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SeccompProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3", "oc label ns my-namespace spo.x-k8s.io/enable-binding=true", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc create -f test-pod.yaml", "oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seccompProfile}'", "{\"localhostProfile\":\"operator/my-namespace/profile.json\",\"type\":\"Localhost\"}", "oc new-project my-namespace", "oc label ns my-namespace spo.x-k8s.io/enable-recording=true", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SeccompProfile recorder: logs podSelector: matchLabels: app: my-app", "apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1", "oc -n my-namespace get pods", "NAME READY STATUS 
RESTARTS AGE my-pod 2/2 Running 0 18s", "oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher", "I0523 14:19:08.747313 430694 enricher.go:445] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"executable\"=\"/usr/local/bin/redis-server\" \"namespace\"=\"my-namespace\" \"node\"=\"xiyuan-23-5g2q9-worker-eastus2-6rpgf\" \"pid\"=656802 \"pod\"=\"my-pod\" \"syscallID\"=0 \"syscallName\"=\"read\" \"timestamp\"=\"1684851548.745:207179\" \"type\"=\"seccomp\"", "oc -n my-namepace delete pod my-pod", "oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME STATUS AGE test-recording-nginx Installed 2m48s test-recording-redis Installed 2m48s", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SeccompProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SeccompProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc delete deployment nginx-deploy -n my-namespace", "oc delete profilerecording test-recording -n my-namespace", "oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME STATUS AGE test-recording-nginx-record Installed 55s", "oc get seccompprofiles test-recording-nginx-record -o yaml", "oc new-project nginx-deploy", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: allow: '@self': tcp_socket: - listen http_cache_port_t: tcp_socket: - name_bind node_t: tcp_socket: - node_bind inherit: - kind: System name: container", "oc wait --for=condition=ready -n nginx-deploy selinuxprofile nginx-secure", "selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure condition met", "oc -n openshift-security-profiles rsh -c selinuxd ds/spod", "cat /etc/selinux.d/nginx-secure_nginx-deploy.cil", "(block nginx-secure_nginx-deploy (blockinherit container) (allow process nginx-secure_nginx-deploy.process ( tcp_socket ( listen ))) (allow process http_cache_port_t ( tcp_socket ( name_bind ))) (allow process node_t ( tcp_socket ( node_bind ))) )", "semodule -l | grep nginx-secure", "nginx-secure_nginx-deploy", "oc label ns nginx-deploy security.openshift.io/scc.podSecurityLabelSync=false", "oc label ns nginx-deploy --overwrite=true pod-security.kubernetes.io/enforce=privileged", "oc get selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure -n nginx-deploy -ojsonpath='{.status.usage}'", "nginx-secure_nginx-deploy.process", "apiVersion: v1 kind: Pod metadata: name: nginx-secure namespace: nginx-deploy spec: containers: - image: nginxinc/nginx-unprivileged:1.21 name: nginx securityContext: seLinuxOptions: # NOTE: This uses an appropriate SELinux type type: nginx-secure_nginx-deploy.process", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: 
SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: permissive: true", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SelinuxProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3", "oc label ns my-namespace spo.x-k8s.io/enable-binding=true", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc create -f test-pod.yaml", "oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seLinuxOptions.type}'", "profile_nginx-binding.process", "oc new-project nginx-secure", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: spo-nginx namespace: nginx-secure subjects: - kind: ServiceAccount name: spo-deploy-test roleRef: kind: Role name: spo-nginx apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: spo-nginx namespace: nginx-secure rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints resourceNames: - privileged verbs: - use", "apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: spo-deploy-test namespace: nginx-secure", "apiVersion: apps/v1 kind: Deployment metadata: name: selinux-test namespace: nginx-secure metadata: labels: app: selinux-test spec: replicas: 3 selector: matchLabels: app: selinux-test template: metadata: labels: app: selinux-test spec: serviceAccountName: spo-deploy-test securityContext: seLinuxOptions: type: nginx-secure_nginx-secure.process 1 containers: - name: nginx-unpriv image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc new-project my-namespace", "oc label ns my-namespace spo.x-k8s.io/enable-recording=true", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SelinuxProfile recorder: logs podSelector: matchLabels: app: my-app", "apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1", "oc -n my-namespace get pods", "NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s", "oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher", "I0517 13:55:36.383187 348295 enricher.go:376] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"namespace\"=\"my-namespace\" \"node\"=\"ip-10-0-189-53.us-east-2.compute.internal\" \"perm\"=\"name_bind\" \"pod\"=\"my-pod\" \"profile\"=\"test-recording_redis_6kmrb_1684331729\" \"scontext\"=\"system_u:system_r:selinuxrecording.process:s0:c4,c27\" \"tclass\"=\"tcp_socket\" \"tcontext\"=\"system_u:object_r:redis_port_t:s0\" \"timestamp\"=\"1684331735.105:273965\" \"type\"=\"selinux\"", "oc -n my-namepace delete pod my-pod", "oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME USAGE STATE test-recording-nginx test-recording-nginx_my-namespace.process Installed test-recording-redis test-recording-redis_my-namespace.process Installed", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the 
Recording is the same as the resulting SelinuxProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SelinuxProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc delete deployment nginx-deploy -n my-namespace", "oc delete profilerecording test-recording -n my-namespace", "oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME USAGE STATE test-recording-nginx-record test-recording-nginx-record_my-namespace.process Installed", "oc get selinuxprofiles test-recording-nginx-record -o yaml", "oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"allowedSyscalls\": [\"exit\", \"exit_group\", \"futex\", \"nanosleep\"]}}'", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: example-name spec: defaultAction: SCMP_ACT_ERRNO baseProfileName: runc-v1.0.0 syscalls: - action: SCMP_ACT_ALLOW names: - exit_group", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableMemoryOptimization\":true}}'", "apiVersion: v1 kind: Pod metadata: name: my-recording-pod labels: spo.x-k8s.io/enable-recording: \"true\"", "oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"daemonResourceRequirements\": { \"requests\": {\"memory\": \"256Mi\", \"cpu\": \"250m\"}, \"limits\": {\"memory\": \"512Mi\", \"cpu\": \"500m\"}}}}'", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"priorityClassName\":\"my-priority-class\"}}'", "securityprofilesoperatordaemon.openshift-security-profiles.x-k8s.io/spod patched", "oc get svc/metrics -n openshift-security-profiles", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metrics ClusterIP 10.0.0.228 <none> 443/TCP 43s", "oc run --rm -i --restart=Never --image=registry.fedoraproject.org/fedora-minimal:latest -n openshift-security-profiles metrics-test -- bash -c 'curl -ks -H \"Authorization: Bearer USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://metrics.openshift-security-profiles/metrics-spod'", "HELP security_profiles_operator_seccomp_profile_total Counter about seccomp profile operations. 
TYPE security_profiles_operator_seccomp_profile_total counter security_profiles_operator_seccomp_profile_total{operation=\"delete\"} 1 security_profiles_operator_seccomp_profile_total{operation=\"update\"} 2", "oc get clusterrolebinding spo-metrics-client -o wide", "NAME ROLE AGE USERS GROUPS SERVICEACCOUNTS spo-metrics-client ClusterRole/spo-metrics-client 35m openshift-security-profiles/default", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableLogEnricher\":true}}'", "securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched", "oc -n openshift-security-profiles logs -f ds/spod log-enricher", "I0623 12:51:04.257814 1854764 deleg.go:130] setup \"msg\"=\"starting component: log-enricher\" \"buildDate\"=\"1980-01-01T00:00:00Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"unknown\" \"gitTreeState\"=\"clean\" \"goVersion\"=\"go1.16.2\" \"platform\"=\"linux/amd64\" \"version\"=\"0.4.0-dev\" I0623 12:51:04.257890 1854764 enricher.go:44] log-enricher \"msg\"=\"Starting log-enricher on node: 127.0.0.1\" I0623 12:51:04.257898 1854764 enricher.go:46] log-enricher \"msg\"=\"Connecting to local GRPC server\" I0623 12:51:04.258061 1854764 enricher.go:69] log-enricher \"msg\"=\"Reading from file /var/log/audit/audit.log\" 2021/06/23 12:51:04 Seeked /var/log/audit/audit.log - &{Offset:0 Whence:2}", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: name: log namespace: default spec: defaultAction: SCMP_ACT_LOG", "apiVersion: v1 kind: Pod metadata: name: log-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/default/log.json containers: - name: log-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc -n openshift-security-profiles logs -f ds/spod log-enricher", "... I0623 12:59:11.479869 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.205:1061\" \"type\"=\"seccomp\" I0623 12:59:11.487323 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1062\" \"type\"=\"seccomp\" I0623 12:59:11.492157 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1063\" \"type\"=\"seccomp\" ... 
I0623 12:59:20.258523 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=12 \"syscallName\"=\"brk\" \"timestamp\"=\"1624453150.235:2873\" \"type\"=\"seccomp\" I0623 12:59:20.263349 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=21 \"syscallName\"=\"access\" \"timestamp\"=\"1624453150.235:2874\" \"type\"=\"seccomp\" I0623 12:59:20.354091 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2875\" \"type\"=\"seccomp\" I0623 12:59:20.358844 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=5 \"syscallName\"=\"fstat\" \"timestamp\"=\"1624453150.235:2876\" \"type\"=\"seccomp\" I0623 12:59:20.363510 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=9 \"syscallName\"=\"mmap\" \"timestamp\"=\"1624453150.235:2877\" \"type\"=\"seccomp\" I0623 12:59:20.454127 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.235:2878\" \"type\"=\"seccomp\" I0623 12:59:20.458654 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2879\" \"type\"=\"seccomp\" ...", "spec: webhookOptions: - name: recording.spo.io objectSelector: matchExpressions: - key: spo-record operator: In values: - \"true\"", "oc -n openshift-security-profiles patch spod spod -p USD(cat /tmp/spod-wh.patch) --type=merge", "oc get MutatingWebhookConfiguration spo-mutating-webhook-configuration -oyaml", "oc -n openshift-security-profiles logs openshift-security-profiles-<id>", "I1019 19:34:14.942464 1 main.go:90] setup \"msg\"=\"starting openshift-security-profiles\" \"buildDate\"=\"2020-10-19T19:31:24Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"a3ef0e1ea6405092268c18f240b62015c247dd9d\" \"gitTreeState\"=\"dirty\" \"goVersion\"=\"go1.15.1\" \"platform\"=\"linux/amd64\" \"version\"=\"0.2.0-dev\" I1019 19:34:15.348389 1 listener.go:44] controller-runtime/metrics \"msg\"=\"metrics server is starting to listen\" \"addr\"=\":8080\" I1019 19:34:15.349076 1 main.go:126] setup \"msg\"=\"starting manager\" I1019 19:34:15.349449 1 internal.go:391] controller-runtime/manager \"msg\"=\"starting metrics server\" \"path\"=\"/metrics\" I1019 19:34:15.350201 1 controller.go:142] controller \"msg\"=\"Starting EventSource\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" 
\"reconcilerKind\"=\"SeccompProfile\" \"source\"={\"Type\":{\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"defaultAction\":\"\"}}} I1019 19:34:15.450674 1 controller.go:149] controller \"msg\"=\"Starting Controller\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" I1019 19:34:15.450757 1 controller.go:176] controller \"msg\"=\"Starting workers\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" \"worker count\"=1 I1019 19:34:15.453102 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"nginx-1.19.1\" \"name\"=\"nginx-1.19.1\" \"resource version\"=\"728\" I1019 19:34:15.453618 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"openshift-security-profiles\" \"name\"=\"openshift-security-profiles\" \"resource version\"=\"729\"", "oc exec -t -n openshift-security-profiles openshift-security-profiles-<id> -- ls /var/lib/kubelet/seccomp/operator/my-namespace/my-workload", "profile-block.json profile-complain.json", "oc delete MutatingWebhookConfiguration spo-mutating-webhook-configuration" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/security_and_compliance/security-profiles-operator
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/ipv6_networking_for_the_overcloud/making-open-source-more-inclusive
5.4.16.3. Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume
5.4.16.3. Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume You can convert an existing RAID1 LVM logical volume to an LVM linear logical volume with the lvconvert command by specifying the -m0 argument. This removes all the RAID data subvolumes and all the RAID metadata subvolumes that make up the RAID array, leaving the top-level RAID1 image as the linear logical volume. The following example displays an existing LVM RAID1 logical volume. The following command converts the LVM RAID1 logical volume my_vg/my_lv to an LVM linear device. When you convert an LVM RAID1 logical volume to an LVM linear volume, you can specify which physical volumes to remove. The following example shows the layout of an LVM RAID1 logical volume made up of two images: /dev/sda1 and /dev/sdb1 . In this example, the lvconvert command specifies that you want to remove /dev/sda1 , leaving /dev/sdb1 as the physical volume that makes up the linear device.
[ "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m0 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) lvconvert -m0 my_vg/my_lv /dev/sda1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sdb1(1)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/convert-RAID1-to-linear
Chapter 19. Managing Guests with the Virtual Machine Manager (virt-manager)
Chapter 19. Managing Guests with the Virtual Machine Manager (virt-manager) This chapter describes the Virtual Machine Manager ( virt-manager ) windows, dialog boxes, and various GUI controls. virt-manager provides a graphical view of hypervisors and guests on your host system and on remote host systems. virt-manager can perform virtualization management tasks, including: defining and creating guests, assigning memory, assigning virtual CPUs, monitoring operational performance, saving and restoring, pausing and resuming, and shutting down and starting guests, linking to the textual and graphical consoles, and performing live and offline migrations. Important It is important to note which user you are using. If you create a guest virtual machine with one user, you will not be able to retrieve information about it using another user. This is especially important when you create a virtual machine in virt-manager; the default user in that case is root unless otherwise specified. If you cannot list a virtual machine with the virsh list --all command, it is most likely because you are running the command as a different user than the one that created the virtual machine. 19.1. Starting virt-manager To start a virt-manager session, open the Applications menu, then the System Tools menu, and select Virtual Machine Manager ( virt-manager ). The virt-manager main window appears. Figure 19.1. Starting virt-manager Alternatively, virt-manager can be started remotely using ssh as demonstrated in the following command: Using ssh to manage virtual machines and hosts is discussed further in Section 18.2, "Remote Management with SSH" .
[ "ssh -X host's address virt-manager" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-Managing_guests_with_the_Virtual_Machine_Manager_virt_manager
Authentication and authorization
Authentication and authorization OpenShift Container Platform 4.13 Configuring user authentication and access controls for users and services Red Hat OpenShift Documentation Team
[ "oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1", "oc apply -f </path/to/file.yaml>", "oc describe oauth.config.openshift.io/cluster", "Spec: Token Config: Access Token Max Age Seconds: 172800", "oc edit oauth cluster", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: spec: tokenConfig: accessTokenInactivityTimeout: 400s 1", "oc get clusteroperators authentication", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 145m", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.13.0 True False False 145m", "error: You must be logged in to the server (Unauthorized)", "oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "{ \"issuer\": \"https://<namespace_route>\", 1 \"authorization_endpoint\": \"https://<namespace_route>/oauth/authorize\", 2 \"token_endpoint\": \"https://<namespace_route>/oauth/token\", 3 \"scopes_supported\": [ 4 \"user:full\", \"user:info\", \"user:check-access\", \"user:list-scoped-projects\", \"user:list-projects\" ], \"response_types_supported\": [ 5 \"code\", \"token\" ], \"grant_types_supported\": [ 6 \"authorization_code\", \"implicit\" ], \"code_challenge_methods_supported\": [ 7 \"plain\", \"S256\" ] }", "oc get events | grep ServiceAccount", "1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "oc describe sa/proxy | grep -A5 Events", "Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io \"<name>\" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]", "Reason Message NoSAOAuthRedirectURIs [no kind \"<name>\" is registered for version \"v1\", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using 
serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]", "Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens", "oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host", "oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: demo 1 secret: \"...\" 2 redirectURIs: - \"http://www.example.com/\" 3 grantMethod: prompt 4 ')", "oc edit oauthclient <oauth_client> 1", "apiVersion: oauth.openshift.io/v1 grantMethod: auto kind: OAuthClient metadata: accessTokenInactivityTimeoutSeconds: 600 1", "oc get useroauthaccesstokens", "NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full", "oc get useroauthaccesstokens --field-selector=clientName=\"console\"", "NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full", "oc describe useroauthaccesstokens <token_name>", "Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>", "oc delete useroauthaccesstokens <token_name>", "useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted", "oc delete secrets kubeadmin -n kube-system", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc create user <username>", "oc create identity <identity_provider>:<identity_provider_user_id>", "oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username>", "htpasswd -c -B -b </path/to/users.htpasswd> <username> <password>", "htpasswd -c -B -b users.htpasswd <username> <password>", "Adding password for user user1", "htpasswd -B -b </path/to/users.htpasswd> <user_name> <password>", "> htpasswd.exe -c -B -b <\\path\\to\\users.htpasswd> <username> <password>", "> htpasswd.exe -c -B -b users.htpasswd <username> <password>", "Adding password for user user1", "> htpasswd.exe -b <\\path\\to\\users.htpasswd> <username> <password>", "oc create secret generic htpass-secret --from-file=htpasswd=<path_to_users.htpasswd> -n openshift-config 1", "apiVersion: v1 
kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_htpasswd_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd", "htpasswd -bB users.htpasswd <username> <password>", "Adding password for user <username>", "htpasswd -D users.htpasswd <username>", "Deleting password for user <username>", "oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -", "apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>", "oc delete user <username>", "user.user.openshift.io \"<username>\" deleted", "oc delete identity my_htpasswd_provider:<username>", "identity.user.openshift.io \"my_htpasswd_provider:<username>\" deleted", "oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: keystoneidp 1 mappingMethod: claim 2 type: Keystone keystone: domainName: default 3 url: https://keystone.example.com:5000 4 ca: 5 name: ca-config-map tlsClientCert: 6 name: client-cert-secret tlsClientKey: 7 name: client-key-secret", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "ldap://host:port/basedn?attribute?scope?filter", "(&(<filter>)(<attribute>=<username>))", "ldap://ldap.example.com/o=Acme?cn?sub?(enabled=true)", "oc create secret generic ldap-secret --from-literal=bindPassword=<secret> -n openshift-config 1", "apiVersion: v1 kind: Secret metadata: name: ldap-secret namespace: openshift-config type: Opaque data: bindPassword: <base64_encoded_bind_password>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: ldapidp 1 mappingMethod: claim 2 type: LDAP ldap: attributes: id: 3 - dn email: 4 - mail name: 5 - cn preferredUsername: 6 - uid bindDN: \"\" 7 bindPassword: 8 name: ldap-secret ca: 9 name: ca-config-map insecure: false 10 url: \"ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid\" 11", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "{\"error\":\"Error message\"}", "{\"sub\":\"userid\"} 1", "{\"sub\":\"userid\", \"name\": \"User Name\", ...}", "{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}", "{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}", "oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config", "apiVersion: v1 kind: Secret metadata: 
name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: basicidp 1 mappingMethod: claim 2 type: BasicAuth basicAuth: url: https://www.example.com/remote-idp 3 ca: 4 name: ca-config-map tlsClientCert: 5 name: client-cert-secret tlsClientKey: 6 name: client-key-secret", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "<VirtualHost *:443> # CGI Scripts in here DocumentRoot /var/www/cgi-bin # SSL Directives SSLEngine on SSLCipherSuite PROFILE=SYSTEM SSLProxyCipherSuite PROFILE=SYSTEM SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key # Configure HTTPD to execute scripts ScriptAlias /basic /var/www/cgi-bin # Handles a failed login attempt ErrorDocument 401 /basic/fail.cgi # Handles authentication <Location /basic/login.cgi> AuthType Basic AuthName \"Please Log In\" AuthBasicProvider file AuthUserFile /etc/httpd/conf/passwords Require valid-user </Location> </VirtualHost>", "#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"sub\":\"userid\", \"name\":\"'USDREMOTE_USER'\"}' exit 0", "#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"error\": \"Login failure\"}' exit 0", "curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp", "{\"sub\":\"userid\"}", "{\"sub\":\"userid\", \"name\": \"User Name\", ...}", "{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}", "{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: requestheaderidp 1 mappingMethod: claim 2 type: RequestHeader requestHeader: challengeURL: \"https://www.example.com/challenging-proxy/oauth/authorize?USD{query}\" 3 loginURL: \"https://www.example.com/login-proxy/oauth/authorize?USD{query}\" 4 ca: 5 name: ca-config-map clientCommonNames: 6 - my-auth-proxy headers: 7 - X-Remote-User - SSO-User emailHeaders: 8 - X-Remote-User-Email nameHeaders: 9 - X-Remote-User-Display-Name preferredUsernameHeaders: 10 - X-Remote-User-Login", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config 1", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "LoadModule request_module modules/mod_request.so LoadModule auth_gssapi_module modules/mod_auth_gssapi.so Some Apache configurations might require these modules. LoadModule auth_form_module modules/mod_auth_form.so LoadModule session_module modules/mod_session.so Nothing needs to be served over HTTP. This virtual host simply redirects to HTTPS. 
<VirtualHost *:80> DocumentRoot /var/www/html RewriteEngine On RewriteRule ^(.*)USD https://%{HTTP_HOST}USD1 [R,L] </VirtualHost> <VirtualHost *:443> # This needs to match the certificates you generated. See the CN and X509v3 # Subject Alternative Name in the output of: # openssl x509 -text -in /etc/pki/tls/certs/localhost.crt ServerName www.example.com DocumentRoot /var/www/html SSLEngine on SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key SSLCACertificateFile /etc/pki/CA/certs/ca.crt SSLProxyEngine on SSLProxyCACertificateFile /etc/pki/CA/certs/ca.crt # It is critical to enforce client certificates. Otherwise, requests can # spoof the X-Remote-User header by accessing the /oauth/authorize endpoint # directly. SSLProxyMachineCertificateFile /etc/pki/tls/certs/authproxy.pem # To use the challenging-proxy, an X-Csrf-Token must be present. RewriteCond %{REQUEST_URI} ^/challenging-proxy RewriteCond %{HTTP:X-Csrf-Token} ^USD [NC] RewriteRule ^.* - [F,L] <Location /challenging-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" # For Kerberos AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. GssapiBasicAuth On # For ldap: # AuthBasicProvider ldap # AuthLDAPURL \"ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)\" </Location> <Location /login-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s env=REMOTE_USER GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. 
GssapiBasicAuth On ErrorDocument 401 /login.html </Location> </VirtualHost> RequestHeader unset X-Remote-User", "identityProviders: - name: requestheaderidp type: RequestHeader requestHeader: challengeURL: \"https://<namespace_route>/challenging-proxy/oauth/authorize?USD{query}\" loginURL: \"https://<namespace_route>/login-proxy/oauth/authorize?USD{query}\" ca: name: ca-config-map clientCommonNames: - my-auth-proxy headers: - X-Remote-User", "curl -L -k -H \"X-Remote-User: joe\" --cert /etc/pki/tls/certs/authproxy.pem https://<namespace_route>/oauth/token/request", "curl -L -k -H \"X-Remote-User: joe\" https://<namespace_route>/oauth/token/request", "curl -k -v -H 'X-Csrf-Token: 1' https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token", "curl -k -v -H 'X-Csrf-Token: 1' <challengeURL_redirect + query>", "kdestroy -c cache_name 1", "oc login -u <username>", "oc logout", "kinit", "oc login", "https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>", "https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: githubidp 1 mappingMethod: claim 2 type: GitHub github: ca: 3 name: ca-config-map clientID: {...} 4 clientSecret: 5 name: github-secret hostname: ... 
6 organizations: 7 - myorganization1 - myorganization2 teams: 8 - myorganization1/team-a - myorganization2/team-b", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: gitlabidp 1 mappingMethod: claim 2 type: GitLab gitlab: clientID: {...} 3 clientSecret: 4 name: gitlab-secret url: https://gitlab.com 5 ca: 6 name: ca-config-map", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: googleidp 1 mappingMethod: claim 2 type: Google google: clientID: {...} 3 clientSecret: 4 name: google-secret hostedDomain: \"example.com\" 5", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp 1 mappingMethod: claim 2 type: OpenID openID: clientID: ... 
3 clientSecret: 4 name: idp-secret claims: 5 preferredUsername: - preferred_username name: - name email: - email groups: - groups issuer: https://www.idp-issuer.com 6", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp mappingMethod: claim type: OpenID openID: clientID: clientSecret: name: idp-secret ca: 1 name: ca-config-map extraScopes: 2 - email - profile extraAuthorizeParameters: 3 include_granted_scopes: \"true\" claims: preferredUsername: 4 - preferred_username - email name: 5 - nickname - given_name - name email: 6 - custom_email_claim - email groups: 7 - groups issuer: https://www.idp-issuer.com", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc login -u <identity_provider_username> --server=<api_server_url_and_port>", "oc whoami", "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io 
[] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] 
deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] 
pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: 
rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. 
Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>", "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system", "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>", "oc policy add-role-to-user view system:serviceaccount:top-secret:robot", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret", "oc policy add-role-to-user <role_name> -z <service_account_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>", "oc policy add-role-to-group view system:serviceaccounts -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts", "oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers", "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>", "oc sa get-token <service_account_name>", "serviceaccounts.openshift.io/oauth-redirecturi.<name>", 
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"", "oc edit authentications cluster", "spec: serviceAccountIssuer: https://test.default.svc 1", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4", "oc create -f pod-projected-svc-token.yaml", "oc create token build-robot", 
"eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ", "runAsUser: type: MustRunAs uid: <id>", "runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue>", "runAsUser: type: MustRunAsNonRoot", "runAsUser: type: RunAsAny", "allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' 
creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*'", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group", "requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT", "oc create -f scc-admin.yaml", "securitycontextconstraints \"scc-admin\" created", "oc get scc scc-admin", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]", "oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: role-name 1 namespace: namespace 2 rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use", "oc get scc", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostmount-anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"nfs\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] nonroot-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] privileged true [\"*\"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] restricted 
false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] restricted-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]", "oc describe scc restricted", "Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>", "oc edit scc <scc_name>", "oc delete scc <scc_name>", "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false", "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true", "oc adm must-gather -- /usr/bin/gather_audit_logs", "zgrep -h pod-security.kubernetes.io/audit-violations must-gather.local.<archive_id>/<image_digest_id>/audit_logs/kube-apiserver/*log.gz | jq -r 'select((.annotations[\"pod-security.kubernetes.io/audit-violations\"] != null) and (.objectRef.resource==\"pods\")) | .objectRef.namespace + \" \" + .objectRef.name' | sort | uniq -c", "1 test-namespace my-pod", "oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username>", "oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> --as-group=<group1> --as-group=<group2>", "url: ldap://10.0.0.0:389 1 bindDN: cn=admin,dc=example,dc=com 2 bindPassword: <password> 3 insecure: false 4 ca: my-ldap-ca-bundle.crt 5", "baseDN: ou=users,dc=example,dc=com 1 scope: sub 2 derefAliases: never 3 timeout: 0 4 filter: (objectClass=person) 5 pageSize: 0 6", "groupUIDNameMapping: \"cn=group1,ou=groups,dc=example,dc=com\": firstgroup \"cn=group2,ou=groups,dc=example,dc=com\": secondgroup \"cn=group3,ou=groups,dc=example,dc=com\": thirdgroup", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 1 insecure: false 2 bindDN: cn=admin,dc=example,dc=com bindPassword: file: \"/etc/secrets/bindPassword\" rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 3 groupNameAttributes: [ cn ] 4 groupMembershipAttributes: [ member ] 5 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 6 userNameAttributes: [ mail ] 7 tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 activeDirectory: usersQuery: baseDN: 
\"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 1 groupMembershipAttributes: [ memberOf ] 2", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 1 groupNameAttributes: [ cn ] 2 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 3 groupMembershipAttributes: [ memberOf ] 4", "oc adm groups sync --sync-config=config.yaml --confirm", "oc adm groups sync --type=openshift --sync-config=config.yaml --confirm", "oc adm groups sync --whitelist=<whitelist_file> --sync-config=config.yaml --confirm", "oc adm groups sync --blacklist=<blacklist_file> --sync-config=config.yaml --confirm", "oc adm groups sync <group_unique_identifier> --sync-config=config.yaml --confirm", "oc adm groups sync <group_unique_identifier> --whitelist=<whitelist_file> --blacklist=<blacklist_file> --sync-config=config.yaml --confirm", "oc adm groups sync --type=openshift --whitelist=<whitelist_file> --sync-config=config.yaml --confirm", "oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc new-project ldap-sync 1", "kind: ServiceAccount apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync", "oc create -f ldap-sync-service-account.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ldap-group-syncer rules: - apiGroups: - user.openshift.io resources: - groups verbs: - get - list - create - update", "oc create -f ldap-sync-cluster-role.yaml", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ldap-group-syncer subjects: - kind: ServiceAccount name: ldap-group-syncer 1 namespace: ldap-sync roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ldap-group-syncer 2", "oc create -f ldap-sync-cluster-role-binding.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync data: sync.yaml: | 1 kind: LDAPSyncConfig apiVersion: v1 url: ldaps://10.0.0.0:389 2 insecure: false bindDN: cn=admin,dc=example,dc=com 3 bindPassword: file: \"/etc/secrets/bindPassword\" ca: /etc/ldap-ca/ca.crt rfc2307: 4 groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" 5 scope: sub filter: \"(objectClass=groupOfMembers)\" derefAliases: never pageSize: 0 groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" 6 scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn userNameAttributes: [ uid ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "oc create -f ldap-sync-config-map.yaml", "kind: CronJob apiVersion: batch/v1 metadata: name: ldap-group-syncer namespace: ldap-sync spec: 1 schedule: \"*/30 * * * *\" 2 concurrencyPolicy: Forbid jobTemplate: spec: backoffLimit: 0 ttlSecondsAfterFinished: 1800 3 template: spec: containers: - name: ldap-group-sync image: \"registry.redhat.io/openshift4/ose-cli:latest\" command: - \"/bin/bash\" - \"-c\" - \"oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm\" 4 volumeMounts: - 
mountPath: \"/etc/config\" name: \"ldap-sync-volume\" - mountPath: \"/etc/secrets\" name: \"ldap-bind-password\" - mountPath: \"/etc/ldap-ca\" name: \"ldap-ca\" volumes: - name: \"ldap-sync-volume\" configMap: name: \"ldap-group-syncer\" - name: \"ldap-bind-password\" secret: secretName: \"ldap-secret\" 5 - name: \"ldap-ca\" configMap: name: \"ca-config-map\" 6 restartPolicy: \"Never\" terminationGracePeriodSeconds: 30 activeDeadlineSeconds: 500 dnsPolicy: \"ClusterFirst\" serviceAccountName: \"ldap-group-syncer\"", "oc create -f ldap-sync-cron-job.yaml", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 1 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com 2 member: cn=Jim,ou=users,dc=example,dc=com", "oc adm groups sync --sync-config=rfc2307_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "kind: LDAPSyncConfig apiVersion: v1 groupUIDNameMapping: \"cn=admins,ou=groups,dc=example,dc=com\": Administrators 1 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 4 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: Administrators 1 users: - [email protected] - [email protected]", "Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with dn=\"<user-dn>\" would search outside of the base dn specified (dn=\"<base-dn>\")\".", "Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" refers to a non-existent entry\". 
Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" and filter \"<filter>\" did not return any results\".", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com member: cn=INVALID,ou=users,dc=example,dc=com 1 member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never userUIDAttribute: dn 1 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: true 2 tolerateMemberOutOfScopeErrors: true 3", "oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: admins users: 1 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: admins 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: admins", "oc adm groups sync --sync-config=active_directory_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: admins 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 2 objectClass: 
groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com", "oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2 dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 3 objectClass: group cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=otheradmins,ou=groups,dc=example,dc=com dn: cn=otheradmins,ou=groups,dc=example,dc=com 4 objectClass: group cn: otheradmins owner: cn=admin,dc=example,dc=com description: Other System Administrators memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6 member: cn=Jim,ou=users,dc=example,dc=com", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: 1 derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 4 groupMembershipAttributes: [ \"memberOf:1.2.840.113556.1.4.1941:\" ] 5", "oc adm groups sync 'cn=admins,ou=groups,dc=example,dc=com' --sync-config=augmented_active_directory_config_nested.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "oc get secret <secret_name> -n kube-system -o jsonpath --template '{ .metadata.annotations }'", "oc get secret <secret_name> -n=kube-system", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": 
\"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region>", "cat .openshift_install_state.json | jq '.\"*installconfig.ClusterID\".InfraID' -r", "mycluster-2mpcn", "azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: ovirt-credentials data: ovirt_url: <base64-encoded_url> ovirt_username: <base64-encoded_username> ovirt_password: <base64-encoded_password> ovirt_insecure: <base64-encoded_insecure> ovirt_ca_bundle: <base64-encoded_ca_bundle>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password>", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6", "0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", "apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 data: aws_access_key_id: <base64-encoded-access-key-id> aws_secret_access_key: <base64-encoded-secret-access-key>", "apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator-role-name> 3 
web_identity_token_file: <path-to-token> 4", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> --region=<aws_region> --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=aws --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1", "0000_30_cluster-api_00_credentials-request.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 7", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=aws --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1", "0000_30_cluster-api_00_credentials-request.yaml 1 
0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 7", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster", "oc get secrets -n kube-system aws-creds", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode", "[default] role_arn = arn:aws:iam::123456789:role/openshift-image-registry-installer-cloud-credentials web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token", "apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3", "{ \"type\": \"service_account\", 1 \"project_id\": \"<project_id>\", \"private_key_id\": \"<private_key_id>\", \"private_key\": \"<private_key>\", 2 \"client_email\": \"<client_email_address>\", \"client_id\": \"<client_id>\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://oauth2.googleapis.com/token\", \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>\" }", "{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", 2 \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken\", 3 \"credential_source\": { \"file\": \"<path_to_token>\", 4 \"format\": { \"type\": \"text\" } } }", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", 
"chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=gcp --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1", "0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_cluster-api_00_credentials-request.yaml 2 0000_30_machine-api-operator_00_credentials-request.yaml 3 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 4 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 5 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 6 0000_50_cluster-network-operator_02-cncc-credentials.yaml 7 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 8", "ccoctl gcp create-all --name=<name> --region=<gcp_region> --project=<gcp_project_id> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster", "oc get secrets -n kube-system gcp-credentials", "Error from server (NotFound): secrets \"gcp-credentials\" not found", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r '.data.\"service_account.json\"' | base64 -d", "{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client-email-address>:generateAccessToken\", 2 \"credential_source\": { \"file\": \"/var/run/secrets/openshift/serviceaccount/token\", \"format\": { \"type\": \"text\" } } }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/authentication_and_authorization/index
Chapter 18. complete.adoc
Chapter 18. complete.adoc This chapter describes the commands under the complete command. 18.1. complete print bash completion command Usage: Table 18.1. Optional Arguments Value Summary -h, --help Show this help message and exit --name <command_name> Command name to support with command completion --shell <shell> Shell being used. use none for data only (default: bash)
[ "openstack complete [-h] [--name <command_name>] [--shell <shell>]" ]
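For reference, one common way to use this command is to write the generated completion function to the shell's completion directory. This is a hedged sketch rather than an official procedure; the target path /etc/bash_completion.d/openstack is an assumption and may differ between distributions.
# Generate the bash completion function for the openstack client and install it system-wide.
openstack complete --shell bash | sudo tee /etc/bash_completion.d/openstack > /dev/null
# Load the completion into the current shell session.
source /etc/bash_completion.d/openstack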
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/complete_adoc
8.39. fcoe-target-utils
8.39. fcoe-target-utils 8.39.1. RHBA-2013:1683 - fcoe-target-utils bug fix update Updated fcoe-target-utils packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The fcoe-target-utils packages contain a command-line interface for configuring FCoE LUNs (Fibre Channel over Ethernet Logical Unit Numbers) and backstores. Bug Fixes BZ# 854708 Due to an error leaving a device marked as in-use, attempts to map a block backstore that had been previously mapped would fail. With this update, mappings of block backstores are properly released, and remapping a block device now succeeds. BZ# 880542 Prior to this update, the kernel terminated unexpectedly when the fcoe-target daemon stopped. A patch has been provided to fix this bug, and the kernel now no longer crashes. BZ# 882121 Previously, the target reported support for sequence-level error recovery erroneously. Consequently, interrupting the connection between the FCoE target and a bnx2fc initiator could cause the initiator to erroneously perform sequence-level error recovery instead of exchange-level error, leading to a failure of all devices attached to the target. This bug has been fixed, and connections with a bnx2fc initiator may now be interrupted without disrupting other devices. BZ# 912210 Prior to this update, there was an error in the python-rtslib library. Consequently, when creating a pscsi (SCSI pass-through) storage object in the targetcli utility, the python-rtslib returned a traceback. The error in the library has been fixed, and pscsi storage objects are now created without errors. BZ# 999902 Since the fcoe-utils command-line interface is required by the fcoe-target-utils packages and is not supported on the s390x architecture, fcoe-target-utils will not work properly on s390x, and thus has been removed. Users of fcoe-target-utils are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/fcoe-target-utils
Chapter 7. Known issues
Chapter 7. Known issues This section describes known issues in AMQ Broker 7.9. ENTMQBR-5749 - Remove unsupported operators that are visible in OperatorHub Only the Operators and Operator channels mentioned in Deploying the Operator from OperatorHub are supported. For technical reasons associated with Operator publication, other Operators and channels are visible in the OperatorHub and should be ignored. For reference, the following list shows which Operators are visible, but not supported: Red Hat Integration - AMQ Broker LTS - all channels Red Hat Integration - AMQ Broker - alpha, current, and current-76 ENTMQBR-5615 - Unexpected breaking change in artemis.profile prevents "init container image" approach If you use the JVM option -Dhawtio.role to set user roles as part of the USDJAVA_ARGS section of the artemis.profile file, users might not be able to access the broker console. This issue is caused by a new property HAWTIO_ROLE which overrides any values set by -Dhawtio.role . To work around this problem, set the appropriate roles using the HAWTIO_ROLE property in the etc/artemis.profile file. ENTMQBR-17 - AMQ222117: Unable to start cluster connection A broker cluster may fail to initialize properly in environments that support IPv6. The failure is due to a SocketException that is indicated by the log message Can't assign requested address . To work around this issue, set the java.net.preferIPv4Stack system property to true (a sketch of this setting is shown at the end of this chapter). ENTMQBR-520 - Receiving from address named the same as a queue bound to another address should not be allowed A queue with the same name as an address must only be assigned to that address. Creating a queue with the same name as an existing address, but bound to an address with a different name, is an invalid configuration. Doing so can result in incorrect messages being routed to the queue. ENTMQBR-569 - Conversion of IDs from OpenWire to AMQP results in sending IDs as binary When communicating cross-protocol from an A-MQ 6 OpenWire client to an AMQP client, additional information is encoded in the application message properties. This is benign information used internally by the broker and can be ignored. ENTMQBR-599 - Define truststore and keystore by Artemis cli Creating a broker instance by using the --ssl-key , --ssl-key-password , --ssl-trust , and --ssl-trust-password parameters does not work. To work around this issue, set the corresponding properties manually in bootstrap.xml after creating the broker. ENTMQBR-636 - Journal breaks, causing JavaNullPointerException , under perf load (mpt) To prevent IO-related issues from occurring when the broker is managing heavy loads, verify that the JVM is allocated with enough memory and heap space. See the section titled "Tuning the VM" in the Performance Tuning chapter of the ActiveMQ Artemis documentation. ENTMQBR-648 - JMS Openwire client is unable to send messages to queue with defined purgeOnNoConsumer or queue filter Using an A-MQ 6 JMS client to send messages to an address that has a queue with purgeOnNoConsumer set to true fails if the queue has no consumers. It is recommended that you do not set the purgeOnNoConsumer option when using A-MQ 6 JMS clients. ENTMQBR-652 - List of known amq-jon-plugin bugs This version of amq-jon-plugin has known issues with the MBeans for broker and queue.
Issues with the broker MBean: Closing a connection throws java.net.SocketTimeoutException exception listSessions() throws java.lang.ClassCastException Adding address settings throws java.lang.IllegalArgumentException getConnectorServices() operation cannot be found listConsumersAsJSON() operation cannot be found getDivertNames() operation cannot be found Listing network topology throws IllegalArgumentException Remove address settings has wrong parameter name Issues with the queue MBean: expireMessage() throws argument type mismatch exception listDeliveringMessages() throws IllegalArgumentException listMessages() throws java.lang.Exception moveMessages() throws IllegalArgumentException with error message argument type mismatch removeMessage() throws IllegalArgumentException with error message argument type mismatch removeMessages() throws exception with error Can't find operation removeMessage with 2 arguments retryMessage() throws argument type mismatch IllegalArgumentException ENTMQBR-655 - [AMQP] Unable to send message when populate-validated-user is enabled The configuration option populate-validated-user is not supported for messages produced using the AMQP protocol. ENTMQBR-897 - Openwire client/protocol issues with special characters in destination name Currently AMQ OpenWire JMS clients cannot access queues and addresses that include the following characters in their name: comma (','), hash ('#'), greater than ('>'), and whitespace. ENTMQBR-944 - [A-MQ7, Hawtio, RBAC] User gets no feedback if operation access was denied by RBAC The console can indicate that an operation attempted by an unauthorized user was successful when it was not. ENTMQBR-1875 - [AMQ 7, ha, replicated store] backup broker appear not to go "live" or shutdown after - ActiveMQIllegalStateException errorType=ILLEGAL_STATE message=AMQ119026: Backup Server was not yet in sync with live Removing the paging disk of a master broker while a backup broker is trying to sync with the master broker causes the master to fail. In addition, the backup broker cannot become live because it continues trying to sync with the master. ENTMQBR-2068 - some messages received but not delivered during HA fail-over, fail-back scenario Currently, if a broker fails over to its slave while an OpenWire client is sending messages, messages being delivered to the broker when failover occurs could be lost. To work around this issue, ensure that the broker persists the messages before acknowledging them. ENTMQBR-2928 - Broker Operator unable to recover from CR changes causing erroneous state If the AMQ Broker Operator encounters an error when applying a Custom Resource (CR) update, the Operator does not recover. Specifically, the Operator stops responding as expected to further updates to your CRs. For example, say that a misspelling in the value of the image attribute in your main broker CR causes broker Pods to fail to deploy, with an associated error message of ImagePullBackOff . If you then fix the misspelling and apply the CR changes, the Operator does not deploy the specified number of broker Pods. In addition, the Operator does not respond to any further CR changes. To work around this issue, you must delete the CRs that you originally deployed, before redeploying them. To delete an existing CR, use a command such as oc delete -f <CR name> . ENTMQBR-3846 - MQTT client does not reconnect on broker restart When you restart a broker, or a broker fails over, the active broker does not restore connections for previously-connected MQTT clients. 
To work around this issue, you need to manually call the subscribe() method on the client to reconnect it. ENTMQBR-4023 - AMQ Broker Operator: Pod Status pod names do not reflect the reality For an Operator-based broker deployment in a given OpenShift project, if you use the oc get pod command to list the broker Pods, the ordinal values for the Pods start at 0 , for example, amq-operator-test-broker-ss-0 . However, if you use the oc describe command to get the status of broker Pods created from the activemqartemises Custom Resource (that is, oc describe activemqartemises ), the Pod ordinal values incorrectly start at 1 , for example, amq-operator-test-broker-ss-1 . There is no way to work around this issue. ENTMQBR-4127 - AMQ Broker Operator: Route name generated by Operator might be too long for OpenShift For each broker Pod in an Operator-based deployment, the default name of the Route that the Operator creates for access to the AMQ Broker management console includes the name of the Custom Resource (CR) instance, the name of the OpenShift project, and the name of the OpenShift cluster. For example, my-broker-deployment-wconsj-0-svc-rte-my-openshift-project.my-openshift-domain . If some of these names are long, the default Route name might exceed the limit of 63 characters that OpenShift enforces. In this case, in the OpenShift Container Platform web console, the Route shows a status of Rejected . To work around this issue, use the OpenShift Container Platform web console to manually edit the name of the Route. In the console, click the Route. On the Actions drop-down menu in the top-right corner, select Edit Route . In the YAML editor, find the spec.host property and edit the value. ENTMQBR-4140 - AMQ Broker Operator: Installation becomes unusable if storage.size is improperly specified If you configure the storage.size property of a Custom Resource (CR) instance to specify the size of the Persistent Volume Claim (PVC) required by brokers in a deployment for persistent storage, the Operator installation becomes unusable if you do not specify this value properly. For example, suppose that you set the value of storage.size to 1 (that is, without specifying a unit). In this case, the Operator cannot use the CR to create a broker deployment. In addition, even if you remove the CR and deploy a new version with storage.size specified correctly, the Operator still cannot use this CR to create a deployment as expected. To work around this issue, first stop the Operator. In the OpenShift Container Platform web console, click Deployments . For the Pod that corresponds to the AMQ Broker Operator, click the More options menu (three vertical dots). Click Edit Pod Count and set the value to 0 . When the Operator Pod has stopped, create a new version of the CR with storage.size correctly specified. Then, to restart the Operator, click Edit Pod Count again and set the value back to 1 . ENTMQBR-4141 - AMQ Broker Operator: Increasing Persistent Volume size requires manual involvement even after recreating Stateful Set If you try to increase the size of the Persistent Volume Claim (PVC) required by brokers in a deployment for persistent storage, the change does not take effect without further manual steps. For example, suppose that you configure the storage.size property of a Custom Resource (CR) instance to specify an initial size for the PVC. If you modify the CR to specify a different value of storage.size , the existing brokers continue to use the original PVC size.
This is the case even if you scale the deployment down to zero brokers and then back up to the original number. However, if you scale the size of the deployment up to add additional brokers, the new brokers use the new PVC size. To work around this issue, and ensure that all brokers in the deployment use the same PVC size, use the OpenShift Container Platform web console to expand the PVC size used by the deployment. In the console, click Storage Persistent Volume Claims . Click your deployment. On the Actions drop-down menu in the top-right corner, select Expand PVC and enter a new value.
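The following is a rough sketch of the ENTMQBR-17 workaround mentioned earlier in this chapter. It assumes a standalone broker instance whose path is a placeholder, that the instance's etc/artemis.profile contains a single JAVA_ARGS="..." line, and that the instance was created with the artemis-service wrapper; adjust the commands to match your installation before using them.
# Placeholder path to the broker instance directory; replace with your own.
BROKER_INSTANCE=/var/opt/amq-broker/mybroker
# Prepend the IPv4 preference to the existing JAVA_ARGS value (sketch only; check the result).
sed -i 's|^JAVA_ARGS="|JAVA_ARGS="-Djava.net.preferIPv4Stack=true |' "$BROKER_INSTANCE/etc/artemis.profile"
grep '^JAVA_ARGS=' "$BROKER_INSTANCE/etc/artemis.profile"
# Restart the broker instance so the JVM picks up the new system property.
"$BROKER_INSTANCE/bin/artemis-service" restart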
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_red_hat_amq_broker_7.9/known
probe::nfs.fop.aio_write
probe::nfs.fop.aio_write Name probe::nfs.fop.aio_write - NFS client aio_write file operation Synopsis nfs.fop.aio_write Values count read bytes parent_name parent dir name ino inode number file_name file name buf the address of buf in user space dev device identifier pos offset of the file
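As an illustration only (not part of the reference entry above), this probe can be exercised with a one-line SystemTap script that prints the documented file_name, count, and pos values for each NFS client aio_write operation. This assumes the systemtap package and matching kernel debuginfo are installed; the output format is arbitrary.
# Print the file name, byte count, and offset for every NFS client aio_write call; stop with Ctrl+C.
stap -e 'probe nfs.fop.aio_write { printf("%s: %d bytes at offset %d\n", file_name, count, pos) }'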
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-fop-aio-write
Chapter 1. Compatibility Matrix for Red Hat Ceph Storage 4.0
Chapter 1. Compatibility Matrix for Red Hat Ceph Storage 4.0 The following tables list products and their versions compatible with Red Hat Ceph Storage 4.0. Host Operating System Version Notes Red Hat Enterprise Linux 7.7 and 8.1 Included in the product Important All nodes in the cluster and their clients must use the supported OS version(s) to ensure that the version of the ceph package is the same on all nodes. Using different versions of the ceph package is not supported. Important Red Hat no longer supports using Ubuntu as a host operating system to deploy Red Hat Ceph Storage. Product Version Notes Ansible 2.8 Included in the product Red Hat OpenShift 3.0 and later 3.x versions The RBD driver is supported starting with version 3.0 and the Cinder driver is supported starting with version 3.1. Red Hat OpenShift 4 is not supported. Red Hat OpenStack Platform 13, 15 and 16 Red Hat OpenStack Platform 11, 12 and 14 are not supported. Red Hat OpenStack Platform 10 and Red Hat Ceph Storage 4 are not a tested combination. Director 13 deployed Red Hat Ceph Storage 3.1+ also supports external Red Hat Ceph Storage 4. Director 16 deployed Red Hat Ceph Storage 4.0 also supports external Red Hat Ceph Storage 5. Red Hat Satellite 6.x Only registering with the Content Delivery Network (CDN) is supported. Registering with Red Hat Network (RHN) is deprecated and not supported. Client Connector Version Notes S3A 2.8.x, 3.2.x, and trunk Red Hat Enterprise Linux iSCSI Initiator The latest versions of the iscsi-initiator-utils and device-mapper-multipath packages RHCS as a backup target Version Notes CommVault Cloud Data Management v11 IBM Spectrum Protect Plus 10.1.5 IBM Spectrum Protect server 8.1.8 NetApp AltaVault 4.3.2 and 4.4 Rubrik Cloud Data Management (CDM) 3.2 and later Trilio, TrilioVault 3.0 S3 target Veeam (object storage) Veeam Availability Suite 9.5 Update 4 Supported on Red Hat Ceph Storage object storage with the S3 protocol Veritas NetBackup for Symantec OpenStorage (OST) cloud backup 7.7 and 8.0 Independent Software vendors Version Notes IBM Spectrum Discover 2.0.3
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/compatibility_guide/compatibility-matrix-for-red-hat-ceph-storage-4-0
Chapter 9. GFS2 tracepoints and the glock debugfs interface
Chapter 9. GFS2 tracepoints and the glock debugfs interface This documentation on both the GFS2 tracepoints and the glock debugfs interface is intended for advanced users who are familiar with file system internals and who would like to learn more about the design of GFS2 and how to debug GFS2-specific issues. The following sections describe GFS2 tracepoints and the GFS2 glocks file. 9.1. GFS2 tracepoint types There are currently three types of GFS2 tracepoints: glock (pronounced "gee-lock") tracepoints, bmap tracepoints and log tracepoints. These can be used to monitor a running GFS2 file system. Tracepoints are particularly useful when a problem, such as a hang or performance issue, is reproducible and thus the tracepoint output can be obtained during the problematic operation. In GFS2, glocks are the primary cache control mechanism and they are the key to understanding the performance of the core of GFS2. The bmap (block map) tracepoints can be used to monitor block allocations and block mapping (lookup of already allocated blocks in the on-disk metadata tree) as they happen and check for any issues relating to locality of access. The log tracepoints keep track of the data being written to and released from the journal and can provide useful information about that part of GFS2. The tracepoints are designed to be as generic as possible. This should mean that it will not be necessary to change the API during the course of Red Hat Enterprise Linux 8. On the other hand, users of this interface should be aware that this is a debugging interface and not part of the normal Red Hat Enterprise Linux 8 API set, and as such Red Hat makes no guarantees that changes in the GFS2 tracepoints interface will not occur. Tracepoints are a generic feature of Red Hat Enterprise Linux and their scope goes well beyond GFS2. In particular they are used to implement the blktrace infrastructure and the blktrace tracepoints can be used in combination with those of GFS2 to gain a fuller picture of the system performance. Due to the level at which the tracepoints operate, they can produce large volumes of data in a very short period of time. They are designed to put a minimum load on the system when they are enabled, but it is inevitable that they will have some effect. Filtering events by a variety of means can help reduce the volume of data and help focus on obtaining just the information which is useful for understanding any particular situation. 9.2. Tracepoints The tracepoints can be found under the /sys/kernel/debug/tracing/ directory assuming that debugfs is mounted in the standard place at the /sys/kernel/debug directory. The events subdirectory contains all the tracing events that may be specified and, provided the gfs2 module is loaded, there will be a gfs2 subdirectory containing further subdirectories, one for each GFS2 event. The contents of the /sys/kernel/debug/tracing/events/gfs2 directory should look roughly like the following: To enable all the GFS2 tracepoints, enter the following command: To enable a specific tracepoint, there is an enable file in each of the individual event subdirectories. The same is true of the filter file which can be used to set an event filter for each event or set of events. The meaning of the individual events is explained in more detail below. The output from the tracepoints is available in ASCII or binary format. This appendix does not currently cover the binary interface. The ASCII interface is available in two ways. 
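The commands referred to in this section and the next few paragraphs are not reproduced here; the following is a minimal sketch of the standard ftrace interface for GFS2, assuming debugfs is mounted at /sys/kernel/debug and that the gfs2 module is loaded. The workload command at the end is a placeholder.
# Enable every GFS2 tracepoint (write 0 to the same file to disable them again).
echo 1 > /sys/kernel/debug/tracing/events/gfs2/enable
# Enable a single event instead, for example the glock state change tracepoint.
echo 1 > /sys/kernel/debug/tracing/events/gfs2/gfs2_glock_state_change/enable
# Dump the current contents of the ring buffer.
cat /sys/kernel/debug/tracing/trace
# Stream events as they occur (no history is kept on this interface).
cat /sys/kernel/debug/tracing/trace_pipe
# Capture the same events with trace-cmd while running a workload, then view the result.
trace-cmd record -e gfs2 <your_workload_command>
trace-cmd report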
To list the current content of the ring buffer, you can enter the following command: This interface is useful in cases where you are using a long-running process for a certain period of time and, after some event, want to look back at the latest captured information in the buffer. An alternative interface, /sys/kernel/debug/tracing/trace_pipe , can be used when all the output is required. Events are read from this file as they occur; there is no historical information available through this interface. The format of the output is the same from both interfaces and is described for each of the GFS2 events in the later sections of this appendix. A utility called trace-cmd is available for reading tracepoint data. For more information about this utility, see http://lwn.net/Articles/341902/ . The trace-cmd utility can be used in a similar way to the strace utility, for example to run a command while gathering trace data from various sources. 9.3. Glocks To understand GFS2, the most important concept to understand, and the one which sets it aside from other file systems, is the concept of glocks. In terms of the source code, a glock is a data structure that brings together the DLM and caching into a single state machine. Each glock has a 1:1 relationship with a single DLM lock, and provides caching for that lock state so that repetitive operations carried out from a single node of the file system do not have to repeatedly call the DLM, and thus they help avoid unnecessary network traffic. There are two broad categories of glocks, those which cache metadata and those which do not. The inode glocks and the resource group glocks both cache metadata, other types of glocks do not cache metadata. The inode glock is also involved in the caching of data in addition to metadata and has the most complex logic of all glocks. Table 9.1. Glock Modes and DLM Lock Modes Glock mode DLM lock mode Notes UN IV/NL Unlocked (no DLM lock associated with glock or NL lock depending on I flag) SH PR Shared (protected read) lock EX EX Exclusive lock DF CW Deferred (concurrent write) used for Direct I/O and file system freeze Glocks remain in memory until either they are unlocked (at the request of another node or at the request of the VM) and there are no local users. At that point they are removed from the glock hash table and freed. When a glock is created, the DLM lock is not associated with the glock immediately. The DLM lock becomes associated with the glock upon the first request to the DLM, and if this request is successful then the 'I' (initial) flag will be set on the glock. The "Glock Flags" table in The glock debugfs interface shows the meanings of the different glock flags. Once the DLM has been associated with the glock, the DLM lock will always remain at least at NL (Null) lock mode until the glock is to be freed. A demotion of the DLM lock from NL to unlocked is always the last operation in the life of a glock. Each glock can have a number of "holders" associated with it, each of which represents one lock request from the higher layers. System calls relating to GFS2 queue and dequeue holders from the glock to protect the critical section of code. The glock state machine is based on a work queue. For performance reasons, tasklets would be preferable; however, in the current implementation we need to submit I/O from that context which prohibits their use. Note Workqueues have their own tracepoints which can be used in combination with the GFS2 tracepoints. 
The following table shows what state may be cached under each of the glock modes and whether that cached state may be dirty. This applies to both inode and resource group locks, although there is no data component for the resource group locks, only metadata. Table 9.2. Glock Modes and Data Types Glock mode Cache Data Cache Metadata Dirty Data Dirty Metadata UN No No No No SH Yes Yes No No DF No Yes No No EX Yes Yes Yes Yes 9.4. The glock debugfs interface The glock debugfs interface allows the visualization of the internal state of the glocks and the holders and it also includes some summary details of the objects being locked in some cases. Each line of the file either begins G: with no indentation (which refers to the glock itself) or it begins with a different letter, indented with a single space, and refers to the structures associated with the glock immediately above it in the file (H: is a holder, I: an inode, and R: a resource group). Here is an example of what the content of this file might look like: The above example is a series of excerpts (from an approximately 18MB file) generated by the command cat /sys/kernel/debug/gfs2/unity:myfs/glocks >my.lock during a run of the postmark benchmark on a single node GFS2 file system. The glocks in the figure have been selected in order to show some of the more interesting features of the glock dumps. The glock states are either EX (exclusive), DF (deferred), SH (shared) or UN (unlocked). These states correspond directly with DLM lock modes except for UN which may represent either the DLM null lock state, or that GFS2 does not hold a DLM lock (depending on the I flag as explained above). The s: field of the glock indicates the current state of the lock and the same field in the holder indicates the requested mode. If the lock is granted, the holder will have the H bit set in its flags (f: field). Otherwise, it will have the W wait bit set. The n: field (number) indicates the number associated with each item. For glocks, that is the type number followed by the glock number so that in the above example, the first glock is n:5/75320; which indicates an iopen glock which relates to inode 75320. In the case of inode and iopen glocks, the glock number is always identical to the inode's disk block number. Note The glock numbers (n: field) in the debugfs glocks file are in hexadecimal, whereas the tracepoints output lists them in decimal. This is for historical reasons; glock numbers were always written in hex, but decimal was chosen for the tracepoints so that the numbers could easily be compared with the other tracepoint output (from blktrace for example) and with output from stat (1). The full listing of all the flags for both the holder and the glock are set out in the "Glock Flags" table, below, and the "Glock Holder Flags" table in Glock holders . The content of lock value blocks is not currently available through the glock debugfs interface. The following table shows the meanings of the different glock types. Table 9.3. Glock Types Type number Lock type Use 1 trans Transaction lock 2 inode Inode metadata and data 3 rgrp Resource group metadata 4 meta The superblock 5 iopen Inode last closer detection 6 flock flock (2) syscall 8 quota Quota operations 9 journal Journal mutex One of the more important glock flags is the l (locked) flag. This is the bit lock that is used to arbitrate access to the glock state when a state change is to be performed. 
It is set when the state machine is about to send a remote lock request through the DLM, and only cleared when the complete operation has been performed. Sometimes this can mean that more than one lock request will have been sent, with various invalidations occurring between times. The following table shows the meanings of the different glock flags. Table 9.4. Glock Flags Flag Name Meaning d Pending demote A deferred (remote) demote request D Demote A demote request (local or remote) f Log flush The log needs to be committed before releasing this glock F Frozen Replies from remote nodes ignored - recovery is in progress. i Invalidate in progress In the process of invalidating pages under this glock I Initial Set when DLM lock is associated with this glock l Locked The glock is in the process of changing state L LRU Set when the glock is on the LRU list o Object Set when the glock is associated with an object (that is, an inode for type 2 glocks, and a resource group for type 3 glocks) p Demote in progress The glock is in the process of responding to a demote request q Queued Set when a holder is queued to a glock, and cleared when the glock is held, but there are no remaining holders. Used as part of the algorithm that calculates the minimum hold time for a glock. r Reply pending Reply received from remote node is awaiting processing y Dirty Data needs flushing to disk before releasing this glock When a remote callback is received from a node that wants to get a lock in a mode that conflicts with that being held on the local node, then one or other of the two flags D (demote) or d (demote pending) is set. In order to prevent starvation conditions when there is contention on a particular lock, each lock is assigned a minimum hold time. A node which has not yet had the lock for the minimum hold time is allowed to retain that lock until the time interval has expired. If the time interval has expired, then the D (demote) flag will be set and the state required will be recorded. In that case, the next time there are no granted locks on the holders queue, the lock will be demoted. If the time interval has not expired, then the d (demote pending) flag is set instead. This also schedules the state machine to clear d (demote pending) and set D (demote) when the minimum hold time has expired. The I (initial) flag is set when the glock has been assigned a DLM lock. This happens when the glock is first used and the I flag will then remain set until the glock is finally freed (at which point the DLM lock is unlocked). 9.5. Glock holders The following table shows the meanings of the different glock holder flags. Table 9.5. Glock Holder Flags Flag Name Meaning a Async Do not wait for glock result (will poll for result later) A Any Any compatible lock mode is acceptable c No cache When unlocked, demote DLM lock immediately e No expire Ignore subsequent lock cancel requests E Exact Must have exact lock mode F First Set when holder is the first to be granted for this lock H Holder Indicates that requested lock is granted p Priority Enqueue holder at the head of the queue t Try A "try" lock T Try 1CB A "try" lock that sends a callback W Wait Set while waiting for request to complete The most important holder flags are H (holder) and W (wait) as mentioned earlier, since they are set on granted lock requests and queued lock requests respectively. The ordering of the holders in the list is important. If there are any granted holders, they will always be at the head of the queue, followed by any queued holders.
The glock subsystem supports two kinds of "try" lock. These are useful both because they allow the taking of locks out of the normal order (with suitable back-off and retry) and because they can be used to help avoid resources in use by other nodes. The normal t (try) lock is just what its name indicates; it is a "try" lock that does not do anything special. The T ( try 1CB ) lock, on the other hand, is identical to the t lock except that the DLM will send a single callback to current incompatible lock holders. One use of the T ( try 1CB ) lock is with the iopen locks, which are used to arbitrate among the nodes when an inode's i_nlink count is zero, and determine which of the nodes will be responsible for deallocating the inode. The iopen glock is normally held in the shared state, but when the i_nlink count becomes zero and ->evict_inode () is called, it will request an exclusive lock with T ( try 1CB ) set. It will continue to deallocate the inode if the lock is granted. If the lock is not granted it will result in the node(s) which were preventing the grant of the lock marking their glock(s) with the D (demote) flag, which is checked at ->drop_inode () time in order to ensure that the deallocation is not forgotten. This means that inodes that have zero link count but are still open will be deallocated by the node on which the final close () occurs. Also, at the same time as the inode's link count is decremented to zero the inode is marked as being in the special state of having zero link count but still in use in the resource group bitmap. This functions like the ext3 file system's orphan list in that it allows any subsequent reader of the bitmap to know that there is potentially space that might be reclaimed, and to attempt to reclaim it. 9.6. Glock tracepoints The tracepoints are also designed to be able to confirm the correctness of the cache control by combining them with the blktrace output and with knowledge of the on-disk layout. It is then possible to check that any given I/O has been issued and completed under the correct lock, and that no races are present. The gfs2_glock_state_change tracepoint is the most important one to understand. It tracks every state change of the glock from initial creation right through to the final demotion which ends with gfs2_glock_put and the final NL to unlocked transition. The l (locked) glock flag is always set before a state change occurs and will not be cleared until after it has finished. There are never any granted holders (the H glock holder flag) during a state change. If there are any queued holders, they will always be in the W (waiting) state. When the state change is complete then the holders may be granted which is the final operation before the l glock flag is cleared. The gfs2_demote_rq tracepoint keeps track of demote requests, both local and remote. Assuming that there is enough memory on the node, the local demote requests will rarely be seen, and most often they will be created by umount or by occasional memory reclaim. The number of remote demote requests is a measure of the contention between nodes for a particular inode or resource group. The gfs2_glock_lock_time tracepoint provides information about the time taken by requests to the DLM. The blocking ( b ) flag was introduced into the glock specifically to be used in combination with this tracepoint.
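Individual gfs2 tracepoints can be enabled on their own rather than enabling the whole gfs2 event group. The following is a minimal sketch that enables only the state change and demote request tracepoints discussed above and then streams the output; it assumes the standard tracing layout under /sys/kernel/debug/tracing:
~]# echo 1 > /sys/kernel/debug/tracing/events/gfs2/gfs2_glock_state_change/enable
~]# echo 1 > /sys/kernel/debug/tracing/events/gfs2/gfs2_demote_rq/enable
~]# cat /sys/kernel/debug/tracing/trace_pipe
Reading trace_pipe consumes events as they arrive; press Ctrl + c to stop, and write 0 to the same enable files to turn the tracepoints off again.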
When a holder is granted a lock, gfs2_promote is called; this occurs during the final stages of a state change or when a lock is requested which can be granted immediately due to the glock state already caching a lock of a suitable mode. If the holder is the first one to be granted for this glock, then the f (first) flag is set on that holder. This is currently used only by resource groups. 9.7. Bmap tracepoints Block mapping is a task central to any file system. GFS2 uses a traditional bitmap-based system with two bits per block. The main purpose of the tracepoints in this subsystem is to allow monitoring of the time taken to allocate and map blocks. The gfs2_bmap tracepoint is called twice for each bmap operation: once at the start to display the bmap request, and once at the end to display the result. This makes it easy to match the requests and results together and measure the time taken to map blocks in different parts of the file system, different file offsets, or even of different files. It is also possible to see what the average extent sizes being returned are in comparison to those being requested. The gfs2_rs tracepoint traces block reservations as they are created, used, and destroyed in the block allocator. To keep track of allocated blocks, gfs2_block_alloc is called not only on allocations, but also on freeing of blocks. Since the allocations are all referenced according to the inode for which the block is intended, this can be used to track which physical blocks belong to which files in a live file system. This is particularly useful when combined with blktrace , which will show problematic I/O patterns that may then be referred back to the relevant inodes using the mapping gained by means of this tracepoint. Direct I/O ( iomap ) is an alternative cache policy which allows file data transfers to happen directly between disk and the user's buffer. This has benefits in situations where cache hit rate is expected to be low. Both gfs2_iomap_start and gfs2_iomap_end tracepoints trace these operations and can be used to keep track of mapping using Direct I/O, the positions on the file system of the Direct I/O along with the operation type. 9.8. Log tracepoints The tracepoints in this subsystem track blocks being added to and removed from the journal ( gfs2_pin ), as well as the time taken to commit the transactions to the log ( gfs2_log_flush ). This can be very useful when trying to debug journaling performance issues. The gfs2_log_blocks tracepoint keeps track of the reserved blocks in the log, which can help show if the log is too small for the workload, for example. The gfs2_ail_flush tracepoint is similar to the gfs2_log_flush tracepoint in that it keeps track of the start and end of flushes of the AIL list. The AIL list contains buffers which have been through the log, but have not yet been written back in place and this is periodically flushed in order to release more log space for use by the file system, or when a process requests a sync or fsync . 9.9. Glock statistics GFS2 maintains statistics that can help track what is going on within the file system. This allows you to spot performance issues. GFS2 maintains two counters: dcount , which counts the number of DLM operations requested. This shows how much data has gone into the mean/variance calculations. qcount , which counts the number of syscall level operations requested.
Generally qcount will be equal to or greater than dcount . In addition, GFS2 maintains three mean/variance pairs. The mean/variance pairs are smoothed exponential estimates and the algorithm used is the one used to calculate round trip times in network code. The mean and variance pairs maintained in GFS2 are not scaled, but are in units of integer nanoseconds. srtt/srttvar: Smoothed round trip time for non-blocking operations srttb/srttvarb: Smoothed round trip time for blocking operations irtt/irttvar: Inter-request time (for example, time between DLM requests) A non-blocking request is one which will complete right away, whatever the state of the DLM lock in question. That currently means any requests when (a) the current state of the lock is exclusive (b) the requested state is either null or unlocked or (c) the "try lock" flag is set. A blocking request covers all the other lock requests. Larger times are better for IRTTs, whereas smaller times are better for the RTTs. Statistics are kept in two sysfs files: The glstats file. This file is similar to the glocks file, except that it contains statistics, with one glock per line. The data is initialized from "per cpu" data for that glock type for which the glock is created (aside from counters, which are zeroed). This file may be very large. The lkstats file. This contains "per cpu" stats for each glock type. It contains one statistic per line, in which each column is a cpu core. There are eight lines per glock type, with types following on from each other. 9.10. References For more information about tracepoints and the GFS2 glocks file, see the following resources: For information about glock internal locking rules, see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/gfs2-glocks.rst . For information about event tracing, see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/trace/events.rst . For information about the trace-cmd utility, see http://lwn.net/Articles/341902/ .
[ "ls enable gfs2_bmap gfs2_glock_queue gfs2_log_flush filter gfs2_demote_rq gfs2_glock_state_change gfs2_pin gfs2_block_alloc gfs2_glock_put gfs2_log_blocks gfs2_promote", "echo -n 1 >/sys/kernel/debug/tracing/events/gfs2/enable", "cat /sys/kernel/debug/tracing/trace", "G: s:SH n:5/75320 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:EX n:3/258028 f:yI t:EX d:EX/0 a:3 r:4 H: s:EX f:tH e:0 p:4466 [postmark] gfs2_inplace_reserve_i+0x177/0x780 [gfs2] R: n:258028 f:05 b:22256/22256 i:16800 G: s:EX n:2/219916 f:yfI t:EX d:EX/0 a:0 r:3 I: n:75661/219916 t:8 f:0x10 d:0x00000000 s:7522/7522 G: s:SH n:5/127205 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:EX n:2/50382 f:yfI t:EX d:EX/0 a:0 r:2 G: s:SH n:5/302519 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:SH n:5/313874 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:SH n:5/271916 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:SH n:5/312732 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_gfs2_file_systems/con_gfs2-tracepoints-configuring-gfs2-file-systems
Chapter 26. Working with GRUB 2
Chapter 26. Working with GRUB 2 Red Hat Enterprise Linux 7 is distributed with version 2 of the GNU GRand Unified Bootloader (GRUB 2), which allows the user to select an operating system or kernel to be loaded at system boot time. GRUB 2 also allows the user to pass arguments to the kernel. 26.1. Introduction to GRUB 2 GRUB 2 reads its configuration from the /boot/grub2/grub.cfg file on traditional BIOS-based machines and from the /boot/efi/EFI/redhat/grub.cfg file on UEFI machines. This file contains menu information. The GRUB 2 configuration file, grub.cfg , is generated during installation, or by invoking the /usr/sbin/grub2-mkconfig utility, and is automatically updated by grubby each time a new kernel is installed. When regenerated manually using grub2-mkconfig , the file is generated according to the template files located in /etc/grub.d/ , and custom settings in the /etc/default/grub file. Edits of grub.cfg will be lost any time grub2-mkconfig is used to regenerate the file, so care must be taken to reflect any manual changes in /etc/default/grub as well. Normal operations on grub.cfg , such as the removal and addition of new kernels, should be done using the grubby tool and, for scripts, using the new-kernel-pkg tool. If you use grubby to modify the default kernel the changes will be inherited when new kernels are installed. For more information on grubby , see Section 26.4, "Making Persistent Changes to a GRUB 2 Menu Using the grubby Tool" . The /etc/default/grub file is used by the grub2-mkconfig tool, which is used by anaconda when creating grub.cfg during the installation process, and can be used in the event of a system failure, for example if the boot loader configurations need to be recreated. In general, it is not recommended to replace the grub.cfg file by manually running grub2-mkconfig except as a last resort. Note that any manual changes to /etc/default/grub require rebuilding the grub.cfg file. Menu Entries in grub.cfg Among various code snippets and directives, the grub.cfg configuration file contains one or more menuentry blocks, each representing a single GRUB 2 boot menu entry. These blocks always start with the menuentry keyword followed by a title, list of options, and an opening curly bracket, and end with a closing curly bracket. Anything between the opening and closing bracket should be indented. For example, the following is a sample menuentry block for Red Hat Enterprise Linux 7 with Linux kernel 3.8.0-0.40.el7.x86_64: Each menuentry block that represents an installed Linux kernel contains a linux directive on 64-bit IBM POWER Series, linux16 on x86_64 BIOS-based systems, or linuxefi on UEFI-based systems, plus an initrd directive. These directives are followed by the path to the kernel and the initramfs image, respectively. If a separate /boot partition was created, the paths to the kernel and the initramfs image are relative to /boot . In the example above, the initrd /initramfs-3.8.0-0.40.el7.x86_64.img line means that the initramfs image is actually located at /boot/initramfs-3.8.0-0.40.el7.x86_64.img when the root file system is mounted, and likewise for the kernel path. The kernel version number as given on the linux16 /vmlinuz-kernel_version line must match the version number of the initramfs image given on the initrd /initramfs-kernel_version.img line of each menuentry block. For more information on how to verify the initial RAM disk image, see the Red Hat Enterprise Linux 7 Kernel Administration Guide .
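To see the menuentry blocks present on your own system, you can read them directly from the generated configuration file. The following is a minimal sketch for a BIOS-based machine; on UEFI machines, read /boot/efi/EFI/redhat/grub.cfg instead:
~]# awk -F\' '$1=="menuentry " {print $2}' /boot/grub2/grub.cfg
~]# grep -c "^menuentry " /boot/grub2/grub.cfg
The first command prints only the entry titles, and the second counts how many menu entries the file currently contains.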
Note In menuentry blocks, the initrd directive must point to the location (relative to the /boot/ directory if it is on a separate partition) of the initramfs file corresponding to the same kernel version. This directive is called initrd because the tool which created initial RAM disk images, mkinitrd , created what were known as initrd files. The grub.cfg directive remains initrd to maintain compatibility with other tools. The file-naming convention of systems using the dracut utility to create the initial RAM disk image is initramfs- kernel_version .img . For information on using Dracut , see the Red Hat Enterprise Linux 7 Kernel Administration Guide . 26.2. Configuring GRUB 2 Changes to the GRUB 2 menu can be made temporarily at boot time, made persistent for a single system while the system is running, or as part of making a new GRUB 2 configuration file. To make non-persistent changes to the GRUB 2 menu, see Section 26.3, "Making Temporary Changes to a GRUB 2 Menu" . To make persistent changes to a running system, see Section 26.4, "Making Persistent Changes to a GRUB 2 Menu Using the grubby Tool" . For information on making and customizing a GRUB 2 configuration file, see Section 26.5, "Customizing the GRUB 2 Configuration File" . 26.3. Making Temporary Changes to a GRUB 2 Menu Making Temporary Changes to a Kernel Menu Entry To change kernel parameters only during a single boot process, proceed as follows: Start the system and, on the GRUB 2 boot screen, move the cursor to the menu entry you want to edit, and press the e key for edit. Move the cursor down to find the kernel command line. The kernel command line starts with linux on 64-Bit IBM Power Series, linux16 on x86-64 BIOS-based systems, or linuxefi on UEFI systems. Move the cursor to the end of the line. Press Ctrl + a and Ctrl + e to jump to the start and end of the line, respectively. On some systems, Home and End might also work. Edit the kernel parameters as required. For example, to run the system in emergency mode, add the emergency parameter at the end of the linux16 line: The rhgb and quiet parameters can be removed in order to enable system messages. These settings are not persistent and apply only for a single boot. To make persistent changes to a menu entry on a system, use the grubby tool. See the section called "Adding and Removing Arguments from a GRUB 2 Menu Entry" for more information on using grubby . 26.4. Making Persistent Changes to a GRUB 2 Menu Using the grubby Tool The grubby tool can be used to read information from, and make persistent changes to, the grub.cfg file. It enables, for example, changing GRUB 2 menu entries to specify what arguments to pass to a kernel on system start and changing the default kernel. In Red Hat Enterprise Linux 7, if grubby is invoked manually without specifying a GRUB 2 configuration file, it defaults to searching for /etc/grub2.cfg , which is a symbolic link to the grub.cfg file, whose location is architecture dependent. If that file cannot be found it will search for an architecture dependent default.
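You can confirm which configuration file grubby will operate on by examining the symbolic link directly. This is a small sketch; the target shown is architecture dependent, as described above:
~]# ls -l /etc/grub2.cfg
~]# readlink -f /etc/grub2.cfg
On UEFI machines there is typically also a /etc/grub2-efi.cfg link that points to the grub.cfg file under /boot/efi/EFI/redhat/ .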
Listing the Default Kernel To find out the file name of the default kernel, enter a command as follows: To find out the index number of the default kernel, enter a command as follows: Changing the Default Boot Entry To make a persistent change in the kernel designated as the default kernel, use the grubby command as follows: Viewing the GRUB 2 Menu Entry for a Kernel To list all the kernel menu entries, enter a command as follows: On UEFI systems, all grubby commands must be entered as root . To view the GRUB 2 menu entry for a specific kernel, enter a command as follows: Try tab completion to see the available kernels within the /boot/ directory. Adding and Removing Arguments from a GRUB 2 Menu Entry The --update-kernel option can be used to update a menu entry when used in combination with --args to add new arguments and --remove-arguments to remove existing arguments. These options accept a quoted space-separated list. The command to simultaneously add and remove arguments from a GRUB 2 menu entry has the following format: To add and remove arguments from a kernel's GRUB 2 menu entry, use a command as follows: This command removes the Red Hat graphical boot argument, enables boot messages to be seen, and adds a serial console. As the console arguments will be added at the end of the line, the new console will take precedence over any other consoles configured. To review the changes, use the --info command option as follows: Updating All Kernel Menus with the Same Arguments To add the same kernel boot arguments to all the kernel menu entries, enter a command as follows: The --update-kernel parameter also accepts DEFAULT or a comma separated list of kernel index numbers. Changing a Kernel Argument To change a value in an existing kernel argument, specify the argument again, changing the value as required. For example, to change the virtual console font size, use a command as follows: See the grubby(8) manual page for more command options. 26.5. Customizing the GRUB 2 Configuration File GRUB 2 scripts search the user's computer and build a boot menu based on what operating systems the scripts find. To reflect the latest system boot options, the boot menu is rebuilt automatically when the kernel is updated or a new kernel is added. However, users may want to build a menu containing specific entries or to have the entries in a specific order. GRUB 2 allows basic customization of the boot menu to give users control of what actually appears on the screen. GRUB 2 uses a series of scripts to build the menu; these are located in the /etc/grub.d/ directory. The following files are included: 00_header , which loads GRUB 2 settings from the /etc/default/grub file. 01_users , which reads the superuser password from the user.cfg file. In Red Hat Enterprise Linux 7.0 and 7.1, this file was only created when a boot password was defined in the kickstart file during installation, and it included the defined password in plain text. 10_linux , which locates kernels in the default partition of Red Hat Enterprise Linux. 30_os-prober , which builds entries for operating systems found on other partitions. 40_custom , a template, which can be used to create additional menu entries. Scripts from the /etc/grub.d/ directory are read in alphabetical order and can therefore be renamed to change the boot order of specific menu entries. Important To hide the list of bootable kernels, do not set GRUB_TIMEOUT to 0 in /etc/default/grub . With such a setting, the system always boots immediately on the default menu entry, and if the default kernel fails to boot, it is not possible to boot an older kernel. Instead, to prevent GRUB 2 from displaying the list of bootable kernels when the system starts up, set the GRUB_TIMEOUT_STYLE option in the /etc/default/grub file as follows: To display the list when booting, press and hold any alphanumeric key while the BIOS information is displayed, either on the keyboard or over a serial console, and GRUB 2 will present you with the GRUB 2 menu.
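As a quick way to review the pieces described above, you can list the scripts that will be run and check the timeout settings currently in effect. A minimal sketch (the GRUB_TIMEOUT_STYLE line appears only if it has been added to the file):
~]# ls /etc/grub.d/
~]# grep GRUB_TIMEOUT /etc/default/grub
Remember that any change made in /etc/default/grub only takes effect after grub.cfg is rebuilt with grub2-mkconfig, as described in the following sections.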
26.5.1. Changing the Default Boot Entry By default, the key for the GRUB_DEFAULT directive in the /etc/default/grub file is the word saved . This instructs GRUB 2 to load the kernel specified by the saved_entry directive in the GRUB 2 environment file, located at /boot/grub2/grubenv . You can set another GRUB 2 record to be the default, using the grub2-set-default command, which will update the GRUB 2 environment file. By default, the saved_entry value is set to the name of the latest installed kernel of package type kernel . This is defined in /etc/sysconfig/kernel by the UPDATEDEFAULT and DEFAULTKERNEL directives. The file can be viewed by the root user as follows: The DEFAULTKERNEL directive specifies what package type will be used as the default. Installing a package of type kernel-debug will not change the default kernel while the DEFAULTKERNEL is set to package type kernel . GRUB 2 supports using a numeric value as the key for the saved_entry directive to change the default order in which the operating systems are loaded. To specify which operating system should be loaded first, pass its number to the grub2-set-default command. For example: Note that the position of a menu entry in the list is denoted by a number starting with zero; therefore, in the example above, the third entry will be loaded. This value will be overwritten by the name of the next kernel to be installed. To force a system to always use a particular menu entry, use the menu entry name as the key to the GRUB_DEFAULT directive in the /etc/default/grub file. To list the available menu entries, run the following command as root : The file name /etc/grub2.cfg is a symbolic link to the grub.cfg file, whose location is architecture dependent. For reliability reasons, the symbolic link is not used in other examples in this chapter. It is better to use absolute paths when writing to a file, especially when repairing a system. Changes to /etc/default/grub require rebuilding the grub.cfg file as follows: On BIOS-based machines, issue the following command as root : On UEFI-based machines, issue the following command as root : 26.5.2. Editing a Menu Entry If you need to prepare a new GRUB 2 configuration file with different parameters, edit the values of the GRUB_CMDLINE_LINUX key in the /etc/default/grub file. Note that you can specify multiple parameters for the GRUB_CMDLINE_LINUX key, similarly to adding the parameters in the GRUB 2 boot menu. For example: Where console=tty0 is the first virtual terminal and console=ttyS0 is the serial terminal to be used. Changes to /etc/default/grub require rebuilding the grub.cfg file as follows: On BIOS-based machines, issue the following command as root : On UEFI-based machines, issue the following command as root : 26.5.3. Adding a new Entry When executing the grub2-mkconfig command, GRUB 2 searches for Linux kernels and other operating systems based on the files located in the /etc/grub.d/ directory.
The /etc/grub.d/10_linux script searches for installed Linux kernels on the same partition. The /etc/grub.d/30_os-prober script searches for other operating systems. Menu entries are also automatically added to the boot menu when updating the kernel. The 40_custom file located in the /etc/grub.d/ directory is a template for custom entries and looks as follows: This file can be edited or copied. Note that as a minimum, a valid menu entry must include at least the following: 26.5.4. Creating a Custom Menu If you do not want menu entries to be updated automatically, you can create a custom menu. Important Before proceeding, back up the contents of the /etc/grub.d/ directory in case you need to revert the changes later. Note Note that modifying the /etc/default/grub file does not have any effect on creating custom menus. On BIOS-based machines, copy the contents of /boot/grub2/grub.cfg , or, on UEFI machines, copy the contents of /boot/efi/EFI/redhat/grub.cfg . Put the content of the grub.cfg into the /etc/grub.d/40_custom file below the existing header lines. The executable part of the 40_custom script has to be preserved. From the content put into the /etc/grub.d/40_custom file, only the menuentry blocks are needed to create the custom menu. The /boot/grub2/grub.cfg and /boot/efi/EFI/redhat/grub.cfg files might contain function specifications and other content above and below the menuentry blocks. If you put these unnecessary lines into the 40_custom file in the previous step, erase them. This is an example of a custom 40_custom script: Remove all files from the /etc/grub.d/ directory except the following: 00_header , 40_custom , 01_users (if it exists), and README . Alternatively, if you want to keep the files in the /etc/grub.d/ directory, make them unexecutable by running the chmod a-x <file_name> command. Edit, add, or remove menu entries in the 40_custom file as desired. Rebuild the grub.cfg file by running the grub2-mkconfig -o command as follows: On BIOS-based machines, issue the following command as root : On UEFI-based machines, issue the following command as root : 26.6. Protecting GRUB 2 with a Password GRUB 2 offers two types of password protection: Password is required for modifying menu entries but not for booting existing menu entries; Password is required for modifying menu entries and for booting one, several, or all menu entries. Configuring GRUB 2 to Require a Password only for Modifying Entries To require password authentication for modifying GRUB 2 entries, follow these steps: Run the grub2-setpassword command as root: Enter and confirm the password: Following this procedure creates a /boot/grub2/user.cfg file that contains the hash of the password. The user for this password, root , is defined in the /boot/grub2/grub.cfg file. With this change, modifying a boot entry during booting requires you to specify the root user name and your password. Configuring GRUB 2 to Require a Password for Modifying and Booting Entries Setting a password using the grub2-setpassword command prevents menu entries from unauthorized modification but not from unauthorized booting. To also require a password for booting an entry, follow these steps after setting the password with grub2-setpassword : Warning If you forget your GRUB 2 password, you will not be able to boot the entries you reconfigure in the following procedure. Open the /boot/grub2/grub.cfg file. Find the boot entry that you want to protect with a password by searching for lines beginning with menuentry .
Delete the --unrestricted parameter from the menu entry block, for example: Save and close the file. Now even booting the entry requires entering the root user name and password. Note Manual changes to the /boot/grub2/grub.cfg persist when new kernel versions are installed, but are lost when re-generating grub.cfg using the grub2-mkconfig command. Therefore, to retain password protection, use the above procedure after every use of grub2-mkconfig . Note If you delete the --unrestricted parameter from every menu entry in the /boot/grub2/grub.cfg file, all newly installed kernels will have a menu entry created without --unrestricted and hence automatically inherit the password protection. Passwords Set Before Updating to Red Hat Enterprise Linux 7.2 The grub2-setpassword tool was added in Red Hat Enterprise Linux 7.2 and is now the standard method of setting GRUB 2 passwords. This is in contrast to previous versions of Red Hat Enterprise Linux, where boot entries needed to be manually specified in the /etc/grub.d/40_custom file, and super users in the /etc/grub.d/01_users file. Additional GRUB 2 Users Booting entries without the --unrestricted parameter requires the root password. However, GRUB 2 also enables creating additional non-root users that can boot such entries without providing a password. Modifying the entries still requires the root password. For information on creating such users, see the GRUB 2 Manual . 26.7. Reinstalling GRUB 2 Reinstalling GRUB 2 is a convenient way to fix certain problems usually caused by an incorrect installation of GRUB 2, missing files, or a broken system. Other reasons to reinstall GRUB 2 include the following: Upgrading from the previous version of GRUB. The user requires the GRUB 2 boot loader to control installed operating systems. However, some operating systems are installed with their own boot loaders. Reinstalling GRUB 2 returns control to the desired operating system. Adding the boot information to another drive. 26.7.1. Reinstalling GRUB 2 on BIOS-Based Machines When using the grub2-install command, the boot information is updated and missing files are restored. Note that the files are restored only if they are not corrupted. Use the grub2-install device command to reinstall GRUB 2 if the system is operating normally. For example, if sda is your device : 26.7.2. Reinstalling GRUB 2 on UEFI-Based Machines When using the yum reinstall grub2-efi shim command, the boot information is updated and missing files are restored. Note that the files are restored only if they are not corrupted. Use the yum reinstall grub2-efi shim command to reinstall GRUB 2 if the system is operating normally. For example: 26.7.3. Resetting and Reinstalling GRUB 2 This method completely removes all GRUB 2 configuration files and system settings. Apply this method to reset all configuration settings to their default values. Removing the configuration files and subsequently reinstalling GRUB 2 fixes failures caused by corrupted files and incorrect configuration. To do so, as root , follow these steps: Run the rm /etc/grub.d/* command; Run the rm /etc/sysconfig/grub command; For EFI systems only , run the following command: For BIOS and EFI systems, run this command: Rebuild the grub.cfg file by running the grub2-mkconfig -o command as follows: On BIOS-based machines, issue the following command as root : On UEFI-based machines, issue the following command as root : Now follow the procedure in Section 26.7, "Reinstalling GRUB 2" to restore GRUB 2 on the /boot/ partition.
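Taken together, the reset procedure on a BIOS-based machine amounts to the following sequence. This is a minimal sketch that assumes sda is the boot device; substitute your own device, and do not run it unless you intend to discard all GRUB 2 customizations:
~]# rm /etc/grub.d/*
~]# rm /etc/sysconfig/grub
~]# yum reinstall grub2-tools
~]# grub2-mkconfig -o /boot/grub2/grub.cfg
~]# grub2-install /dev/sda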
26.8. Upgrading from GRUB Legacy to GRUB 2 When you do an in-place upgrade of Red Hat Enterprise Linux (RHEL) from version 6 to 7, the upgrade from GRUB Legacy to GRUB 2 does not happen automatically, but it should be done manually. Perform the GRUB upgrade for these reasons: In RHEL 7 and later versions, GRUB Legacy is no longer maintained and does not receive updates. GRUB Legacy is unable to boot on systems without the /boot/ directory. GRUB 2 has more features and is more reliable. GRUB 2 supports more hardware configurations, file systems, and drive layouts. Upgrading from GRUB Legacy to GRUB 2 after the in-place upgrade of the operating system Upgrading from GRUB Legacy to GRUB 2 Ensure that the GRUB Legacy package has been uninstalled by the Red Hat Upgrade Tool : Note Uninstalling the grub2 package does not affect the installed GRUB Legacy bootloader. Make sure that the grub2 package has been installed. If grub2 is not on the system after the upgrade to RHEL 7, you can install it manually by running: If the system boots using EFI, install the following packages if they are missing: Generating the GRUB 2 configuration files This section describes how to add a GRUB 2 configuration without removing the original GRUB Legacy configuration. The procedure keeps the GRUB Legacy configuration in case GRUB 2 does not work correctly. Manually create the /etc/default/grub file by using one of the following options: Create the /etc/default/grub file. For details, see the How to re-create the missing /etc/default/grub file in Red Hat Enterprise Linux 7? article. Copy the /etc/default/grub file from a similar Red Hat Enterprise Linux 7 system, and adjust the file accordingly. Depending on your bootloader: If the system boots using the legacy BIOS, install GRUB 2 specifying the install device: The grub2-install command installs GRUB images into the /boot/grub target directory. The --grub-setup=/bin/true option ensures that the old GRUB Legacy configuration is not deleted. If the system boots using EFI, create a boot entry for the shim bootloader and change the BootOrder variable to make the firmware boot GRUB 2 through shim : Substitute /dev/device_name with the bootable device file. Warning Note the difference in the configuration file extensions: .conf is for GRUB .cfg is for GRUB 2 Do not overwrite the old GRUB configuration file by mistake in the next step. Generate the GRUB 2 configuration file: If the system uses the legacy BIOS: If the system uses EFI: Note For customizing the generated GRUB 2 configuration file, see Section 26.5, "Customizing the GRUB 2 Configuration File" . You should make changes in /etc/default/grub , not directly in /boot/grub2/grub.cfg . Otherwise, changes in /boot/grub2/grub.cfg are lost every time the file is re-generated. Testing GRUB 2 with GRUB Legacy bootloader still installed This section describes how to test GRUB 2 without removing the GRUB Legacy configuration. The GRUB Legacy configuration needs to stay until the GRUB 2 configuration is verified; otherwise the system might become unbootable. To safely test the GRUB 2 configuration, we will start GRUB 2 from GRUB Legacy . Note This section applies only to the legacy BIOS booting. In case of EFI, boot entries exist for both the old and new bootloaders, and you can boot the old legacy GRUB by choosing the boot entry with the EFI firmware settings. Add a new section into /boot/grub/grub.conf . For systems with a separate /boot partition, use: Substitute (hd0,0) with the GRUB Legacy bootable device designation.
For systems without a separate /boot partition, use: Substitute (hd0,0) with the GRUB Legacy bootable device designation. Reboot the system. When presented with a GRUB Legacy menu, select the GRUB 2 Test entry. When presented with a GRUB 2 menu, select a kernel to boot. If the above did not work, restart, and do not choose the GRUB 2 Test entry on the next boot. Replacing GRUB Legacy bootloader on systems that use BIOS If GRUB 2 works successfully: Replace the GRUB Legacy bootloader with the GRUB 2 bootloader: Remove the old GRUB Legacy configuration file: Reboot the system: Removing GRUB Legacy on systems that use EFI If GRUB 2 works successfully: Check the content of the /boot/efi/EFI/redhat/ directory and remove obsolete files related only to the Legacy GRUB: If you performed an in-place upgrade from RHEL 6 to RHEL 7 using the Preupgrade Assistant and Red Hat Upgrade Tool utilities, also remove backup copies of the files mentioned above that end with the .preupg suffix: Find the old boot entry that refers to the \EFI\redhat\grub.efi file using the efibootmgr command: Example output: The entry number in this case is 0001 . Remove the identified boot entry. The following command removes the boot entry from the example above: If you have more than one such boot entry, remove all identified old boot entries. Warning After upgrading to RHEL 7 from an older release, such as RHEL 6, the operating system is unsupported until manual upgrade of the GRUB Legacy bootloader to GRUB 2 has been successfully finished. This is due to the fact that installation of packages across major releases is not supported. In the case of RHEL 7, only GRUB 2 is supported, developed, and tested, unlike GRUB Legacy from RHEL 6. 26.9. GRUB 2 over a Serial Console This section describes how to configure GRUB 2 for serial communications on machines with no display or keyboard. To access the GRUB 2 terminal over a serial connection, an additional option must be added to a kernel definition to make that particular kernel monitor a serial connection. For example: Where console=ttyS0 is the serial terminal to be used, 9600 is the baud rate, n is for no parity, and 8 is the word length in bits. A much higher baud rate, for example 115200 , is preferable for tasks such as following log files. For more information on serial console settings, see the section called "Installable and External Documentation" . 26.9.1. Configuring GRUB 2 for a single boot To set the system to use a serial terminal only during a single boot process, when the GRUB 2 boot menu appears, move the cursor to the kernel you want to start, and press the e key to edit the kernel parameters. Remove the rhgb and quiet parameters and add console parameters at the end of the linux16 line as follows: These settings are not persistent and apply only for a single boot. 26.9.2. Configuring GRUB 2 for a persistent change To make persistent changes to a menu entry on a system, use the grubby tool. For example, to update the entry for the default kernel, enter a command as follows: The --update-kernel parameter also accepts the keyword ALL or a comma separated list of kernel index numbers. See the section called "Adding and Removing Arguments from a GRUB 2 Menu Entry" for more information on using grubby .
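For example, to make the default kernel entry use a serial console persistently, the grubby tool described earlier can be combined with the console parameters shown above. This is a minimal sketch; adjust the device and baud rate for your hardware:
~]# grubby --remove-args="rhgb quiet" --args=console=ttyS0,115200 --update-kernel=DEFAULT
~]# grubby --info=ALL | grep args
The second command lets you confirm that the console argument was added to the entry.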
26.9.3. Configuring a new GRUB 2 file If required to build a new GRUB 2 configuration file, add the following two lines to the /etc/default/grub file: The first line disables the graphical terminal. Note that specifying the GRUB_TERMINAL key overrides values of GRUB_TERMINAL_INPUT and GRUB_TERMINAL_OUTPUT . On the second line, adjust the baud rate, parity, and other values to fit your environment and hardware. A much higher baud rate, for example 115200 , is preferable for tasks such as following log files. Once you have completed the changes in the /etc/default/grub file, it is necessary to update the GRUB 2 configuration file. Rebuild the grub.cfg file by running the grub2-mkconfig -o command as follows: On BIOS-based machines, issue the following command as root : On UEFI-based machines, issue the following command as root : 26.9.4. Using screen to Connect to the Serial Console The screen tool serves as a capable serial terminal. To install it, run as root : To connect to your machine using the serial console, use a command in the following format: By default, if no option is specified, screen uses the standard 9600 baud rate. To set a higher baud rate, enter: Where console_port is ttyS0 , or ttyUSB0 , and so on. To end the session in screen , press Ctrl + a , type :quit and press Enter . See the screen(1) manual page for additional options and detailed information. 26.10. Terminal Menu Editing During Boot Menu entries can be modified and arguments passed to the kernel on boot. This is done using the menu entry editor interface, which is triggered when pressing the e key on a selected menu entry in the boot loader menu. The Esc key discards any changes and reloads the standard menu interface. The c key loads the command line interface. The command line interface is the most basic GRUB 2 interface, but it is also the one that grants the most control. The command line makes it possible to type any relevant GRUB 2 commands followed by the Enter key to execute them. This interface offers some advanced features similar to a shell, including Tab key completion based on context, and Ctrl + a to move to the beginning of a line and Ctrl + e to move to the end of a line. In addition, the arrow , Home , End , and Delete keys work as they do in the bash shell. 26.10.1. Booting to Rescue Mode Rescue mode provides a convenient single-user environment and allows you to repair your system in situations when it is unable to complete a normal booting process. In rescue mode, the system attempts to mount all local file systems and start some important system services, but it does not activate network interfaces or allow more users to be logged into the system at the same time. In Red Hat Enterprise Linux 7, rescue mode is equivalent to single user mode and requires the root password. To enter rescue mode during boot, on the GRUB 2 boot screen, press the e key for edit. Add the following parameter at the end of the linux line on 64-Bit IBM Power Series, the linux16 line on x86-64 BIOS-based systems, or the linuxefi line on UEFI systems: Press Ctrl + a and Ctrl + e to jump to the start and end of the line, respectively. On some systems, Home and End might also work. Note that equivalent parameters, 1 , s , and single , can be passed to the kernel as well. Press Ctrl + x to boot the system with the parameter. 26.10.2. Booting to Emergency Mode Emergency mode provides the most minimal environment possible and allows you to repair your system even in situations when the system is unable to enter rescue mode.
In emergency mode, the system mounts the root file system only for reading, does not attempt to mount any other local file systems, does not activate network interfaces, and only starts a few essential services. In Red Hat Enterprise Linux 7, emergency mode requires the root password. To enter emergency mode, on the GRUB 2 boot screen, press the e key for edit. Add the following parameter at the end of the linux line on 64-Bit IBM Power Series, the linux16 line on x86-64 BIOS-based systems, or the linuxefi line on UEFI systems: Press Ctrl + a and Ctrl + e to jump to the start and end of the line, respectively. On some systems, Home and End might also work. Note that equivalent parameters, emergency and -b , can be passed to the kernel as well. Press Ctrl + x to boot the system with the parameter. 26.10.3. Booting to the Debug Shell The systemd debug shell provides a shell very early in the startup process that can be used to diagnose systemd related boot-up problems. Once in the debug shell, systemctl commands such as systemctl list-jobs and systemctl list-units can be used to look for the cause of boot problems. In addition, the debug option can be added to the kernel command line to increase the number of log messages. For systemd , the kernel command-line option debug is now a shortcut for systemd.log_level=debug . Adding the Debug Shell Command To activate the debug shell only for this session, proceed as follows: On the GRUB 2 boot screen, move the cursor to the menu entry you want to edit and press the e key for edit. Add the following parameter at the end of the linux line on 64-Bit IBM Power Series, the linux16 line on x86-64 BIOS-based systems, or the linuxefi line on UEFI systems: Optionally add the debug option. Press Ctrl + a and Ctrl + e to jump to the start and end of the line, respectively. On some systems, Home and End might also work. Press Ctrl + x to boot the system with the parameter. If required, the debug shell can be set to start on every boot by enabling it with the systemctl enable debug-shell command. Alternatively, the grubby tool can be used to make persistent changes to the kernel command line in the GRUB 2 menu. See Section 26.4, "Making Persistent Changes to a GRUB 2 Menu Using the grubby Tool" for more information on using grubby . Warning Permanently enabling the debug shell is a security risk because no authentication is required to use it. Disable it when the debugging session has ended. Connecting to the Debug Shell During the boot process, the systemd-debug-generator will configure the debug shell on TTY9. Press Ctrl + Alt + F9 to connect to the debug shell. If working with a virtual machine, sending this key combination requires support from the virtualization application. For example, if using Virtual Machine Manager , select Send Key Ctrl+Alt+F9 from the menu. The debug shell does not require authentication; therefore, a prompt similar to the following should be seen on TTY9: [root@localhost /]# If required, to verify you are in the debug shell, enter a command as follows: To return to the default shell, if the boot succeeded, press Ctrl + Alt + F1 . To diagnose startup problems, certain systemd units can be masked by adding systemd.mask= unit_name one or more times on the kernel command line. To start additional processes during the boot process, add systemd.wants= unit_name to the kernel command line. The systemd-debug-generator(8) manual page describes these options.
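If you decide to enable the debug shell persistently, it can be turned on, checked, and turned off again with systemctl . A minimal sketch:
~]# systemctl enable debug-shell
~]# systemctl status debug-shell
~]# systemctl disable debug-shell
As noted in the warning above, leave the service enabled only for as long as the debugging session lasts.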
26.10.4. Changing and Resetting the Root Password Setting up the root password is a mandatory part of the Red Hat Enterprise Linux 7 installation. If you forget or lose the root password it is possible to reset it; however, users who are members of the wheel group can change the root password as follows: Note that in GRUB 2, resetting the password is no longer performed in single-user mode as it was in GRUB included in Red Hat Enterprise Linux 6. The root password is now required to operate in single-user mode as well as in emergency mode. Two procedures for resetting the root password are shown here: Resetting the Root Password Using an Installation Disk takes you to a shell prompt, without having to edit the GRUB 2 menu. It is the shorter of the two procedures and it is also the recommended method. You can use a boot disk or a normal Red Hat Enterprise Linux 7 installation disk. Resetting the Root Password Using rd.break makes use of rd.break to interrupt the boot process before control is passed from initramfs to systemd . The disadvantage of this method is that it requires more steps, including having to edit the GRUB 2 menu, and involves choosing between a possibly time consuming SELinux file relabel or changing the SELinux enforcing mode and then restoring the SELinux security context for /etc/shadow when the boot completes. Resetting the Root Password Using an Installation Disk Start the system and when BIOS information is displayed, select the option for a boot menu and select to boot from the installation disk. Choose Troubleshooting . Choose Rescue a Red Hat Enterprise Linux System . Choose Continue which is the default option. At this point you will be prompted for a passphrase if an encrypted file system is found. Press OK to acknowledge the information displayed until the shell prompt appears. Change the file system root as follows: Enter the passwd command and follow the instructions displayed on the command line to change the root password. Remove the /.autorelabel file to prevent a time consuming SELinux relabel of the disk: Enter the exit command to exit the chroot environment. Enter the exit command again to resume the initialization and finish the system boot. Resetting the Root Password Using rd.break Start the system and, on the GRUB 2 boot screen, press the e key for edit. Remove the rhgb and quiet parameters from the end, or near the end, of the linux16 line, or linuxefi on UEFI systems. Press Ctrl + a and Ctrl + e to jump to the start and end of the line, respectively. On some systems, Home and End might also work. Important The rhgb and quiet parameters must be removed in order to enable system messages. Add the following parameters at the end of the linux line on 64-Bit IBM Power Series, the linux16 line on x86-64 BIOS-based systems, or the linuxefi line on UEFI systems: Adding the enforcing=0 option enables omitting the time consuming SELinux relabeling process. The initramfs will stop before passing control from the initramfs to systemd , enabling you to work with the root file system. Note that the initramfs prompt will appear on the last console specified on the Linux line. Press Ctrl + x to boot the system with the changed parameters. With an encrypted file system, a password is required at this point. However the password prompt might not appear as it is obscured by logging messages. You can press the Backspace key to see the prompt. Release the key and enter the password for the encrypted file system, while ignoring the logging messages. The initramfs switch_root prompt appears.
The file system is mounted read-only on /sysroot/ . You will not be allowed to change the password if the file system is not writable. Remount the file system as writable: The file system is remounted with write enabled. Change the file system's root as follows: The prompt changes to sh-4.2# . Enter the passwd command and follow the instructions displayed on the command line to change the root password. Note that if the system is not writable, the passwd tool fails with the following error: Updating the password file results in a file with the incorrect SELinux security context. To relabel all files on system boot, enter the following command: Alternatively, to save the time it takes to relabel a large disk, you can omit this step provided you included the enforcing=0 option in step 3. Remount the file system as read only: Enter the exit command to exit the chroot environment. Enter the exit command again to resume the initialization and finish the system boot. With an encrypted file system, a password or passphrase is required at this point. However the password prompt might not appear as it is obscured by logging messages. You can press and hold the Backspace key to see the prompt. Release the key and enter the password for the encrypted file system, while ignoring the logging messages. Note Note that the SELinux relabeling process can take a long time. A system reboot will occur automatically when the process is complete. If you added the enforcing=0 option in step 3 and omitted the touch /.autorelabel command in step 8, enter the following command to restore the /etc/shadow file's SELinux security context: Enter the following commands to turn SELinux policy enforcement back on and verify that it is on: 26.11. Unified Extensible Firmware Interface (UEFI) Secure Boot The Unified Extensible Firmware Interface ( UEFI ) Secure Boot technology ensures that the system firmware checks whether the system boot loader is signed with a cryptographic key authorized by a database of public keys contained in the firmware. With signature verification in the next-stage boot loader and kernel, it is possible to prevent the execution of kernel space code which has not been signed by a trusted key. A chain of trust is established from the firmware to the signed drivers and kernel modules as follows. The first-stage boot loader, shim.efi , is signed by a UEFI private key and authenticated by a public key, signed by a certificate authority (CA), stored in the firmware database. The shim.efi contains the Red Hat public key, "Red Hat Secure Boot (CA key 1)", which is used to authenticate both the GRUB 2 boot loader, grubx64.efi , and the Red Hat kernel. The kernel in turn contains public keys to authenticate drivers and modules. Secure Boot is the boot path validation component of the Unified Extensible Firmware Interface (UEFI) specification. The specification defines: a programming interface for cryptographically protected UEFI variables in non-volatile storage, how the trusted X.509 root certificates are stored in UEFI variables, validation of UEFI applications like boot loaders and drivers, procedures to revoke known-bad certificates and application hashes. UEFI Secure Boot does not prevent the installation or removal of second-stage boot loaders, nor require explicit user confirmation of such changes. Signatures are verified during booting, not when the boot loader is installed or updated. Therefore, UEFI Secure Boot does not stop boot path manipulations; it helps in the detection of unauthorized changes.
A new boot loader or kernel will work as long as it is signed by a key trusted by the system. 26.11.1. UEFI Secure Boot Support in Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 7 includes support for the UEFI Secure Boot feature, which means that Red Hat Enterprise Linux 7 can be installed and run on systems where UEFI Secure Boot is enabled. On UEFI-based systems with the Secure Boot technology enabled, all drivers that are loaded must be signed with a trusted key, otherwise the system will not accept them. All drivers provided by Red Hat are signed by one of Red Hat's private keys and authenticated by the corresponding Red Hat public key in the kernel. If you want to load externally built drivers, drivers that are not provided on the Red Hat Enterprise Linux DVD, you must make sure these drivers are signed as well. Information on signing custom drivers is available in Red Hat Enterprise Linux 7 Kernel Administration Guide . Restrictions Imposed by UEFI Secure Boot As UEFI Secure Boot support in Red Hat Enterprise Linux 7 is designed to ensure that the system only runs kernel mode code after its signature has been properly authenticated, certain restrictions exist. GRUB 2 module loading is disabled as there is no infrastructure for signing and verification of GRUB 2 modules, which means allowing them to be loaded would constitute execution of untrusted code inside the security perimeter that Secure Boot defines. Instead, Red Hat provides a signed GRUB 2 binary that has all the modules supported on Red Hat Enterprise Linux 7 already included. More detailed information is available in the Red Hat Knowledgebase article Restrictions Imposed by UEFI Secure Boot . 26.12. Additional Resources Please see the following resources for more information on the GRUB 2 boot loader: Installed Documentation /usr/share/doc/grub2-tools- version-number / - This directory contains information about using and configuring GRUB 2. version-number corresponds to the version of the GRUB 2 package installed. info grub2 - The GRUB 2 info page contains a tutorial, a user reference manual, a programmer reference manual, and a FAQ document about GRUB 2 and its usage. grubby(8) - The manual page for the command-line tool for configuring GRUB and GRUB 2. new-kernel-pkg(8) - The manual page for the tool to script kernel installation. Installable and External Documentation /usr/share/doc/kernel-doc- kernel_version /Documentation/serial-console.txt - This file, which is provided by the kernel-doc package, contains information on the serial console. Before accessing the kernel documentation, you must run the following command as root : Red Hat Installation Guide - The Installation Guide provides basic information on GRUB 2, for example, installation, terminology, interfaces, and commands.
[ "menuentry 'Red Hat Enterprise Linux Server' --class red --class gnu-linux --class gnu --class os USDmenuentry_id_option 'gnulinux-simple-c60731dc-9046-4000-9182-64bdcce08616' { load_video set gfxpayload=keep insmod gzio insmod part_msdos insmod xfs set root='hd0,msdos1' if [ xUSDfeature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 19d9e294-65f8-4e37-8e73-d41d6daa6e58 else search --no-floppy --fs-uuid --set=root 19d9e294-65f8-4e37-8e73-d41d6daa6e58 fi echo 'Loading Linux 3.8.0-0.40.el7.x86_64 ...' linux16 /vmlinuz-3.8.0-0.40.el7.x86_64 root=/dev/mapper/rhel-root ro rd.md=0 rd.dm=0 rd.lvm.lv=rhel/swap crashkernel=auto rd.luks=0 vconsole.keymap=us rd.lvm.lv=rhel/root rhgb quiet echo 'Loading initial ramdisk ...' initrd /initramfs-3.8.0-0.40.el7.x86_64.img }", "linux16 /vmlinuz-3.10.0-0.rc4.59.el7.x86_64 root=/dev/mapper/rhel-root ro rd.md=0 rd.dm=0 rd.lvm.lv=rhel/swap crashkernel=auto rd.luks=0 vconsole.keymap=us rd.lvm.lv=rhel/root rhgb quiet emergency", "~]# grubby --default-kernel /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64", "~]# grubby --default-index 0", "~]# grubby --set-default /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64", "~]USD grubby --info=ALL", "~]USD grubby --info /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64 index=0 kernel=/boot/vmlinuz-3.10.0-229.4.2.el7.x86_64 args=\"ro rd.lvm.lv=rhel/root crashkernel=auto rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet LANG=en_US.UTF-8\" root=/dev/mapper/rhel-root initrd=/boot/initramfs-3.10.0-229.4.2.el7.x86_64.img title=Red Hat Enterprise Linux Server (3.10.0-229.4.2.el7.x86_64) 7.0 (Maipo)", "grubby --remove-args=\" argX argY \" --args=\" argA argB \" --update-kernel /boot/ kernel", "~]# grubby --remove-args=\"rhgb quiet\" --args=console=ttyS0,115200 --update-kernel /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64", "~]# grubby --info /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64 index=0 kernel=/boot/vmlinuz-3.10.0-229.4.2.el7.x86_64 args=\"ro rd.lvm.lv=rhel/root crashkernel=auto rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us LANG=en_US.UTF-8 ttyS0,115200 \" root=/dev/mapper/rhel-root initrd=/boot/initramfs-3.10.0-229.4.2.el7.x86_64.img title=Red Hat Enterprise Linux Server (3.10.0-229.4.2.el7.x86_64) 7.0 (Maipo)", "~]# grubby --update-kernel=ALL --args=console=ttyS0,115200", "~]# grubby --args=vconsole.font=latarcyrheb-sun32 --update-kernel /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64 index=0 kernel=/boot/vmlinuz-3.10.0-229.4.2.el7.x86_64 args=\"ro rd.lvm.lv=rhel/root crashkernel=auto rd.lvm.lv=rhel/swap vconsole.font= latarcyrheb-sun32 vconsole.keymap=us LANG=en_US.UTF-8\" root=/dev/mapper/rhel-root initrd=/boot/initramfs-3.10.0-229.4.2.el7.x86_64.img title=Red Hat Enterprise Linux Server (3.10.0-229.4.2.el7.x86_64) 7.0 (Maipo)", "GRUB_TIMEOUT_STYLE=hidden", "~]# cat /etc/sysconfig/kernel UPDATEDEFAULT specifies if new-kernel-pkg should make new kernels the default UPDATEDEFAULT=yes DEFAULTKERNEL specifies the default kernel package type DEFAULTKERNEL=kernel", "~]# grub2-set-default 2", "~]# awk -F\\' 'USD1==\"menuentry \" {print USD2}' /etc/grub2.cfg", "~]# grub2-mkconfig -o /boot/grub2/grub.cfg", "~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "GRUB_CMDLINE_LINUX=\"console=tty0 console=ttyS0,9600n8\"", "~]# grub2-mkconfig -o /boot/grub2/grub.cfg", "~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "#!/bin/sh exec tail -n +3 USD0 This file provides an easy way to add custom 
menu entries. Simply type the menu entries you want to add after this comment. Be careful not to change the 'exec tail' line above.", "menuentry \"<Title>\" { <Data> }", "#!/bin/sh exec tail -n +3 USD0 This file provides an easy way to add custom menu entries. Simply type the menu entries you want to add after this comment. Be careful not to change the 'exec tail' line above. menuentry 'First custom entry' --class red --class gnu-linux --class gnu --class os USDmenuentry_id_option 'gnulinux-3.10.0-67.el7.x86_64-advanced-32782dd0-4b47-4d56-a740-2076ab5e5976' { load_video set gfxpayload=keep insmod gzio insmod part_msdos insmod xfs set root='hd0,msdos1' if [ xUSDfeature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1' 7885bba1-8aa7-4e5d-a7ad-821f4f52170a else search --no-floppy --fs-uuid --set=root 7885bba1-8aa7-4e5d-a7ad-821f4f52170a fi linux16 /vmlinuz-3.10.0-67.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=rhel/swap vconsole.keymap=us crashkernel=auto rhgb quiet LANG=en_US.UTF-8 initrd16 /initramfs-3.10.0-67.el7.x86_64.img } menuentry 'Second custom entry' --class red --class gnu-linux --class gnu --class os USDmenuentry_id_option 'gnulinux-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c-advanced-32782dd0-4b47-4d56-a740-2076ab5e5976' { load_video insmod gzio insmod part_msdos insmod xfs set root='hd0,msdos1' if [ xUSDfeature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1' 7885bba1-8aa7-4e5d-a7ad-821f4f52170a else search --no-floppy --fs-uuid --set=root 7885bba1-8aa7-4e5d-a7ad-821f4f52170a fi linux16 /vmlinuz-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=rhel/swap vconsole.keymap=us crashkernel=auto rhgb quiet initrd16 /initramfs-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c.img }", "~]# grub2-mkconfig -o /boot/grub2/grub.cfg", "~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "~]# grub2-setpassword", "Enter password: Confirm password:", "[file contents truncated] menuentry 'Red Hat Enterprise Linux Server (3.10.0-327.18.2.rt56.223.el7_2.x86_64) 7.2 (Maipo)' --class red --class gnu-linux --class gnu --class os --unrestricted USDmenuentry_id_option 'gnulinux-3.10.0-327.el7.x86_64-advanced-c109825c-de2f-4340-a0ef-4f47d19fe4bf' { load_video set gfxpayload=keep [file contents truncated]", "~]# grub2-install /dev/sda", "~]# yum reinstall grub2-efi shim", "~]# yum reinstall grub2-efi shim grub2-tools", "~]# yum reinstall grub2-tools", "~]# grub2-mkconfig -o /boot/grub2/grub.cfg", "~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "~]# yum remove grub", "~]# yum install grub2", "~]# yum install grub2-efi-x64 shim-x64", "~]# grub2-install /dev/<DEVICE_NAME> --grub-setup=/bin/true", "~]# efibootmgr -c -L 'Red Hat Enterprise Linux 7' -d /dev/device_name -p 1 -l '\\EFI\\redhat\\shimx64.efi'", "~]# grub2-mkconfig -o /boot/grub2/grub.cfg", "~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "title GRUB 2 Test root (hd0,0) kernel /grub2/i386-pc/core.img boot", "title GRUB 2 Test root (hd0,0) kernel /boot/grub2/i386-pc/core.img boot", "~]# grub2-install /dev/sdX", "~]# rm /boot/grub/grub.conf", "~]# reboot", "~]# rm /boot/efi/EFI/redhat/grub.efi ~]# rm /boot/efi/EFI/redhat/grub.conf", "~]# rm /boot/efi/EFI/redhat/*.preupg", "~]# efibootmgr -v | grep '\\\\EFI\\\\redhat\\\\grub.efi'", "Boot0001* Linux 
HD(1,GPT,542e410f-cbf2-4cce-9f5d-61c4764a5d54,0x800,0x64000)/File(\\EFI\\redhat\\grub.efi)", "~]# efibootmgr -Bb 0001", "console= ttyS0,9600n8", "linux16 /vmlinuz-3.10.0-0.rc4.59.el7.x86_64 root=/dev/mapper/rhel-root ro rd.md=0 rd.dm=0 rd.lvm.lv=rhel/swap crashkernel=auto rd.luks=0 vconsole.keymap=us rd.lvm.lv=rhel/root console=ttyS0,9600", "~]# grubby --remove-args=\"rhgb quiet\" --args=console=ttyS0,9600 --update-kernel=DEFAULT", "GRUB_TERMINAL=\"serial\" GRUB_SERIAL_COMMAND=\"serial --speed=9600 --unit=0 --word=8 --parity=no --stop=1\"", "~]# grub2-mkconfig -o /boot/grub2/grub.cfg", "~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "~]# yum install screen", "screen /dev/ console_port baud_rate", "~]USD screen /dev/console_port 115200", "systemd.unit=rescue.target", "systemd.unit=emergency.target", "systemd.debug-shell", "/]# systemctl status USDUSD ● debug-shell.service - Early root shell on /dev/tty9 FOR DEBUGGING ONLY Loaded: loaded (/usr/lib/systemd/system/debug-shell.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2015-08-05 11:01:48 EDT; 2min ago Docs: man:sushell(8) Main PID: 450 (bash) CGroup: /system.slice/debug-shell.service ├─ 450 /bin/bash └─1791 systemctl status 450", "~]USD sudo passwd root", "sh-4.2# chroot /mnt/sysimage", "sh-4.2# rm -f /.autorelabel", "rd.break enforcing=0", "switch_root:/# mount -o remount,rw /sysroot", "switch_root:/# chroot /sysroot", "Authentication token manipulation error", "sh-4.2# touch /.autorelabel", "sh-4.2# mount -o remount,ro /", "~]# restorecon /etc/shadow", "~]# setenforce 1 ~]# getenforce Enforcing", "~]# yum install kernel-doc" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-working_with_the_grub_2_boot_loader
Chapter 9. Playbooks
Chapter 9. Playbooks You can use automation content navigator to interactively troubleshoot your playbook. For more information about troubleshooting a playbook with automation content navigator, see Troubleshooting Ansible content with automation content navigator in the Automation Content Navigator Creator Guide.
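As a brief, hedged illustration of the workflow described above, the following commands show one way to launch automation content navigator against a playbook; the playbook name site.yml is a placeholder rather than a file shipped with the product.

# Open the playbook run in the interactive text user interface, where you can drill into plays and tasks
ansible-navigator run site.yml
# Run the same playbook with plain stdout output, similar to ansible-playbook
ansible-navigator run site.yml --mode stdout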
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/troubleshooting_ansible_automation_platform/troubleshoot-playbooks
Chapter 3. Legacy Security Subsystem
Chapter 3. Legacy Security Subsystem 3.1. Configure a Security Domain to use LDAP Security domains can be configured to use an LDAP server for authentication and authorization by using a login module. The basics of security domains and login modules are covered in the JBoss EAP Security Architecture guide . LdapExtended is the preferred login module for integrating with LDAP servers (including Active Directory), but there are several other LDAP login modules that can be used as well. Specifically, the Ldap , AdvancedLdap , and AdvancedAdLdap login modules can also be used to configure a security domain to use LDAP. This section uses the LdapExtended login module to illustrate how to create a security domain that uses LDAP for authentication and authorization, but the other LDAP login modules can be used as well. For more details on the other LDAP login modules, see the JBoss EAP Login Module Reference . Important In cases where the legacy security subsystem uses an LDAP server to perform authentication, JBoss EAP will return a 500 , or internal server error, error code if that LDAP server is unreachable. This behavior differs from previous versions of JBoss EAP, which returned a 401 , or unauthorized, error code under the same conditions. 3.1.1. LdapExtended Login Module LdapExtended ( org.jboss.security.auth.spi.LdapExtLoginModule ) is a login module implementation that uses searches to locate the bind user and associated roles on an LDAP server. The roles query recursively follows DNs to navigate a hierarchical role structure. For the vast majority of cases when using LDAP with security domains, the LdapExtended login module should be used, especially with LDAP implementations that are not Active Directory. For a full list of configuration options for the LdapExtended login module, see the LdapExtended login module section in the JBoss EAP Login Module Reference . The authentication happens as follows: An initial bind to the LDAP server is done using the bindDN and bindCredential options. The bindDN is an LDAP user with the ability to search both the baseCtxDN and rolesCtxDN trees for the user and roles. The user DN to authenticate against is queried using the filter specified by the baseFilter attribute. The resulting user DN is authenticated by binding to the LDAP server using the user DN as the InitialLdapContext environment Context.SECURITY_PRINCIPAL . The Context.SECURITY_CREDENTIALS property is set to the String password obtained by the callback handler. 3.1.1.1. Configure a Security Domain to use the LdapExtended Login Module Example Data (LDIF format) CLI Commands for Adding the LdapExtended Login Module /subsystem=security/security-domain=testLdapExtendedExample:add(cache-type=default) /subsystem=security/security-domain=testLdapExtendedExample/authentication=classic:add /subsystem=security/security-domain=testLdapExtendedExample/authentication=classic/login-module=LdapExtended:add(code=LdapExtended, flag=required, module-options=[ ("java.naming.factory.initial"=>"com.sun.jndi.ldap.LdapCtxFactory"), ("java.naming.provider.url"=>"ldap://localhost:10389"), ("java.naming.security.authentication"=>"simple"), ("bindDN"=>"uid=ldap,ou=Users,dc=jboss,dc=org"), ("bindCredential"=>"randall"), ("baseCtxDN"=>"ou=Users,dc=jboss,dc=org"), ("baseFilter"=>"(uid={0})"), ("rolesCtxDN"=>"ou=groups,dc=jboss,dc=org"), ("roleFilter"=>"(uniqueMember={1})"), ("roleAttributeID"=>"uid")]) reload Note The management CLI commands shown assume that you are running a JBoss EAP standalone server. 
For more details on using the management CLI for a JBoss EAP managed domain, see the JBoss EAP Management CLI Guide . 3.1.1.1.1. Configure a Security Domain to use the LdapExtended Login Module for Active Directory For Microsoft Active Directory, the LdapExtended login module can be used. The example below represents the configuration for a default Active Directory configuration. Some Active Directory configurations may require searching against the Global Catalog on port 3268 instead of the usual port 389 . This is most likely when the Active Directory forest includes multiple domains. Example Configuration for the LdapExtended Login Module for a Default AD Configuration The example below implements a recursive role search within Active Directory. The key difference between this example and the default Active Directory example is that the role search has been replaced to search the member attribute using the DN of the user. The login module then uses the DN of the role to find groups of which the group is a member. Example Configuration for the LdapExtended Login Module for a Default AD Configuration with Recursive Search 3.2. Configure a Security Domain to use a Database Similar to LDAP, security domains can be configured to use a database for authentication and authorization by using a login module. 3.2.1. Database Login Module The Database login module is a JDBC login module that supports authentication and role mapping. This login module is used if username, password, and role information are stored in a relational database. This works by providing a reference to logical tables containing principals and roles in the expected format. For example: Table Principals(PrincipalID text, Password text) Table Roles(PrincipalID text, Role text, RoleGroup text) The Principals table associates the user PrincipalID with the valid password, and the Roles table associates the user PrincipalID with its role sets. The roles used for user permissions must be contained in rows with a RoleGroup column value of Roles . The tables are logical in that users can specify the SQL query that the login module uses. The only requirement is that the java.sql.ResultSet has the same logical structure as the Principals and Roles tables described previously. The actual names of the tables and columns are not relevant as the results are accessed based on the column index. To clarify this concept, consider a database with two tables, Principals and Roles , as already declared. The following statements populate the tables with the following data: PrincipalID java with a password of echoman in the Principals table PrincipalID java with a role named Echo in the RolesRoleGroup in the Roles table PrincipalID java with a role named caller-java in the CallerPrincipalRoleGroup in the Roles table For a full list of configuration options for the Database login module, see the Database login module section in the JBoss EAP Login Module Reference . 3.2.1.1. Configure a Security Domain to use the Database Login Module Before configuring a security domain to use the Database login module, a datasource must be properly configured. For more information on creating and configuring datasources in JBoss EAP, see the Datasource Management section of the JBoss EAP Configuration Guide . Once a datasource has been properly configured, a security domain can be configured to use the Database login module. 
The below example assumes a datasource named MyDatabaseDS has been created and properly configured with a database that is constructed with the following: CREATE TABLE Users(username VARCHAR(64) PRIMARY KEY, passwd VARCHAR(64)) CREATE TABLE UserRoles(username VARCHAR(64), role VARCHAR(32)) CLI Commands for Adding the Database Login Module /subsystem=security/security-domain=testDB:add /subsystem=security/security-domain=testDB/authentication=classic:add /subsystem=security/security-domain=testDB/authentication=classic/login-module=Database:add(code=Database,flag=required,module-options=[("dsJndiName"=>"java:/MyDatabaseDS"),("principalsQuery"=>"select passwd from Users where username=?"),("rolesQuery"=>"select role, 'Roles' from UserRoles where username=?")]) reload 3.3. Configure a Security Domain to use a Properties File Security domains can also be configured to use a filesystem as an identity store for authentication and authorization by using a login module. 3.3.1. UsersRoles Login Module UsersRoles is a simple login module that supports multiple users and user roles loaded from Java properties files. The primary purpose of this login module is to easily test the security settings of multiple users and roles using properties files deployed with the application. The default username-to-password mapping filename is users.properties and the default username-to-roles mapping filename is roles.properties . Note This login module supports password stacking, password hashing, and unauthenticated identity. The properties files are loaded during initialization using the initialize method thread context class loader. This means that these files can be placed on the classpath of the Jakarta EE deployment (for example, into the WEB-INF/classes folder in the WAR archive), or into any directory on the server classpath. For a full list of configuration options for the UsersRoles login module, see the UsersRoles login module section in the JBoss EAP Login Module Reference . 3.3.1.1. Configure a Security Domain to use the UsersRoles Login Module The below example assumes the following files have been created and are available on the application's classpath: sampleapp-users.properties sampleapp-roles.properties CLI Commands for Adding the UsersRoles Login Module /subsystem=security/security-domain=sampleapp:add /subsystem=security/security-domain=sampleapp/authentication=classic:add /subsystem=security/security-domain=sampleapp/authentication=classic/login-module=UsersRoles:add(code=UsersRoles,flag=required,module-options=[("usersProperties"=>"sampleapp-users.properties"),("rolesProperties"=>"sampleapp-roles.properties")]) reload 3.4. Configure a Security Domain to use Certificate-Based Authentication JBoss EAP provides you with the ability to use certificate-based authentication with security domains to secure web applications or Jakarta Enterprise Beans. Important Before you can configure certificate-based authentication, you need to have Two-Way SSL/TLS for Applications enabled and configured, which requires X509 certificates configured for both the JBoss EAP instance as well as any clients accessing the web application or Jakarta Enterprise Beans secured by the security domain. Once the certificates, truststores, and two-way SSL/TLS are configured, you then can proceed with configuring a security domain that uses certificate-based authentication, configuring an application to use that security domain, and configuring your client to use the client certificate. 3.4.1. 
Creating a Security Domain with Certificate-Based Authentication To create a security domain that uses certificate-based authentication, you need to specify a truststore as well as a Certificate login module or one of its subclasses. The truststore must contain any trusted client certificates used for authentication, or it must contain the certificate of the certificate authority used to sign the client's certificate. The login module is used to authenticate the certificate presented by the client using the configured truststore. The security domain as a whole also must provide a way to map a role to the principal once it is authenticated. The Certificate login module itself will not map any role information to the principal, but it may be combined with another login module to do so. Alternatively, two subclasses of the Certificate login module, CertificateRoles and DatabaseCertificate , do provide a way to map roles to a principal after it is authenticated. The below example shows how to configure a security domain with certificate-based authentication using the CertificateRoles login module. Warning When performing authentication, the security domain will use the same certificate presented by the client when establishing two-way SSL/TLS. As a result, the client must use the same certificate for BOTH two-way SSL/TLS and the certificate-based authentication with the application or Jakarta Enterprise Beans. Example Security Domain with Certificate-Based Authentication Note The above example uses the CertificateRoles login module to handle authentication and map roles to authenticated principals. It does so by referencing a properties file using the rolesProperties attribute. This file lists usernames and roles using the following format: Since usernames are presented as the DN from the provided certificate, for example CN=valid-client, OU=JBoss, O=Red Hat, L=Raleigh, ST=NC, C=US , you have to escape special characters such as = and spaces when using a properties file: Example Roles Properties File To view the DN of the certificate: 3.4.2. Configure an Application to use a Security Domain with Certificate-Based Authentication Similar to configuring an application to use a security domain with other forms of authentication, you need to configure both the jboss-web.xml and web.xml files appropriately. For jboss-web.xml , add a reference to the security domain you configured for certificate-based authentication. Example jboss-web.xml <jboss-web> <security-domain>cert-roles-domain</security-domain> </jboss-web> For web.xml , set the <auth-method> element in <login-config> to CLIENT-CERT . You also need to define <security-constraint> as well as <security-role> . Example web.xml <web-app> <!-- URL for secured portion of application--> <security-constraint> <web-resource-collection> <web-resource-name>secure</web-resource-name> <url-pattern>/secure/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>All</role-name> </auth-constraint> </security-constraint> <!-- Security roles referenced by this web application --> <security-role> <description>The role that is required to log in to the application</description> <role-name>All</role-name> </security-role> <login-config> <auth-method>CLIENT-CERT</auth-method> <realm-name>cert-roles-domain</realm-name> </login-config> </web-app> 3.4.3. 
Configure the Client For a client to authenticate against an application secured with certificate-based authentication, the client needs access to a client certificate that is contained in the JBoss EAP instance's truststore. For example, if accessing the application using a browser, the client will need to import the trusted certificate into the browser's truststore. 3.5. Configure Caching for a Security Domain You can specify a cache for a security domain to speed up authentication checks. By default, a security domain uses a simple map as the cache. This default cache is a Least Recently Used (LRU) cache with a maximum of 1000 entries. Alternatively, you can set a security domain to use an Infinispan cache, or disable caching altogether. 3.5.1. Setting the Cache Type for a Security Domain Prerequisites If you are configuring a security domain to use an Infinispan cache, you must first create an Infinispan cache container named security that contains a default cache that the security domain will use. Important You can only define one Infinispan cache configuration for use with security domains. Although you can have multiple security domains that use an Infinispan cache, each security domain creates its own cache instance from the one Infinispan cache configuration. See the JBoss EAP Configuration Guide for more information on creating a cache container . You can use either the management console or management CLI to set a security domain's cache type. To use the management console: Navigate to Configuration Subsystems Security (Legacy) . Select the security domain from the list and click View . Click Edit , and for the Cache Type field, select either default or infinispan . Click Save . To use the management CLI, use the following command: For example, to set the other security domain to use an Infinispan cache: 3.5.2. Listing and Flushing Principals Listing Principals in the Cache You can see the principals that are stored in a security domain's cache using the following management CLI command: Flushing Principals from the Cache If required, you can flush principals from a security domain's cache. To flush a specific principal, use the following management CLI command: To flush all principals from the cache, use the following management CLI command: 3.5.3. Disabling Caching for a Security Domain You can use either the management console or management CLI to disable caching for a security domain. To use the management console: Navigate to Configuration Subsystems Security (Legacy) . Select the security domain from the list and click View . Click Edit and select the blank value for the Cache Type . Click Save . To use the management CLI, use the following command:
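After adding any of the login modules described above, it can help to read back what was actually written to the configuration before testing logins. The following is a minimal sketch using the management CLI from the command line; it assumes a standalone server, the usual EAP_HOME layout, and the testLdapExtendedExample security domain from the earlier example, all of which should be adjusted for your environment.

# Dump the complete definition of a security domain, including its login modules and options
$EAP_HOME/bin/jboss-cli.sh --connect --command="/subsystem=security/security-domain=testLdapExtendedExample:read-resource(recursive=true)"
# Read a single login module entry to confirm its code, flag, and module options
$EAP_HOME/bin/jboss-cli.sh --connect --command="/subsystem=security/security-domain=testLdapExtendedExample/authentication=classic/login-module=LdapExtended:read-resource"

The same pattern works for the testDB, sampleapp, and cert-roles-domain examples by substituting the security domain and login module names.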
[ "dn: uid=jduke,ou=Users,dc=jboss,dc=org objectClass: inetOrgPerson objectClass: person objectClass: top cn: Java Duke sn: duke uid: jduke userPassword: theduke ============================= dn: uid=hnelson,ou=Users,dc=jboss,dc=org objectClass: inetOrgPerson objectClass: person objectClass: top cn: Horatio Nelson sn: Nelson uid: hnelson userPassword: secret ============================= dn: ou=groups,dc=jboss,dc=org objectClass: top objectClass: organizationalUnit ou: groups ============================= dn: uid=ldap,ou=Users,dc=jboss,dc=org objectClass: inetOrgPerson objectClass: person objectClass: top cn: LDAP sn: Service uid: ldap userPassword: randall ============================= dn: ou=Users,dc=jboss,dc=org objectClass: top objectClass: organizationalUnit ou: Users ============================= dn: dc=jboss,dc=org objectclass: top objectclass: domain dc: jboss ============================= dn: uid=GroupTwo,ou=groups,dc=jboss,dc=org objectClass: top objectClass: groupOfNames objectClass: uidObject cn: GroupTwo member: uid=jduke,ou=Users,dc=jboss,dc=org uid: GroupTwo ============================= dn: uid=GroupThree,ou=groups,dc=jboss,dc=org objectClass: top objectClass: groupOfUniqueNames objectClass: uidObject cn: GroupThree uid: GroupThree uniqueMember: uid=GroupOne,ou=groups,dc=jboss,dc=org ============================= dn: uid=HTTP,ou=Users,dc=jboss,dc=org objectClass: inetOrgPerson objectClass: person objectClass: top cn: HTTP sn: Service uid: HTTP userPassword: httppwd ============================= dn: uid=GroupOne,ou=groups,dc=jboss,dc=org objectClass: top objectClass: groupOfUniqueNames objectClass: uidObject cn: GroupOne uid: GroupOne uniqueMember: uid=jduke,ou=Users,dc=jboss,dc=org uniqueMember: uid=hnelson,ou=Users,dc=jboss,dc=org", "/subsystem=security/security-domain=testLdapExtendedExample:add(cache-type=default) /subsystem=security/security-domain=testLdapExtendedExample/authentication=classic:add /subsystem=security/security-domain=testLdapExtendedExample/authentication=classic/login-module=LdapExtended:add(code=LdapExtended, flag=required, module-options=[ (\"java.naming.factory.initial\"=>\"com.sun.jndi.ldap.LdapCtxFactory\"), (\"java.naming.provider.url\"=>\"ldap://localhost:10389\"), (\"java.naming.security.authentication\"=>\"simple\"), (\"bindDN\"=>\"uid=ldap,ou=Users,dc=jboss,dc=org\"), (\"bindCredential\"=>\"randall\"), (\"baseCtxDN\"=>\"ou=Users,dc=jboss,dc=org\"), (\"baseFilter\"=>\"(uid={0})\"), (\"rolesCtxDN\"=>\"ou=groups,dc=jboss,dc=org\"), (\"roleFilter\"=>\"(uniqueMember={1})\"), (\"roleAttributeID\"=>\"uid\")]) reload", "/subsystem=security/security-domain=AD_Default:add(cache-type=default) /subsystem=security/security-domain=AD_Default/authentication=classic:add /subsystem=security/security-domain=AD_Default/authentication=classic/login-module=LdapExtended:add(code=LdapExtended,flag=required,module-options=[ (\"java.naming.provider.url\"=>\"ldap://ldaphost.jboss.org\"),(\"bindDN\"=>\"JBOSSsearchuser\"), (\"bindCredential\"=>\"password\"), (\"baseCtxDN\"=>\"CN=Users,DC=jboss,DC=org\"), (\"baseFilter\"=>\"(sAMAccountName={0})\"), (\"rolesCtxDN\"=>\"CN=Users,DC=jboss,DC=org\"), (\"roleFilter\"=>\"(sAMAccountName={0})\"), (\"roleAttributeID\"=>\"memberOf\"), (\"roleAttributeIsDN\"=>\"true\"), (\"roleNameAttributeID\"=>\"cn\"), (\"searchScope\"=>\"ONELEVEL_SCOPE\"), (\"allowEmptyPasswords\"=>\"false\")]) reload", "/subsystem=security/security-domain=AD_Recursive:add(cache-type=default) 
/subsystem=security/security-domain=AD_Recursive/authentication=classic:add /subsystem=security/security-domain=AD_Recursive/authentication=classic/login-module=LdapExtended:add(code=LdapExtended,flag=required,module-options=[(\"java.naming.provider.url\"=>\"ldap://ldaphost.jboss.org\"), (\"java.naming.referral\"=>\"follow\"), (\"bindDN\"=>\"JBOSSsearchuser\"), (\"bindCredential\"=>\"password\"), (\"baseCtxDN\"=>\"CN=Users,DC=jboss,DC=org\"), (\"baseFilter\"=>\"(sAMAccountName={0})\"), (\"rolesCtxDN\"=>\"CN=Users,DC=jboss,DC=org\"), (\"roleFilter\"=>\"(member={1})\"), (\"roleAttributeID\"=>\"cn\"), (\"roleAttributeIsDN\"=>\"false\"), (\"roleRecursion\"=>\"2\"), (\"searchScope\"=>\"ONELEVEL_SCOPE\"), (\"allowEmptyPasswords\"=>\"false\")]) reload", "Table Principals(PrincipalID text, Password text) Table Roles(PrincipalID text, Role text, RoleGroup text)", "CREATE TABLE Users(username VARCHAR(64) PRIMARY KEY, passwd VARCHAR(64)) CREATE TABLE UserRoles(username VARCHAR(64), role VARCHAR(32))", "/subsystem=security/security-domain=testDB:add /subsystem=security/security-domain=testDB/authentication=classic:add /subsystem=security/security-domain=testDB/authentication=classic/login-module=Database:add(code=Database,flag=required,module-options=[(\"dsJndiName\"=>\"java:/MyDatabaseDS\"),(\"principalsQuery\"=>\"select passwd from Users where username=?\"),(\"rolesQuery\"=>\"select role, 'Roles' from UserRoles where username=?\")]) reload", "/subsystem=security/security-domain=sampleapp:add /subsystem=security/security-domain=sampleapp/authentication=classic:add /subsystem=security/security-domain=sampleapp/authentication=classic/login-module=UsersRoles:add(code=UsersRoles,flag=required,module-options=[(\"usersProperties\"=>\"sampleapp-users.properties\"),(\"rolesProperties\"=>\"sampleapp-roles.properties\")]) reload", "/subsystem=security/security-domain=cert-roles-domain:add /subsystem=security/security-domain=cert-roles-domain/jsse=classic:add(truststore={password=secret, url=\"/path/to/server.truststore.jks\"}, keystore={password=secret, url=\"/path/to/server.keystore.jks\"}, client-auth=true) /subsystem=security/security-domain=cert-roles-domain/authentication=classic:add /subsystem=security/security-domain=cert-roles-domain/authentication=classic/login-module=CertificateRoles:add(code=CertificateRoles, flag=required, module-options=[ securityDomain=\"cert-roles-domain\", rolesProperties=\"USD{jboss.server.config.dir}/cert-roles.properties\",password-stacking=\"useFirstPass\", verifier=\"org.jboss.security.auth.certs.AnyCertVerifier\"])", "user1=roleA user2=roleB,roleC user3=", "CN\\=valid-client,\\ OU\\=JBoss,\\ O\\=Red\\ Hat,\\ L\\=Raleigh,\\ ST\\=NC,\\ C\\=US=Admin", "keytool -printcert -file valid-client.crt Owner: CN=valid-client, OU=JBoss, O=Red Hat, L=Raleigh, ST=NC, C=US", "<jboss-web> <security-domain>cert-roles-domain</security-domain> </jboss-web>", "<web-app> <!-- URL for secured portion of application--> <security-constraint> <web-resource-collection> <web-resource-name>secure</web-resource-name> <url-pattern>/secure/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>All</role-name> </auth-constraint> </security-constraint> <!-- Security roles referenced by this web application --> <security-role> <description>The role that is required to log in to the application</description> <role-name>All</role-name> </security-role> <login-config> <auth-method>CLIENT-CERT</auth-method> <realm-name>cert-roles-domain</realm-name> </login-config> </web-app>", 
"/subsystem=security/security-domain= SECURITY_DOMAIN_NAME :write-attribute(name=cache-type,value= CACHE_TYPE )", "/subsystem=security/security-domain=other:write-attribute(name=cache-type,value=infinispan)", "/subsystem=security/security-domain= SECURITY_DOMAIN_NAME :list-cached-principals", "/subsystem=security/security-domain= SECURITY_DOMAIN_NAME :flush-cache(principal= USERNAME )", "/subsystem=security/security-domain= SECURITY_DOMAIN_NAME :flush-cache", "/subsystem=security/security-domain= SECURITY_DOMAIN_NAME :undefine-attribute(name=cache-type)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_identity_management/legacy_security_subsystem
Chapter 1. Ceph administration
Chapter 1. Ceph administration A Red Hat Ceph Storage cluster is the foundation for all Ceph deployments. After deploying a Red Hat Ceph Storage cluster, there are administrative operations for keeping a Red Hat Ceph Storage cluster healthy and performing optimally. The Red Hat Ceph Storage Administration Guide helps storage administrators to perform such tasks as: How do I check the health of my Red Hat Ceph Storage cluster? How do I start and stop the Red Hat Ceph Storage cluster services? How do I add or remove an OSD from a running Red Hat Ceph Storage cluster? How do I manage user authentication and access controls to the objects stored in a Red Hat Ceph Storage cluster? I want to understand how to use overrides with a Red Hat Ceph Storage cluster. I want to monitor the performance of the Red Hat Ceph Storage cluster. A basic Ceph storage cluster consists of two types of daemons: A Ceph Object Storage Device (OSD) stores data as objects within placement groups assigned to the OSD A Ceph Monitor maintains a master copy of the cluster map A production system will have three or more Ceph Monitors for high availability and typically a minimum of 50 OSDs for acceptable load balancing, data re-balancing and data recovery.
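As a quick, hedged sketch of the first two questions above, the following standard Ceph commands report cluster health and status; they assume access to a node with the client admin keyring, for example from within a cephadm shell.

# Summarize cluster health and explain any warnings or errors
ceph health detail
# Show overall status, including monitor quorum, OSD counts, and placement group states
ceph -s
# Display the CRUSH hierarchy of OSDs, which is useful before adding or removing an OSD
ceph osd tree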
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/administration_guide/ceph-administration
Chapter 14. Monitoring and Logging
Chapter 14. Monitoring and Logging Log management is an important component of monitoring the security status of your OpenStack deployment. Logs provide insight into the day-to-day (business-as-usual) actions of administrators, projects, and instances, in addition to the component activities that comprise your OpenStack deployment. Logs are not only valuable for proactive security and continuous compliance activities, but they are also a valuable information source for investigation and incident response. For example, analyzing the keystone access logs could alert you to failed logins, their frequency, origin IP, and whether the events are restricted to select accounts, among other pertinent information. The director includes intrusion detection capabilities using AIDE, and CADF auditing for keystone. For more information, see Hardening infrastructure and virtualization . 14.1. Harden the monitoring infrastructure Centralized logging systems are a high value target for intruders, as a successful breach could allow them to erase or tamper with the record of events. It is recommended that you harden the monitoring platform with this in mind. In addition, consider making regular backups of these systems, with failover planning in the event of an outage or DoS. 14.2. Example events to monitor Event monitoring is a more proactive approach to securing an environment, providing real-time detection and response. Multiple tools exist which can aid in monitoring. For an OpenStack deployment, you will need to monitor the hardware, the OpenStack services, and the cloud resource usage. This section describes some example events you might need to be aware of. Important This list is not exhaustive. You will need to consider additional use cases that might apply to your specific network, and that you might consider anomalous behavior. Detecting the absence of log generation is an event of high value. Such a gap might indicate a service failure, or even an intruder who has temporarily switched off logging or modified the log level to hide their tracks. Application events, such as unscheduled start or stop events, might have security implications. Operating system events on the OpenStack nodes, such as user logins or restarts. These can provide valuable insight into distinguishing between proper and improper usage of systems. Networking bridges going down. This would be an actionable event due to the risk of service outage. IPtables flushing events on Compute nodes, and the resulting loss of access to instances. To reduce security risks from instances orphaned by the deletion of a user, project, or domain in the Identity service, there is discussion about generating notifications in the system and having OpenStack components respond to these events as appropriate, such as terminating instances, disconnecting attached volumes, and reclaiming CPU and storage resources. Security monitoring controls such as intrusion detection software, antivirus software, and spyware detection and removal utilities can generate logs that show when and how an attack or intrusion took place. These tools can provide a layer of protection when deployed on the OpenStack nodes. Project users might also want to run such tools on their instances.
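As a minimal sketch of the first example above, detecting the absence of log generation, the following commands flag log files that have not been written to recently; the one-hour threshold and the /var/log/containers path are assumptions that should be adapted to your deployment and log collection tooling.

# List log files under /var/log that have not been modified in the last 60 minutes
find /var/log -type f -name '*.log' -mmin +60
# Repeat the check for containerized OpenStack services, which typically log under /var/log/containers
find /var/log/containers -type f -name '*.log' -mmin +60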
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/security_and_hardening_guide/monitoring_and_logging
3.2. Using CPUfreq Governors
3.2. Using CPUfreq Governors One of the most effective ways to reduce power consumption and heat output on your system is to use CPUfreq . CPUfreq - also referred to as CPU speed scaling - allows the clock speed of the processor to be adjusted on the fly. This enables the system to run at a reduced clock speed to save power. The rules for shifting frequencies, whether to a faster or slower clock speed, and when to shift frequencies, are defined by the CPUfreq governor. The governor defines the power characteristics of the system CPU, which in turn affects CPU performance. Each governor has its own unique behavior, purpose, and suitability in terms of workload. This section describes how to choose and configure a CPUfreq governor, the characteristics of each governor, and what kind of workload each governor is suitable for. 3.2.1. CPUfreq Governor Types This section lists and describes the different types of CPUfreq governors available in Red Hat Enterprise Linux 6. cpufreq_performance The Performance governor forces the CPU to use the highest possible clock frequency. This frequency will be statically set, and will not change. As such, this particular governor offers no power saving benefit . It is only suitable for hours of heavy workload, and even then only during times wherein the CPU is rarely (or never) idle. cpufreq_powersave By contrast, the Powersave governor forces the CPU to use the lowest possible clock frequency. This frequency will be statically set, and will not change. As such, this particular governor offers maximum power savings, but at the cost of the lowest CPU performance . The term powersave can sometimes be deceiving, though, since (in principle) a slow CPU on full load consumes more power than a fast CPU that is not loaded. As such, while it may be advisable to set the CPU to use the Powersave governor during times of expected low activity, any unexpected high loads during that time can cause the system to actually consume more power. The Powersave governor is, in simple terms, more of a speed limiter for the CPU than a power saver . It is most useful in systems and environments where overheating can be a problem. cpufreq_ondemand The Ondemand governor is a dynamic governor that allows the CPU to achieve maximum clock frequency when system load is high, and also minimum clock frequency when the system is idle. While this allows the system to adjust power consumption accordingly with respect to system load, it does so at the expense of latency between frequency switching . As such, latency can offset any performance versus power saving benefits offered by the Ondemand governor if the system switches between idle and heavy workloads too often. For most systems, the Ondemand governor can provide the best compromise between heat emission, power consumption, performance, and manageability. When the system is only busy at specific times of the day, the Ondemand governor will automatically switch between maximum and minimum frequency depending on the load without any further intervention. cpufreq_userspace The Userspace governor allows userspace programs (or any process running as root) to set the frequency. This governor is normally used along with the cpuspeed daemon. Of all the governors, Userspace is the most customizable; and depending on how it is configured, it can offer the best balance between performance and consumption for your system. 
cpufreq_conservative Like the Ondemand governor, the Conservative governor also adjusts the clock frequency according to usage. However, while the Ondemand governor does so in a more aggressive manner (that is, from maximum to minimum and back), the Conservative governor switches between frequencies more gradually. This means that the Conservative governor will adjust to a clock frequency that it deems fitting for the load, rather than simply choosing between maximum and minimum. While this can possibly provide significant savings in power consumption, it does so at an even greater latency than the Ondemand governor. Note You can enable a governor using cron jobs. This allows you to automatically set specific governors during specific times of the day. As such, you can specify a low-frequency governor during idle times (for example after work hours) and return to a higher-frequency governor during hours of heavy workload. For instructions on how to enable a specific governor, refer to Procedure 3.2, "Enabling a CPUfreq Governor" in Section 3.2.2, "CPUfreq Setup" .
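The note above describes scheduling governor changes with cron; the commands such a cron job would run are sketched below. The sysfs path is the standard CPUfreq interface, while the cpupower form assumes the cpupowerutils package is installed; the full procedure is covered in Section 3.2.2, "CPUfreq Setup".

# Show which governors the kernel currently offers for CPU 0
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
# Switch CPU 0 to the Ondemand governor through sysfs (run as root, and repeat for each CPU)
echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Alternatively, set the governor on all CPUs at once with the cpupower utility
cpupower frequency-set --governor ondemand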
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/cpufreq_governors
12.3.2. Starting a Service
12.3.2. Starting a Service To start a service, type the following at a shell prompt as root : service service_name start For example, to start the httpd service, type:
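Two closely related commands are often run immediately afterwards; this is a brief sketch using the same httpd service as the example above.

# Confirm that the service is now running
service httpd status
# Optionally, make the service start automatically at boot in runlevels 2 through 5
chkconfig httpd on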
[ "~]# service httpd start Starting httpd: [ OK ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s3-services-running-running
22.6. JBoss Operations Network Remote-Client Server Plugin
22.6. JBoss Operations Network Remote-Client Server Plugin 22.6.1. JBoss Operations Network Plugin Metrics Table 22.1. JBoss Operations Network Traits for the Cache Container (Cache Manager) Trait Name Display Name Description cache-manager-status Cache Container Status The current runtime status of a cache container. cluster-name Cluster Name The name of the cluster. members Cluster Members The names of the members of the cluster. coordinator-address Coordinator Address The coordinator node's address. local-address Local Address The local node's address. version Version The cache manager version. defined-cache-names Defined Cache Names The caches that have been defined for this manager. Table 22.2. JBoss Operations Network Metrics for the Cache Container (Cache Manager) Metric Name Display Name Description cluster-size Cluster Size How many members are in the cluster. defined-cache-count Defined Cache Count How many caches that have been defined for this manager. running-cache-count Running Cache Count How many caches are running under this manager. created-cache-count Created Cache Count How many caches have actually been created under this manager. Table 22.3. JBoss Operations Network Traits for the Cache Trait Name Display Name Description cache-status Cache Status The current runtime status of a cache. cache-name Cache Name The current name of the cache. version Version The cache version. Table 22.4. JBoss Operations Network Metrics for the Cache Metric Name Display Name Description cache-status Cache Status The current runtime status of a cache. number-of-locks-available [LockManager] Number of locks available The number of exclusive locks that are currently available. concurrency-level [LockManager] Concurrency level The LockManager's configured concurrency level. average-read-time [Statistics] Average read time Average number of milliseconds required for a read operation on the cache to complete. hit-ratio [Statistics] Hit ratio The result (in percentage) when the number of hits (successful attempts) is divided by the total number of attempts. elapsed-time [Statistics] Seconds since cache started The number of seconds since the cache started. read-write-ratio [Statistics] Read/write ratio The read/write ratio (in percentage) for the cache. average-write-time [Statistics] Average write time Average number of milliseconds a write operation on a cache requires to complete. hits [Statistics] Number of cache hits Number of cache hits. evictions [Statistics] Number of cache evictions Number of cache eviction operations. remove-misses [Statistics] Number of cache removal misses Number of cache removals where the key was not found. time-since-reset [Statistics] Seconds since cache statistics were reset Number of seconds since the last cache statistics reset. number-of-entries [Statistics] Number of current cache entries Number of entries currently in the cache. stores [Statistics] Number of cache puts Number of cache put operations remove-hits [Statistics] Number of cache removal hits Number of cache removal operation hits. misses [Statistics] Number of cache misses Number of cache misses. success-ratio [RpcManager] Successful replication ratio Successful replications as a ratio of total replications in numeric double format. 
replication-count [RpcManager] Number of successful replications Number of successful replications replication-failures [RpcManager] Number of failed replications Number of failed replications average-replication-time [RpcManager] Average time spent in the transport layer The average time (in milliseconds) spent in the transport layer. commits [Transactions] Commits Number of transaction commits performed since the last reset. prepares [Transactions] Prepares Number of transaction prepares performed since the last reset. rollbacks [Transactions] Rollbacks Number of transaction rollbacks performed since the last reset. invalidations [Invalidation] Number of invalidations Number of invalidations. passivations [Passivation] Number of cache passivations Number of passivation events. activations [Activations] Number of cache entries activated Number of activation events. cache-loader-loads [Activation] Number of cache store loads Number of entries loaded from the cache store. cache-loader-misses [Activation] Number of cache store misses Number of entries that did not exist in the cache store. cache-loader-stores [CacheStore] Number of cache store stores Number of entries stored in the cache stores. Note Gathering of some of these statistics is disabled by default. JBoss Operations Network Metrics for Connectors The metrics provided by the JBoss Operations Network (JON) plugin for Red Hat JBoss Data Grid are for REST and Hot Rod endpoints only. For the REST protocol, the data must be taken from the Web subsystem metrics. For details about each of these endpoints, see the Getting Started Guide . Table 22.5. JBoss Operations Network Metrics for the Connectors Metric Name Display Name Description bytesRead Bytes Read Number of bytes read. bytesWritten Bytes Written Number of bytes written. Note Gathering of these statistics is disabled by default. 22.6.2. JBoss Operations Network Plugin Operations Table 22.6. JBoss ON Plugin Operations for the Cache Operation Name Description Start Cache Starts the cache. Stop Cache Stops the cache. Clear Cache Clears the cache contents. Reset Statistics Resets statistics gathered by the cache. Reset Activation Statistics Resets activation statistics gathered by the cache. Reset Invalidation Statistics Resets invalidation statistics gathered by the cache. Reset Passivation Statistics Resets passivation statistics gathered by the cache. Reset Rpc Statistics Resets replication statistics gathered by the cache. Remove Cache Removes the given cache from the cache-container. Record Known Global Keyset Records the global known keyset to a well-known key for retrieval by the upgrade process. Synchronize Data Synchronizes data from the old cluster to this one using the specified migrator. Disconnect Source Disconnects the target cluster from the source cluster according to the specified migrator. JBoss Operations Network Plugin Operations for the Cache Backups The cache backups used for these operations are configured using cross-datacenter replication. In the JBoss Operations Network (JON) User Interface, each cache backup is the child of a cache. For more information about cross-datacenter replication, see Chapter 31, Set Up Cross-Datacenter Replication Table 22.7. JBoss Operations Network Plugin Operations for the Cache Backups Operation Name Description status Display the site status. bring-site-online Brings the site online. take-site-offline Takes the site offline. 
Cache (Transactions) Red Hat JBoss Data Grid does not support using Transactions in Remote Client-Server mode. As a result, none of the endpoints can use transactions. 22.6.3. JBoss Operations Network Plugin Attributes Table 22.8. JBoss ON Plugin Attributes for the Cache (Transport) Attribute Name Type Description cluster string The name of the group communication cluster. executor string The executor used for the transport. lock-timeout long The timeout period for locks on the transport. The default value is 240000 . machine string A machine identifier for the transport. rack string A rack identifier for the transport. site string A site identifier for the transport. stack string The JGroups stack used for the transport. 22.6.4. Create a New Cache Using JBoss Operations Network (JON) Use the following steps to create a new cache using JBoss Operations Network (JON) for Remote Client-Server mode. Procedure 22.3. Creating a new cache in Remote Client-Server mode Log into the JBoss Operations Network Console. From the JBoss Operations Network console, click Inventory . Select Servers from the Resources list on the left of the console. Select the specific Red Hat JBoss Data Grid server from the servers list. Below the server name, click infinispan and then Cache Containers . Select the desired cache container that will be the parent for the newly created cache. Right-click the selected cache container. For example, clustered . In the context menu, navigate to Create Child and select Cache . Create a new cache in the resource create wizard. Enter the new cache name and proceed to the next step of the wizard. Set the cache attributes in the Deployment Options and click Finish . Note Refresh the view of caches in order to see the newly added resource. It may take several minutes for the Resource to show up in the Inventory.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-jboss_operations_network_remote-client_server_plugin
Chapter 9. EgressService [k8s.ovn.org/v1]
Chapter 9. EgressService [k8s.ovn.org/v1] Description EgressService is a CRD that allows the user to request that the source IP of egress packets originating from all of the pods that are endpoints of the corresponding LoadBalancer Service would be its ingress IP. In addition, it allows the user to request that egress packets originating from all of the pods that are endpoints of the LoadBalancer service would use a different network than the main one. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object EgressServiceSpec defines the desired state of EgressService status object EgressServiceStatus defines the observed state of EgressService 9.1.1. .spec Description EgressServiceSpec defines the desired state of EgressService Type object Property Type Description network string The network which this service should send egress and corresponding ingress replies to. This is typically implemented as VRF mapping, representing a numeric id or string name of a routing table which by omission uses the default host routing. nodeSelector object Allows limiting the nodes that can be selected to handle the service's traffic when sourceIPBy=LoadBalancerIP. When present only a node whose labels match the specified selectors can be selected for handling the service's traffic. When it is not specified any node in the cluster can be chosen to manage the service's traffic. sourceIPBy string Determines the source IP of egress traffic originating from the pods backing the LoadBalancer Service. When LoadBalancerIP the source IP is set to its LoadBalancer ingress IP. When Network the source IP is set according to the interface of the Network, leveraging the masquerade rules that are already in place. Typically these rules specify SNAT to the IP of the outgoing interface, which means the packet will typically leave with the IP of the node. 9.1.2. .spec.nodeSelector Description Allows limiting the nodes that can be selected to handle the service's traffic when sourceIPBy=LoadBalancerIP. When present only a node whose labels match the specified selectors can be selected for handling the service's traffic. When it is not specified any node in the cluster can be chosen to manage the service's traffic. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". 
The requirements are ANDed. 9.1.3. .spec.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 9.1.4. .spec.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 9.1.5. .status Description EgressServiceStatus defines the observed state of EgressService Type object Required host Property Type Description host string The name of the node selected to handle the service's traffic. In case sourceIPBy=Network the field will be set to "ALL". 9.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressservices GET : list objects of kind EgressService /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices DELETE : delete collection of EgressService GET : list objects of kind EgressService POST : create an EgressService /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices/{name} DELETE : delete an EgressService GET : read the specified EgressService PATCH : partially update the specified EgressService PUT : replace the specified EgressService /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices/{name}/status GET : read status of the specified EgressService PATCH : partially update status of the specified EgressService PUT : replace status of the specified EgressService 9.2.1. /apis/k8s.ovn.org/v1/egressservices HTTP method GET Description list objects of kind EgressService Table 9.1. HTTP responses HTTP code Reponse body 200 - OK EgressServiceList schema 401 - Unauthorized Empty 9.2.2. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices HTTP method DELETE Description delete collection of EgressService Table 9.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressService Table 9.3. HTTP responses HTTP code Reponse body 200 - OK EgressServiceList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressService Table 9.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.5. Body parameters Parameter Type Description body EgressService schema Table 9.6. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 201 - Created EgressService schema 202 - Accepted EgressService schema 401 - Unauthorized Empty 9.2.3. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices/{name} Table 9.7. Global path parameters Parameter Type Description name string name of the EgressService HTTP method DELETE Description delete an EgressService Table 9.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressService Table 9.10. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressService Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.12. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressService Table 9.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.14. Body parameters Parameter Type Description body EgressService schema Table 9.15. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 201 - Created EgressService schema 401 - Unauthorized Empty 9.2.4. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices/{name}/status Table 9.16. Global path parameters Parameter Type Description name string name of the EgressService HTTP method GET Description read status of the specified EgressService Table 9.17. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified EgressService Table 9.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.19. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified EgressService Table 9.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.21. Body parameters Parameter Type Description body EgressService schema Table 9.22. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 201 - Created EgressService schema 401 - Unauthorized Empty
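As a quick check of the endpoints listed above, the EgressService resources can be inspected with the oc client or by calling the API path directly. This is a minimal sketch; the namespace, object name, API server address, and token handling are placeholder assumptions rather than values from this reference.

$ oc get egressservices -n <namespace>
$ oc get egressservice <name> -n <namespace> -o yaml

$ TOKEN=$(oc whoami -t)
$ curl -k -H "Authorization: Bearer ${TOKEN}" \
    "https://<api_server>:6443/apis/k8s.ovn.org/v1/namespaces/<namespace>/egressservices"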
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/egressservice-k8s-ovn-org-v1
2.5. turbostat
2.5. turbostat Turbostat is provided by the kernel-tools package. It reports processor topology, frequency, idle power-state statistics, temperature, and power usage on Intel 64 processors. Turbostat is useful for identifying servers that are inefficient in terms of power usage or idle time. It also helps to identify the rate of system management interrupts (SMIs) occurring on the system, and it can be used to verify the effects of power management tuning. Turbostat requires root privileges to run. It also requires processor support for the following: invariant time stamp counters APERF model-specific registers MPERF model-specific registers For more details about turbostat output and how to read it, see Section A.10, "turbostat" . For more information about turbostat , see the man page:
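A couple of hedged invocations are shown below; the interval value and the traced command are arbitrary examples, and the exact option spelling can vary slightly between kernel-tools versions.

# turbostat -i 5          # print counters every 5 seconds until interrupted
# turbostat sleep 10      # measure the whole system while the given command runs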
[ "man turbostat" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-turbostat
Release Notes for Red Hat build of Apache Camel for Spring Boot 3.20
Release Notes for Red Hat build of Apache Camel for Spring Boot 3.20 Red Hat build of Apache Camel for Spring Boot 3.20 What's new in Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel for Spring Boot Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/release_notes_for_red_hat_build_of_apache_camel_for_spring_boot_3.20/index
4.9. Configuring Headless Virtual Machines
4.9. Configuring Headless Virtual Machines You can configure a headless virtual machine when it is not necessary to access the machine via a graphical console. This headless machine will run without graphical and video devices. This can be useful in situations where the host has limited resources, or to comply with virtual machine usage requirements such as real-time virtual machines. Headless virtual machines can be administered via a Serial Console, SSH, or any other service for command line access. Headless mode is applied via the Console tab when creating or editing virtual machines and machine pools, and when editing templates. It is also available when creating or editing instance types. If you are creating a new headless virtual machine, you can use the Run Once window to access the virtual machine via a graphical console for the first run only. See Virtual Machine Run Once settings explained for more details. Prerequisites If you are editing an existing virtual machine, and the Red Hat Virtualization guest agent has not been installed, note the machine's IP address prior to selecting Headless Mode . Before running a virtual machine in headless mode, the GRUB configuration for this machine must be set to console mode; otherwise, the guest operating system's boot process will hang. To set console mode, comment out the splashimage flag in the GRUB menu configuration file: #splashimage=(hd0,0)/grub/splash.xpm.gz serial --unit=0 --speed=9600 --parity=no --stop=1 terminal --timeout=2 serial Note Restart the virtual machine if it is running when selecting the Headless Mode option. Configuring a Headless Virtual Machine Click Compute → Virtual Machines and select a virtual machine. Click Edit . Click the Console tab. Select Headless Mode . All other fields in the Graphical Console section are disabled. Optionally, select Enable VirtIO serial console to enable communicating with the virtual machine via a serial console. This is highly recommended. Reboot the virtual machine if it is running. See Rebooting a Virtual Machine .
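Once Enable VirtIO serial console is selected and the serial console infrastructure is configured in your environment, the machine can be reached without any graphical device. The following connection is a hedged sketch; the Manager FQDN and virtual machine name are placeholders, and the ovirt-vmconsole proxy on port 2222 must already be set up as described in the serial console documentation.

# ssh -t -p 2222 ovirt-vmconsole@manager.example.com connect --vm-name myheadlessvm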
[ "#splashimage=(hd0,0)/grub/splash.xpm.gz serial --unit=0 --speed=9600 --parity=no --stop=1 terminal --timeout=2 serial" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/configuring_headless_machines
14.13.4. Displaying Information about the Virtual CPU Counts of a Domain
14.13.4. Displaying Information about the Virtual CPU Counts of a Domain The virsh vcpucount command requires a domain name or a domain ID. For example: The vcpucount command can take the following options: --maximum displays the maximum number of vCPUs available --active displays the number of currently active vCPUs --live displays the value from the running domain --config displays the value to be configured on the guest virtual machine's next boot --current displays the value according to the current domain state --guest displays the count from the perspective of the guest
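The display options can also be combined to query a single value. The following sketch assumes the same two-vCPU rhel6 guest as the example output shown with this section:

# virsh vcpucount rhel6 --maximum --live
2
# virsh vcpucount rhel6 --current --config
2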
[ "virsh vcpucount rhel6 maximum config 2 maximum live 2 current config 2 current live 2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-displaying_per_guest_virtual_machine_information-displaying_information_about_the_virtual_cpu_counts_of_a_given_domian
Chapter 4. Updating the Overcloud
Chapter 4. Updating the Overcloud This process updates the overcloud. Prerequisites You have updated the undercloud to the latest version. 4.1. Running the overcloud update preparation The update requires running openstack overcloud update prepare command, which performs the following tasks: Updates the overcloud plan to OpenStack Platform 16.0 Prepares the nodes for the update Procedure Source the stackrc file: Run the update preparation command: Include the following options relevant to your environment: Custom configuration environment files ( -e ) If using your own custom roles, include your custom roles ( roles_data ) file ( -r ) If using custom networks, include your composable network ( network_data ) file ( -n ) If the name of your overcloud stack is different to the default name overcloud , include the --stack option in the update preparation command and replace <STACK_NAME> with the name of your stack. Wait until the update preparation completes. 4.2. Running the container image preparation The overcloud requires the latest OpenStack Platform 16.0 container images before performing the update. This involves executing the container_image_prepare external update process. To execute this process, run the openstack overcloud external-update run command against tasks tagged with the container_image_prepare tag. These tasks: Automatically prepare all container image configuration relevant to your environment. Pull the relevant container images to your undercloud, unless you have previously disabled this option. Procedure Source the stackrc file: Run the openstack overcloud external-update run command against tasks tagged with the container_image_prepare tag: 4.3. Updating all Controller nodes This process updates all the Controller nodes to the latest OpenStack Platform 16.0 version. The process involves running the openstack overcloud update run command and including the --limit Controller option to restrict operations to the Controller nodes only. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK_NAME option replacing STACK_NAME with the name of your stack. Procedure Source the stackrc file: Run the update command: Wait until the Controller node update completes. 4.4. Updating all Compute nodes This process updates all Compute nodes to the latest OpenStack Platform 16.0 version. The process involves running the openstack overcloud update run command and including the --nodes Compute option to restrict operations to the Compute nodes only. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK_NAME option replacing STACK_NAME with the name of your stack. Procedure Source the stackrc file: Run the update command: Wait until the Compute node update completes. 4.5. Updating all HCI Compute nodes This process updates the Hyperconverged Infrastructure (HCI) Compute nodes. The process involves: Running the openstack overcloud update run command and including the --nodes ComputeHCI option to restrict operations to the HCI nodes only. Running the openstack overcloud external-update run --tags ceph command to perform an update to a containerized Red Hat Ceph Storage 4 cluster. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK_NAME option replacing STACK_NAME with the name of your stack. Procedure Source the stackrc file: Run the update command: Wait until the node update completes. Run the Ceph Storage update command. 
For example: Wait until the Compute HCI node update completes. 4.6. Updating all Ceph Storage nodes This process updates the Ceph Storage nodes. The process involves: Running the openstack overcloud update run command and including the --nodes CephStorage option to restrict operations to the Ceph Storage nodes only. Running the openstack overcloud external-update run command to run ceph-ansible as an external process and update the Red Hat Ceph Storage 4 containers. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK_NAME option, replacing STACK_NAME with the name of your stack. Procedure Source the stackrc file: Update group nodes. To update all nodes in a group: To update a single node in a group: Note Ensure that you update all nodes if you choose to update nodes individually. The index of the first node in a group is zero (0). For example, to update the first node in a group named CephStorage , the command is: openstack overcloud update run --nodes CephStorage[0] Wait until the node update completes. Run the Ceph Storage container update command: Wait until the Ceph Storage container update completes. 4.7. Performing online database updates Some overcloud components require an online upgrade (or migration) of their database tables. This involves executing the online_upgrade external update process. To execute this process, run the openstack overcloud external-update run command against tasks tagged with the online_upgrade tag. This performs online database updates to the following components: OpenStack Block Storage (cinder) OpenStack Compute (nova) Procedure Source the stackrc file: Run the openstack overcloud external-update run command against tasks tagged with the online_upgrade tag: 4.8. Finalizing the update The update requires a final step to update the overcloud stack. This ensures that the stack's resource structure aligns with a regular deployment of OpenStack Platform 16.0 and allows you to perform standard openstack overcloud deploy functions in the future. Procedure Source the stackrc file: Run the update finalization command: Include the following options relevant to your environment: Custom configuration environment files ( -e ) If using your own custom roles, include your custom roles ( roles_data ) file ( -r ) If using custom networks, include your composable network ( network_data ) file ( -n ) If the name of your overcloud stack is different to the default name overcloud , include the --stack option in the update finalization command and replace <STACK_NAME> with the name of your stack. Wait until the update finalization completes.
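For illustration only, a finalization command for a stack named mycloud with hypothetical custom files might look like the following; the file paths and stack name are placeholders and must match the options you passed during the preparation step:

$ source ~/stackrc
$ openstack overcloud update converge \
    --templates \
    -r /home/stack/templates/roles_data.yaml \
    -n /home/stack/templates/network_data.yaml \
    -e /home/stack/templates/custom-environment.yaml \
    --stack mycloud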
[ "source ~/stackrc", "openstack overcloud update prepare --templates -r <ROLES DATA FILE> -n <NETWORK DATA FILE> -e <ENVIRONMENT FILE> -e <ENVIRONMENT FILE> --stack <STACK_NAME> ...", "source ~/stackrc", "openstack overcloud external-update run --stack STACK_NAME --tags container_image_prepare", "source ~/stackrc", "openstack overcloud update run --stack STACK_NAME --limit Controller --playbook all", "source ~/stackrc", "openstack overcloud update run --stack STACK_NAME --limit Compute --playbook all", "source ~/stackrc", "openstack overcloud update run --stack _STACK_NAME_ --limit ComputeHCI --playbook all", "openstack overcloud external-update run --stack _STACK_NAME_ --tags ceph", "source ~/stackrc", "openstack overcloud update run --nodes <GROUP_NAME>", "openstack overcloud update run --nodes <GROUP_NAME> [NODE_INDEX]", "openstack overcloud external-update run --stack _STACK_NAME_ --tags ceph", "source ~/stackrc", "openstack overcloud external-update run --stack STACK_NAME --tags online_upgrade", "source ~/stackrc", "openstack overcloud update converge --templates -r <ROLES DATA FILE> -n <NETWORK DATA FILE> -e <ENVIRONMENT FILE> -e <ENVIRONMENT FILE> --stack <STACK_NAME> ..." ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/keeping_red_hat_openstack_platform_updated/assembly-updating_the_overcloud
Chapter 4. Deploying Red Hat Quay
Chapter 4. Deploying Red Hat Quay After you have configured your Red Hat Quay deployment, you can deploy it using the following procedures. Prerequisites The Red Hat Quay database is running. The Redis server is running. 4.1. Creating the YAML configuration file Use the following procedure to deploy Red Hat Quay locally. Procedure Enter the following command to create a minimal config.yaml file that is used to deploy the Red Hat Quay container: USD touch config.yaml Copy and paste the following YAML configuration into the config.yaml file: BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 CREATE_NAMESPACE_ON_PUSH: true DATABASE_SECRET_KEY: a8c2744b-7004-4af2-bcee-e417e7bdd235 DB_URI: postgresql://quayuser:[email protected]:5432/quay DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default FEATURE_MAILING: false SECRET_KEY: e9bd34f4-900c-436a-979e-7530e5d74ac8 SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 Create a directory to copy the Red Hat Quay configuration bundle to: USD mkdir USDQUAY/config Copy the Red Hat Quay configuration file to the directory: USD cp -v config.yaml USDQUAY/config 4.1.1. Configuring a Red Hat Quay superuser You can optionally add a superuser by editing the config.yaml file to add the necessary configuration fields. The list of superuser accounts is stored as an array in the field SUPER_USERS . Superusers have the following capabilities: User management Organization management Service key management Change log transparency Usage log management Globally-visible user message creation Procedure Add the SUPER_USERS array to the config.yaml file: SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin 1 ... 1 If following this guide, use quayadmin . 4.2. Prepare local storage for image data Use the following procedure to set your local file system to store registry images. Procedure Create a local directory that will store registry images by entering the following command: USD mkdir USDQUAY/storage Set the directory to store registry images: USD setfacl -m u:1001:-wx USDQUAY/storage 4.3. Deploy the Red Hat Quay registry Use the following procedure to deploy the Quay registry container. Procedure Enter the following command to start the Quay registry container, specifying the appropriate volumes for configuration data and local storage for image data:
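$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
    --name=quay \
    -v $QUAY/config:/conf/stack:Z \
    -v $QUAY/storage:/datastorage:Z \
    registry.redhat.io/quay/quay-rhel8:v3.13.3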
[ "touch config.yaml", "BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 CREATE_NAMESPACE_ON_PUSH: true DATABASE_SECRET_KEY: a8c2744b-7004-4af2-bcee-e417e7bdd235 DB_URI: postgresql://quayuser:[email protected]:5432/quay DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default FEATURE_MAILING: false SECRET_KEY: e9bd34f4-900c-436a-979e-7530e5d74ac8 SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379", "mkdir USDQUAY/config", "cp -v config.yaml USDQUAY/config", "SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin 1", "mkdir USDQUAY/storage", "setfacl -m u:1001:-wx USDQUAY/storage", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/proof_of_concept_-_deploying_red_hat_quay/poc-deploying-quay
8.210. sapconf
8.210. sapconf 8.210.1. RHBA-2014:1424 - sapconf bug fix update An updated sapconf package that fixes several bugs is now available for Red Hat Enterprise Linux 6. The sapconf package contains a script that checks the basic installation of Red Hat Enterprise Linux and modifies it according to SAP requirements. The script ensures that all necessary packages are installed and that configuration parameters are set correctly to run SAP software. Bug Fixes BZ# 1024356 In certain situations, the sapconf script created the /etc/hosts file in an incorrect format, so the file did not return the fully qualified domain name (FQDN). As a consequence, SAP installation failed unexpectedly. With this update, the script has been edited to handle situations in which an incorrect /etc/hosts file was created. In addition, when the FQDN is not returned properly, the script exits with the error code 4. BZ# 1025187 Previously, the manual page for the sapconf script was missing. This update adds the missing manual page to the sapconf package. BZ# 1040617 Due to a bug in the underlying source code, the sapconf script was unable to verify whether a virtual guest was a VMware guest. Consequently, an attempt to use sapconf on such a guest failed. With this update, sapconf recognizes VMware guests as expected. BZ# 1051017 Red Hat Enterprise Linux reads limit files in the following order: the /etc/security/limits.conf file first, and then the limit files located in the /etc/security/limits.d/ directory, sorted according to the C locale. When an entry is specified in two or more files, the entry that is read last takes effect. Previously, the sapconf script wrote entries to /etc/security/limits.conf, which could cause the sapconf entries to be overwritten by the same entries in files located in /etc/security/limits.d/. With this update, a separate sapconf limit file has been added to /etc/security/limits.d/ to prevent this problem. BZ# 1083651 When the sapconf script updated the /etc/hosts file, the permissions of that file were incorrectly modified. This update applies a patch to fix this bug, and sapconf no longer overwrites the file's permissions. Users of sapconf are advised to upgrade to this updated package, which fixes these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/sapconf
4.9. Red Hat Virtualization 4.3 Batch Update 8 (ovirt-4.3.11)
4.9. Red Hat Virtualization 4.3 Batch Update 8 (ovirt-4.3.11) 4.9.1. Bug Fix These bugs were fixed in this release of Red Hat Virtualization: BZ# 1830762 Previously, importing an OVA as a Template using the ovirt-engine-sdk's uploading script failed with a null pointer exception because some storage-related values were not set correctly. The current release fixes this issue. It adds code that checks the storage values and, if needed, sets them using values from the image object. Now, importing the OVA this way succeeds. BZ# 1842377 Previously, while creating virtual machine snapshots, if the VDSM's command to freeze a virtual machines' file systems exceeded the snapshot command's 3-minute timeout period, creating snapshots failed, causing virtual machines and disks to lock. The current release adds two key-value pairs to the engine configuration. You can configure these using the engine-config tool: Setting LiveSnapshotPerformFreezeInEngine to true enables the Manager to freeze VMs' file systems before it creates a snapshot of them. Setting LiveSnapshotAllowInconsistent to true enables the Manager to continue creating snapshots if it fails to freeze VMs' file systems. BZ# 1845747 Previously, after upgrading to 4.3 and updating the cluster, the virtual machine (VM) tab in the Administration Portal was extremely slow until you restarted the VMs. This issue happened because updating the page recalculated the list of changed fields for every VM on the VM list page (read from the snapshot). The current release fixes this issue. It eliminates the performance impact by calculating the changed fields only once when the run snapshot is created. BZ# 1848877 In versions, engine-backup --mode=verify passed even if pg_restore emitted errors. The current release fixes this issue. The engine-backup --mode=verify command correctly fails if pg_restore emits errors. BZ# 1849558 Previously, when creating large numbers of virtual machines (~2300 VMs), the associated Data Centers became unresponsive, and the hosts did not have Storage Pool Managers (SPM). With this release, large scale deployment of virtual machines succeeds without errors. BZ# 1850920 Previously, creating a live snapshot with memory while LiveSnapshotPerformFreezeInEngine was set to True resulted in a virtual machine file system that is frozen when previewing or committing the snapshot with memory restore. In this release, the virtual machine runs successfully after creating a preview snapshot from a memory snapshot. BZ# 1850963 Previously, installing guest agents for Windows guests from rhv-guest-tools-iso-4.3-12.el7ev using rhev-apt.exe failed because it could not verify a filename that exceeded Windows' 63-character limit. The current release fixes this issue. It renames the file with a shorter name, so the installation process works. BZ# 1851854 Previously, the slot parameter was parsed as a string, causing disk rollback to fail during the creation of a virtual machine from a template when using an Ansible script. Note that there was no such failure when using the Administration Portal to create a virtual machine from a template. With this update, the slot parameter is parsed as an int, so disk rollback and virtual machine creation succeed. BZ# 1852314 Previously, exporting a virtual machine or template to an OVA file incorrectly sets its format in the OVF metadata file to "RAW". This issue causes problems using the OVA file. The current release fixes this issue. 
Exporting to OVA sets the format in the OVF metadata file to "COW", which represents the disk's actual format, qcow2. BZ# 1852315 Host capabilities retrieval was failing for certain non-NUMA CPU topologies. That has been fixed and host capabilities should be correctly reported now for those topologies. BZ# 1855213 After updating the packages on a conversion host, it stopped displaying conversion progress. The current release fixes this issue. The conversion host reports its progress performing conversions, including on the latest RHEL 7-based hosts. BZ# 1855283 Previously, during RHHI-V deployment of three hosts using ovirt-ansible-hosted-engine-setup, the host for the Self-Hosted Engine was added to the default cluster, but the remaining two hosts were not added. In this release, deployment with ovirt-ansible-hosted-engine-setup adds all hosts to the cluster. 4.9.2. Enhancements This release of Red Hat Virtualization features the following enhancements: BZ# 1698870 The Red Hat knowledge base article, "Monitoring RHV with a standalone Elasticsearch instance," has been updated, and is available at https://access.redhat.com/articles/4921101 BZ# 1842522 With this enhancement, while deploying RHEL 7-based hosts, you can configure SPICE encryption so that: - Only TLSv1.2 and newer protocols are enabled - Available ciphers are limited as described in BZ1563271 To apply this enhancement to existing hosts, an administrator puts each host into Maintenance mode, performs a Reinstall, and activates each host. For details, search for "Reinstalling Hosts" in the documentation. 4.9.3. Known Issues These known issues exist in Red Hat Virtualization at this time: BZ# 1765993 When using the REST API to restore a virtual machine snapshot, the request returns before the snapshot commit has finished. Workaround: Confirm that the commit operation is complete before attempting to restart the virtual machine. 4.9.4. Deprecated Functionality The items in this section are either no longer supported, or will no longer be supported in a future release. BZ# 1745324 The rhv-guest-tools-iso package is deprecated and will be replaced in Red Hat Virtualization 4.4 by virtio-win-guest-tools iso image. For information, see How can I install RHV Windows guest tools in RHEV 4.4? . BZ# 1862793 Red Hat Virtualization (RHV) 4.3 with Metrics Store installations that are upgraded to RHV 4.4 are not supported. Upgrading an existing RHV 4.3 Metrics Store to RHV 4.4 may continue to work, but has not been tested and is not supported. Administrators are encouraged to use the new Grafana dashboards with Data Warehouse. Administrators can also send metrics and logs to a standalone Elasticsearch instance. For more information, see https://access.redhat.com/solutions/5161761
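For reference, the two snapshot keys described in BZ# 1842377 above are set with the engine-config tool on the Manager machine. This is a hedged sketch; whether these values are appropriate depends on your environment, and the ovirt-engine service typically needs a restart before the change takes effect.

# engine-config -s LiveSnapshotPerformFreezeInEngine=true
# engine-config -s LiveSnapshotAllowInconsistent=true
# systemctl restart ovirt-engine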
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/release_notes/red_hat_virtualization_4_3_batch_update_8_ovirt_4_3_11
Chapter 16. Overview of migration to Red Hat build of Kogito microservices
Chapter 16. Overview of migration to Red Hat build of Kogito microservices You can migrate the decision service artifacts that you developed in Business Central to Red Hat build of Kogito microservices. Red Hat build of Kogito currently supports migration for the following types of decision services: Decision Model and Notation (DMN) models : You migrate DMN-based decision services by moving the DMN resources from KJAR artifacts to the respective Red Hat build of Kogito archetype. Predictive Model Markup Language (PMML) models : You migrate PMML-based prediction services by moving the PMML resources from KJAR artifacts to the respective Red Hat build of Kogito archetype. Drools Rule Language (DRL) rules : You migrate DRL-based decision services by enclosing them in a Red Hat build of Quarkus REST endpoint. This approach to migration enables you to use major Quarkus features, such as hot reload and native compilation. The Quarkus features and the programming model of Red Hat build of Kogito enable the automatic generation of the Quarkus REST endpoints for implementation in your applications and services.
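As a hedged illustration of the migration target, a Red Hat build of Kogito project can be generated from a Maven archetype and the DMN, PMML, or DRL resources then copied into its src/main/resources directory. The archetype coordinates and version placeholder below are assumptions and should be checked against the Red Hat build of Kogito release you are migrating to:

$ mvn archetype:generate \
    -DarchetypeGroupId=org.kie.kogito \
    -DarchetypeArtifactId=kogito-quarkus-archetype \
    -DarchetypeVersion=<kogito_version> \
    -DgroupId=org.acme -DartifactId=decision-service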
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/con-migration-to-kogito-overview_migration-kogito-microservices
Chapter 3. Creating and executing DMN and BPMN models using Maven
Chapter 3. Creating and executing DMN and BPMN models using Maven You can use Maven archetypes to develop DMN and BPMN models in VS Code using the Red Hat Process Automation Manager VS Code extension instead of Business Central. You can then integrate your archetypes with your Red Hat Process Automation Manager decision and process services in Business Central as needed. This method of developing DMN and BPMN models is helpful for building new business applications using the Red Hat Process Automation Manager VS Code extension. Procedure In a command terminal, navigate to a local folder where you want to store the new Red Hat Process Automation Manager project. Enter the following command to use a Maven archetype to generate a project within a defined folder: Generating a project using Maven archetype This command generates a Maven project with required dependencies and generates required directories and files to build your business application. You can use the Git version control system (recommended) when developing a project. If you want to generate multiple projects in the same directory, specify the artifactId and groupId of the generated business application by adding -DgroupId=<groupid> -DartifactId=<artifactId> to the command. In your VS Code IDE, click File , select Open Folder , and navigate to the folder that is generated using the command. Before creating the first asset, set a package for your business application, for example, org.kie.businessapp , and create respective directories in the following paths: PROJECT_HOME/src/main/java PROJECT_HOME/src/main/resources PROJECT_HOME/src/test/resources For example, you can create PROJECT_HOME/src/main/java/org/kie/businessapp for the org.kie.businessapp package. Use VS Code to create assets for your business application. You can create the assets supported by the Red Hat Process Automation Manager VS Code extension in the following ways: To create a business process, create a new file with a .bpmn or .bpmn2 extension in the PROJECT_HOME/src/main/resources/org/kie/businessapp directory, such as Process.bpmn . To create a DMN model, create a new file with a .dmn extension in the PROJECT_HOME/src/main/resources/org/kie/businessapp directory, such as AgeDecision.dmn . To create a test scenario simulation model, create a new file with a .scesim extension in the PROJECT_HOME/src/test/resources/org/kie/businessapp directory, such as TestAgeScenario.scesim . After you create the assets in your Maven archetype, navigate to the root directory of the project (the directory that contains pom.xml ) in the command line and run the following command to build the knowledge JAR (KJAR) of your project: If the build fails, address any problems described in the command line error messages and try again to validate the project until the build is successful. If the build is successful, you can find the artifact of your business application in the PROJECT_HOME/target directory. Note Use the mvn clean install command often to validate your project after each major change during development. You can deploy the generated knowledge JAR (KJAR) of your business application on a running KIE Server using the REST API. For more information about using the REST API, see Interacting with Red Hat Process Automation Manager using KIE APIs .
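A hedged sketch of such a deployment call is shown below; the KIE Server URL, credentials, container ID, and GAV coordinates are placeholders that must match your environment and the KJAR you built:

$ curl -u kieserveruser:password -H "Content-Type: application/json" -X PUT \
    -d '{"container-id":"businessapp","release-id":{"group-id":"org.kie.businessapp","artifact-id":"businessapp","version":"1.0.0-SNAPSHOT"}}' \
    "http://localhost:8080/kie-server/services/rest/server/containers/businessapp"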
[ "mvn archetype:generate -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-kjar-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024", "mvn clean install" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/proc-dmn-bpmn-maven-create_business-processes
5.7. Adding Cluster Resources
5.7. Adding Cluster Resources To specify a device for a cluster service, follow these steps: On the Resources property of the Cluster Configuration Tool , click the Create a Resource button. Clicking the Create a Resource button causes the Resource Configuration dialog box to be displayed. At the Resource Configuration dialog box, under Select a Resource Type , click the drop-down box. At the drop-down box, select a resource to configure. The resource options are described as follows: GFS Name - Create a name for the file system resource. Mount Point - Choose the path to which the file system resource is mounted. Device - Specify the device file associated with the file system resource. Options - Mount options. File System ID - When creating a new file system resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you click OK at the Resource Configuration dialog box. If you need to assign a file system ID explicitly, specify it in this field. Force Unmount checkbox - If checked, forces the file system to unmount. The default setting is unchecked. Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount. With GFS resources, the mount point is not unmounted at service tear-down unless this box is checked. File System Name - Create a name for the file system resource. File System Type - Choose the file system for the resource using the drop-down menu. Mount Point - Choose the path to which the file system resource is mounted. Device - Specify the device file associated with the file system resource. Options - Mount options. File System ID - When creating a new file system resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you click OK at the Resource Configuration dialog box. If you need to assign a file system ID explicitly, specify it in this field. Checkboxes - Specify mount and unmount actions when a service is stopped (for example, when disabling or relocating a service): Force unmount - If checked, forces the file system to unmount. The default setting is unchecked. Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount. Reboot host node if unmount fails - If checked, reboots the node if unmounting this file system fails. The default setting is unchecked. Check file system before mounting - If checked, causes fsck to be run on the file system before mounting it. The default setting is unchecked. IP Address IP Address - Type the IP address for the resource. Monitor Link checkbox - Check the box to enable or disable link status monitoring of the IP address resource NFS Mount Name - Create a symbolic name for the NFS mount. Mount Point - Choose the path to which the file system resource is mounted. Host - Specify the NFS server name. Export Path - NFS export on the server. NFS and NFS4 options - Specify NFS protocol: NFS - Specifies using NFSv3 protocol. The default setting is NFS . NFS4 - Specifies using NFSv4 protocol. Options - Mount options. For more information, refer to the nfs (5) man page. Force Unmount checkbox - If checked, forces the file system to unmount. The default setting is unchecked. Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount. NFS Client Name - Enter a name for the NFS client resource. Target - Enter a target for the NFS client resource. 
Supported targets are hostnames, IP addresses (with wild-card support), and netgroups. Read-Write and Read Only options - Specify the type of access rights for this NFS client resource: Read-Write - Specifies that the NFS client has read-write access. The default setting is Read-Write . Read Only - Specifies that the NFS client has read-only access. Options - Additional client access rights. For more information, refer to the exports (5) man page, General Options NFS Export Name - Enter a name for the NFS export resource. Script Name - Enter a name for the custom user script. File (with path) - Enter the path where this custom script is located (for example, /etc/init.d/ userscript ) Samba Service Name - Enter a name for the Samba server. Workgroup - Enter the Windows workgroup name or Windows NT domain of the Samba service. Note When creating or editing a cluster service, connect a Samba-service resource directly to the service, not to a resource within a service. That is, at the Service Management dialog box, use either Create a new resource for this service or Add a Shared Resource to this service ; do not use Attach a new Private Resource to the Selection or Attach a Shared Resource to the selection . When finished, click OK . Choose File => Save to save the change to the /etc/cluster/cluster.conf configuration file.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-config-service-dev-ca
Chapter 3. Analyzing your projects with the MTA plugin
Chapter 3. Analyzing your projects with the MTA plugin You can analyze your projects with the MTA plugin by creating a run configuration, running an analysis, and then reviewing and resolving the migration issues detected by the MTA plugin. 3.1. Creating a run configuration You can create a run configuration in the Issue Explorer . A run configuration specifies the project to analyze, the migration path, and additional options. You can create multiple run configurations. Each run configuration must have a unique name. Prerequisite You must import your projects into the Eclipse IDE. Procedure In the Issue Explorer , click the MTA icon to create a run configuration. On the Input tab, complete the following fields: Select a migration path. Beside the Projects field, click Add and select one or more projects. Beside the Packages field, click Add and select one or more packages. Note Specifying the packages for analysis reduces the run time. If you do not select any packages, all packages in the project are scanned. On the Options tab, you can select Generate Report to generate an HTML report. The report is displayed in the Report tab and saved as a file. Other options are displayed. See About MTA command-line arguments in the CLI Guide for details. On the Rules tab, you can select custom rulesets that you have imported or created for the MTA plugin. Click Run to start the analysis. 3.2. Analyzing projects You can analyze your projects by running the MTA plugin with a saved run configuration. Procedure In the MTA perspective, click the Run button and select a run configuration. The MTA plugin analyzes your projects. The Issue Explorer displays the migration issues that are detected with the ruleset. When you have finished analyzing your projects, stop the MTA server in the Issue Explorer to conserve memory. 3.3. Reviewing issues You can review the issues identified by the MTA plugin. Procedure Click Window → Show View → Issue Explorer . Optional: Filter the issues by clicking the Options menu, selecting Group By , and then selecting an option. Right-click an issue and select Issue Details to view information about the issue, including its severity and how to address it. The following icons indicate the severity and state of an issue: Table 3.1. Issue icons Icon Description The issue must be fixed for a successful migration. The issue is optional to fix for migration. The issue might need to be addressed during migration. The issue was resolved. The issue is stale. The code marked as an issue was modified since the last time that MTA identified it as an issue. A quick fix is available for this issue, which is mandatory to fix for a successful migration. A quick fix is available for this issue, which is optional to fix for migration. A quick fix is available for this issue, which may potentially be an issue during migration. Double-click an issue to open the associated line of code in an editor. 3.4. Resolving issues You can resolve issues detected by the MTA plugin by performing one of the following actions: You can double-click the issue to open it in an editor and edit the source code. The issue displays a Stale icon until the next time you run the MTA plugin. You can right-click the issue and select Mark as Fixed . If the issue displays a Quick Fix icon, you can right-click the issue and select Preview Quick Fix and then Apply Quick Fix .
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/eclipse_plugin_guide/analyzing-projects-with-plugin_eclipse-code-ready-studio-guide
5.2. In-Service Software Upgrade from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5
5.2. In-Service Software Upgrade from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5 Important Upgrade all Red Hat Gluster Storage servers before updating clients. In-service software upgrade refers to the ability to progressively update a Red Hat Gluster Storage Server cluster with a new version of the software without taking the volumes hosted on the cluster offline. In most cases normal I/O operations on the volume continue even when the cluster is being updated. I/O that uses CTDB may pause for the duration of an upgrade or update. This affects clients using Gluster NFS or Samba. Note After performing inservice upgrade of NFS-Ganesha, the new configuration file is saved as " ganesha.conf.rpmnew " in /etc/ganesha folder. The old configuration file is not overwritten during the inservice upgrade process. However, post upgradation, you have to manually copy any new configuration changes from " ganesha.conf.rpmnew " to the existing ganesha.conf file in /etc/ganesha folder. 5.2.1. Pre-upgrade Tasks Ensure you perform the following steps based on the set-up before proceeding with the in-service software upgrade process. 5.2.1.1. Upgrade Requirements for Red Hat Gluster Storage 3.5 The upgrade requirements to upgrade to Red Hat Gluster Storage 3.5 from the preceding update are as follows: In-service software upgrade is supported for pure and distributed versions of arbiter, erasure-coded (disperse) and three way replicated volume types. It is not supported for a pure distributed volume. If you want to use snapshots for your existing environment, each brick must be an independent thin provisioned logical volume (LV). If you do not plan to use snapshots, thickly provisioned volumes remain supported. A Logical Volume that contains a brick must not be used for any other purpose. Linear LVM and thin LV are supported with Red Hat Gluster Storage 3.4 and later. For more information, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/logical_volume_manager_administration/index#LVM_components When server-side quorum is enabled, ensure that bringing one node down does not violate server-side quorum. Add dummy peers to ensure the server-side quorum is not violated until the completion of rolling upgrade using the following command: Note If you have a geo-replication session, then to add a node follow the steps mentioned in the section Starting Geo-replication for a New Brick or New Node in the Red Hat Gluster Storage Administration Guide . For example, when the server-side quorum percentage is set to the default value (>50%), for a plain replicate volume with two nodes and one brick on each machine, a dummy node that does not contain any bricks must be added to the trusted storage pool to provide high availability of the volume using the command mentioned above. In a three node cluster, if the server-side quorum percentage is set to 77%, bringing down one node would violate the server-side quorum. In this scenario, you have to add two dummy nodes to meet server-side quorum. Stop any geo-replication sessions running between the master and slave. Ensure that there are no pending self-heals before proceeding with in-service software upgrade using the following command: Ensure the Red Hat Gluster Storage server is registered to the required channels. On Red Hat Enterprise Linux 6: On Red Hat Enterprise Linux 7: On Red Hat Enterprise Linux 8: To subscribe to the channels, run the following command: 5.2.1.2. 
Restrictions for In-Service Software Upgrade The following lists some of the restrictions for in-service software upgrade: In-service upgrade for NFS-Ganesha clusters is supported only from Red Hat Gluster Storage 3.4 and beyond. If you are upgrading from Red Hat Gluster Storage 3.1 and you use NFS-Ganesha, offline upgrade to Red Hat Gluster Storage 3.4. Then use in-service upgrade method to upgrade to Red Hat Gluster Storage 3.5. Erasure coded (dispersed) volumes can be upgraded while in-service only if the disperse.optimistic-change-log , disperse.eager-lock and disperse.other-eager-lock options are set to off . Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations. Post upgrade the values can be changed to resemble the pre-upgrade values on erasure coded volumes, but ensure that disperse.optimistic-change-log and disperse.other-eager-lock options are set to on . Ensure that the system workload is low before performing the in-service software upgrade, so that the self-heal process does not have to heal too many entries during the upgrade. Also, with high system workload healing is time-consuming. Do not perform any volume operations on the Red Hat Gluster Storage server. Do not change hardware configurations. Do not run mixed versions of Red Hat Gluster Storage for an extended period of time. For example, do not have a mixed environment of Red Hat Gluster Storage 3.3, Red Hat Gluster Storage 3.4, and Red Hat Gluster Storage 3.5 for a prolonged time. Do not combine different upgrade methods. It is not recommended to use in-service software upgrade for migrating to thin provisioned volumes, but to use offline upgrade scenario instead. For more information see, Section 5.1, "Offline Upgrade to Red Hat Gluster Storage 3.5" 5.2.1.3. Configuring repo for Upgrading using ISO To configure the repo to upgrade using ISO, execute the following steps: Note Upgrading Red Hat Gluster Storage using ISO can be performed only from the immediately preceding release. This means that upgrading to Red Hat Gluster Storage 3.5 using ISO can only be done from Red Hat Gluster Storage 3.4. For a complete list of supported Red Hat Gluster Storage releases, see Section 1.5, "Red Hat Gluster Storage Software Components and Versions" . Mount the ISO image file under any directory using the following command: For example: Set the repo options in a file in the following location: Add the following information to the repo file: 5.2.1.4. Preparing and Monitoring the Upgrade Activity Before proceeding with the in-service software upgrade, prepare and monitor the following processes: Check the peer and volume status to ensure that all peers are connected and there are no active volume tasks. Check the rebalance status using the following command: If you need to upgrade an erasure coded (dispersed) volume, set the disperse.optimistic-change-log , disperse.eager-lock and disperse.other-eager-lock options to off . Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations. Ensure that there are no pending self-heals by using the following command: The following example shows no pending self-heals. 5.2.2. Service Impact of In-Service Upgrade In-service software upgrade impacts the following services. Ensure you take the required precautionary measures. 
Gluster NFS (Deprecated) Warning Gluster-NFS is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends the use of Gluster-NFS, and does not support its use in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. When you use Gluster NFS (Deprecated) to mount a volume, any new or outstanding file operations on that file system will hang uninterruptedly during in-service software upgrade until the server is upgraded. Samba / CTDB Ongoing I/O on Samba shares will fail as the Samba shares will be temporarily unavailable during the in-service software upgrade, hence it is recommended to stop the Samba service using the following command: Distribute Volume In-service software upgrade is not supported for distributed volume. If you have a distributed volume in the cluster, stop that volume for the duration of the upgrade. Virtual Machine Store The virtual machine images are likely to be modified constantly. The virtual machine listed in the output of the volume heal command does not imply that the self-heal of the virtual machine is incomplete. It could mean that the modifications on the virtual machine are happening constantly. Hence, if you are using a gluster volume for storing virtual machine images, then it is recommended to power-off all virtual machine instances before in-service software upgrade. Important It is recommended to turn off all virtual machine instances to minimize the changes that occur during an In-Service Upgrade. If you choose to keep the VM instances up and running , the time taken to heal will increase and at times may cause a situation where the pending heal remains a constant number and takes a long time to close to 0. 5.2.3. In-Service Software Upgrade Note Only offline mode is supported for samba. The following steps have to be performed on each node of the replica pair: Back up the following configuration directory and files in a location that is not on the operating system partition. /var/lib/glusterd /etc/samba /etc/ctdb /etc/glusterfs /var/lib/samba /var/lib/ctdb /var/run/gluster/shared_storage/nfs-ganesha Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . If you use NFS-Ganesha, back up the following files from all nodes: /run/gluster/shared_storage/nfs-ganesha/exports/export.*.conf /etc/ganesha/ganesha.conf /etc/ganesha/ganesha-ha.conf Important If you are updating from RHEL 8.3 to RHEL 8.4, then follow these additional steps. On one node in the cluster, edit /etc/corosync/corosync.conf. Add a line "token: 3000" to the totem stanza, for example: Run `pcs cluster sync`. It is optional to verify /etc/corosync/corosync.conf on all nodes has the new token: 3000 line. Run `pcs cluster reload corosync` Run `corosync-cmapctl | grep totem.token, confirm that "runtime.config.totem.token (u32) = 3000" as output. If the node is part of an NFS-Ganesha cluster, place the node in standby mode. Ensure that there are no pending self-heal operations. If this node is part of an NFS-Ganesha cluster: Disable the PCS cluster and verify that it has stopped. Stop the nfs-ganesha service. Stop all gluster services on the node and verify that they have stopped. Verify that your system is not using the legacy Red Hat Classic update software. If your system uses this legacy software, migrate to Red Hat Subscription Manager and verify that your status has changed when migration is complete. 
Update the server using the following command: Important Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 7: Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 8: If the volumes are thick provisioned, and you plan to use snapshots, perform the following steps to migrate to thin provisioned volumes: Note Migrating from thick provisioned volume to thin provisioned volume during in-service software upgrade takes a significant amount of time based on the data you have in the bricks. If you do not plan to use snapshots, you can skip this step. However, if you plan to use snapshots on your existing environment, the offline method to upgrade is recommended. For more information regarding offline upgrade, see Section 5.1, "Offline Upgrade to Red Hat Gluster Storage 3.5" Contact a Red Hat Support representative before migrating from thick provisioned volumes to thin provisioned volumes using in-service software upgrade. Unmount all the bricks associated with the volume by executing the following command: Remove the LVM associated with the brick by executing the following command: For example: Remove the volume group by executing the following command: For example: Remove the physical volume by executing the following command: If the physical volume (PV) is not created, then create the PV for a RAID 6 volume by executing the following command, else proceed with the step: For more information, see Formatting and Mounting Bricks in the Red Hat Gluster Storage 3.5 Administration Guide . Create a single volume group from the PV by executing the following command: For example: Create a thinpool using the following command: For example: Create a thin volume from the pool by executing the following command: For example: Create filesystem in the new volume by executing the following command: For example: The back-end is now converted to a thin provisioned volume. Mount the thin provisioned volume to the brick directory and setup the extended attributes on the bricks. For example: Disable glusterd. This prevents it starting during boot time, so that you can ensure the node is healthy before it rejoins the cluster. Reboot the server. Important Perform this step only for each thick provisioned volume that has been migrated to thin provisioned volume in the step. Change the Automatic File Replication extended attributes from another node, so that the heal process is executed from a brick in the replica subvolume to the thin provisioned brick. Create a FUSE mount point to edit the extended attributes. Create a new directory on the mount point, and ensure that a directory with such a name is not already present. Delete the directory and set the extended attributes. Ensure that the extended attributes of the brick in the replica subvolume is not set to zero. In the following example, the extended attribute trusted.afr.repl3-client-1 for /dev/RHS_vg/brick2 is not set to zero: Start the glusterd service. Verify that you have upgraded to the latest version of Red Hat Gluster Storage. Ensure that all bricks are online. For example: Start self-heal on the volume. Ensure that self-heal on the volume is complete. The following example shows a completed self heal operation. Verify that shared storage is mounted. If this node is part of an NFS-Ganesha cluster: If the system is managed by SELinux, set the ganesha_use_fusefs Boolean to on . Start the NFS-Ganesha service. Enable and start the cluster. Release the node from standby mode. 
Verify that the pcs cluster is running, and that the volume is being exported correctly after upgrade. NFS-ganesha enters a short grace period after performing these steps. I/O operations halt during this grace period. Wait until you see NFS Server Now NOT IN GRACE in the ganesha.log file before continuing. Optionally, enable the glusterd service to start at boot time. Repeat the above steps on the other node of the replica pair. In the case of a distributed-replicate setup, repeat the above steps on all the replica pairs. Important If you are updating from RHEL 8.3 to RHEL 8.4, then follow these additional steps. Restore the totem-token timeout to the original value, for example, by deleting the token: 3000 line from /etc/corosync/corosync.conf. Run `pcs cluster sync`. Optionally, verify that the token: 3000 line has been removed from /etc/corosync/corosync.conf on all nodes. Run `pcs cluster reload corosync`. Run `corosync-cmapctl | grep totem.token` to verify the changes. When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster. Note 70200 is the cluster.op-version value for Red Hat Gluster Storage 3.5. After upgrading the cluster.op-version, enable granular-entry-heal for the volume by using the following command: The feature is now enabled by default post upgrade to Red Hat Gluster Storage 3.5, but this comes into effect only after the op-version is bumped up. Refer to Section 1.5, "Red Hat Gluster Storage Software Components and Versions" for the correct cluster.op-version value for other versions. Note If you want to enable snapshots, see Managing Snapshots in the Red Hat Gluster Storage 3.5 Administration Guide . Important Remount the Red Hat Gluster Storage volume on the client side if you are updating Red Hat Gluster Storage nodes enabled with network encryption using TLS/SSL. For more information, refer to Configuring Network Encryption in Red Hat Gluster Storage . If the client-side quorum was disabled before upgrade, then re-enable it by executing the following command: If a dummy node was created earlier, then detach it by executing the following command: If the geo-replication session between master and slave was disabled before upgrade, then configure the meta volume and restart the session: Run the following command: If you use a non-root user to perform geo-replication, run this command on the primary slave to set permissions for the non-root group. Run the following command: Run the following command: If you disabled the disperse.optimistic-change-log , disperse.eager-lock and disperse.other-eager-lock options in order to upgrade an erasure-coded (dispersed) volume, re-enable these settings. 5.2.4. Special Consideration for In-Service Software Upgrade The following sections describe the in-service software upgrade steps for a CTDB setup. Note Only offline mode is supported. 5.2.4.1. Upgrading the Native Client All clients must use the same version of glusterfs-fuse . Red Hat strongly recommends that you upgrade servers before upgrading clients. If you are upgrading from Red Hat Gluster Storage 3.1 Update 2 or earlier, you must upgrade servers and clients simultaneously. For more information regarding upgrading the native client, refer to the steps mentioned below. Before updating the Native Client, subscribe the clients to the channels mentioned in Section 6.2.1, "Installing Native Client" .
Warning If you want to access a volume being provided by a server using Red Hat Gluster Storage 3.1.3 or higher, your client must also be using Red Hat Gluster Storage 3.1.3 or higher. Accessing these volumes from earlier client versions can result in data becoming unavailable and problems with directory operations. This requirement exists because Red Hat Gluster Storage 3.1.3 changed how the Distributed Hash Table works in order to improve directory consistency and remove the effects. Unmount gluster volumes Unmount any gluster volumes prior to upgrading the native client. Upgrade the client Run the yum update command to upgrade the native client: Remount gluster volumes Remount the volumes as discussed in Section 6.2.3, "Mounting Red Hat Gluster Storage Volumes" .
[ "gluster peer probe dummynode", "gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop", "gluster volume heal volname info", "rhel-6-server-rpms rhel-scalefs-for-rhel-6-server-rpms rhs-3-for-rhel-6-server-rpms", "rhel-7-server-rpms rh-gluster-3-for-rhel-7-server-rpms", "rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms rh-gluster-3-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable= repo-name", "mount -o loop < ISO image file > < mount-point >", "mount -o loop rhgs-3.5-rhel-7-x86_64-dvd-1.iso /mnt", "/etc/yum.repos.d/< file_name.repo >", "[local] name=local baseurl=file:///mnt enabled=1 gpgcheck=0", "gluster peer status", "gluster volume status", "gluster volume rebalance r2 status Node Rebalanced-files size scanned failures skipped status run time in secs --------- ----------- --------- -------- --------- ------ -------- -------------- 10.70.43.198 0 0Bytes 99 0 0 completed 1.00 10.70.43.148 49 196Bytes 100 0 0 completed 3.00", "gluster volume set volname disperse.optimistic-change-log off gluster volume set volname disperse.eager-lock off gluster volume set volname disperse.other-eager-lock off", "gluster volume heal volname info", "gluster volume heal drvol info Gathering list of entries to be healed on volume drvol has been successful Brick 10.70.37.51:/rhs/brick1/dir1 Number of entries: 0 Brick 10.70.37.78:/rhs/brick1/dir1 Number of entries: 0 Brick 10.70.37.51:/rhs/brick2/dir2 Number of entries: 0 Brick 10.70.37.78:/rhs/brick2/dir2 Number of entries: 0", "service ctdb stop ;Stopping CTDB will also stop the SMB service.", "gluster volume stop < VOLNAME >", "totem { version: 2 secauth: off cluster_name: rhel8-cluster transport: knet token: 3000 }", "pcs node standby", "gluster volume heal volname info", "pcs cluster disable pcs status", "systemctl stop nfs-ganesha", "systemctl stop glusterd pkill glusterfs pkill glusterfsd pgrep gluster", "migrate-rhs-classic-to-rhsm --status", "migrate-rhs-classic-to-rhsm --rhn-to-rhsm", "migrate-rhs-classic-to-rhsm --status", "yum update", "yum install nfs-ganesha-selinux", "dnf install glusterfs-ganesha", "umount mount_point", "lvremove logical_volume_name", "lvremove /dev/RHS_vg/brick1", "vgremove -ff volume_group_name", "vgremove -ff RHS_vg", "pvremove -ff physical_volume", "pvcreate --dataalignment 2560K /dev/vdb", "vgcreate volume_group_name disk", "vgcreate RHS_vg /dev/vdb", "lvcreate -L size --poolmetadatasize md_size --chunksize chunk_size -T pool_device", "lvcreate -L 2T --poolmetadatasize 16G --chunksize 256 -T /dev/RHS_vg/thin_pool", "lvcreate -V size -T pool_device -n thinvol_name", "lvcreate -V 1.5T -T /dev/RHS_vg/thin_pool -n thin_vol", "mkfs.xfs -i size=512 thin_vol", "mkfs.xfs -i size=512 /dev/RHS_vg/thin_vol", "setfattr -n trusted.glusterfs.volume-id \\ -v 0xUSD(grep volume-id /var/lib/glusterd/vols/ volname /info \\ | cut -d= -f2 | sed 's/-//g') USDbrick", "systemctl disable glusterd", "shutdown -r now \"Shutting down for upgrade to Red Hat Gluster Storage 3.5\"", "mount -t glusterfs HOSTNAME_or_IPADDRESS :/ VOLNAME / MOUNTDIR", "mkdir / MOUNTDIR / name-of-nonexistent-dir", "rmdir / MOUNTDIR / name-of-nonexistent-dir", "setfattr -n trusted.non-existent-key -v abc / MOUNTDIR setfattr -x trusted.non-existent-key / MOUNTDIR", "getfattr -d -m. -e hex brick_path", "getfattr -d -m. 
-e hex /dev/RHS_vg/brick2 getfattr: Removing leading '/' from absolute path names file: /dev/RHS_vg/brick2 trusted.afr.dirty=0x000000000000000000000000 trusted.afr.repl3-client-1=0x000000000000000400000002 trusted.gfid=0x00000000000000000000000000000001 trusted.glusterfs.dht=0x000000010000000000000000ffffffff trusted.glusterfs.volume-id=0x924c2e2640d044a687e2c370d58abec9", "systemctl start glusterd", "gluster --version", "gluster volume status", "gluster volume status Status of volume: r2 Gluster process Port Online Pid ------------------------------------------------------------------------------ Brick 10.70.43.198:/brick/r2_0 49152 Y 32259 Brick 10.70.42.237:/brick/r2_1 49152 Y 25266 Brick 10.70.43.148:/brick/r2_2 49154 Y 2857 Brick 10.70.43.198:/brick/r2_3 49153 Y 32270 NFS Server on localhost 2049 Y 25280 Self-heal Daemon on localhost N/A Y 25284 NFS Server on 10.70.43.148 2049 Y 2871 Self-heal Daemon on 10.70.43.148 N/A Y 2875 NFS Server on 10.70.43.198 2049 Y 32284 Self-heal Daemon on 10.70.43.198 N/A Y 32288 Task Status of Volume r2 ------------------------------------------------------------------------------ There are no active volume tasks", "gluster volume heal volname", "gluster volume heal volname info", "gluster volume heal drvol info Gathering list of entries to be healed on volume drvol has been successful Brick 10.70.37.51:/rhs/brick1/dir1 Number of entries: 0 Brick 10.70.37.78:/rhs/brick1/dir1 Number of entries: 0 Brick 10.70.37.51:/rhs/brick2/dir2 Number of entries: 0 Brick 10.70.37.78:/rhs/brick2/dir2 Number of entries: 0", "mount | grep /run/gluster/shared_storage", "setsebool -P ganesha_use_fusefs on", "systemctl start nfs-ganesha", "pcs cluster enable pcs cluster start", "pcs node unstandby", "pcs status showmount -e", "systemctl enable glusterd", "gluster volume set all cluster.op-version 70200", "gluster volume heal USDVOLNAME granular-entry-heal enable", "gluster volume set volname cluster.quorum-type auto", "gluster peer detach <dummy_node name>", "gluster volume set all cluster.enable-shared-storage enable", "gluster-mountbroker setup <mount_root> <group>", "gluster volume geo-replication Volume1 example.com::slave-vol config use_meta_volume true", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start", "gluster volume set volname disperse.optimistic-change-log on gluster volume set volname disperse.eager-lock on gluster volume set volname disperse.other-eager-lock on", "umount /mnt/glusterfs", "yum update glusterfs glusterfs-fuse" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/sect-in-service_software_upgrade_from_red_hat_storage_3.4_to_red_hat_storage_3.5
Chapter 4. Installing RHEL using Satellite Server
Chapter 4. Installing RHEL using Satellite Server Red Hat Satellite is a centralized tool for provisioning, remote management, and monitoring of multiple Red Hat Enterprise Linux deployments. With Satellite, you can deploy physical and virtual hosts. This includes setting up the required network topology, configuring the necessary services, and providing the necessary configuration information to provision hosts on your network. For more information, see the Red Hat Satellite documentation.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/installing-rhel-using-satellite-server_rhel-installer
Chapter 16. Generating CRL on the IdM CA server
Chapter 16. Generating CRL on the IdM CA server If your IdM deployment uses an embedded certificate authority (CA), you may need to move generation of the Certificate Revocation List (CRL) from one Identity Management (IdM) server to another. This can be necessary, for example, when you want to migrate the server to another system. Only configure one server to generate the CRL. The IdM server that performs the CRL publisher role is usually the same server that performs the CA renewal server role, but this is not mandatory. Before you decommission the CRL publisher server, select and configure another server to perform the CRL publisher server role. 16.1. Stopping CRL generation on an IdM server To stop generating the Certificate Revocation List (CRL) on the IdM CRL publisher server, use the ipa-crlgen-manage command. Before you disable the generation, verify that the server really generates the CRL. You can then disable it. Prerequisites Identity Management (IdM) server is installed on the RHEL 8.1 system or newer. You must be logged in as root. Procedure Check if your server is generating the CRL: Stop generating the CRL on the server: Check if the server stopped generating the CRL: The server stopped generating the CRL. The next step is to enable CRL generation on the IdM replica. 16.2. Starting CRL generation on an IdM replica server You can start generating the Certificate Revocation List (CRL) on an IdM CA server with the ipa-crlgen-manage command. Prerequisites Identity Management (IdM) server is installed on the RHEL 8.1 system or newer. The RHEL system must be an IdM Certificate Authority server. You must be logged in as root. Procedure Start generating the CRL: Check if the CRL is generated: 16.3. Changing the CRL update interval The Certificate Revocation List (CRL) file is automatically generated by the Identity Management Certificate Authority (IdM CA) every four hours by default. You can change this interval with the following procedure. Procedure Stop the CRL generation server: Open the /var/lib/pki/pki-tomcat/conf/ca/CS.cfg file, and change the ca.crl.MasterCRL.autoUpdateInterval value to the new interval setting. For example, to generate the CRL every 60 minutes: Note If you update the ca.crl.MasterCRL.autoUpdateInterval parameter, the change will become effective after the already scheduled CRL update. Start the CRL generation server: Additional resources For more information about the CRL generation on an IdM replica server, see Starting CRL generation on an IdM replica server .
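As a minimal sketch of the interval change above, the edit can also be scripted as follows. The sketch assumes the default pki-tomcat instance name and path, that the ca.crl.MasterCRL.autoUpdateInterval parameter is already present in CS.cfg, and a 60-minute interval; adjust these values for your deployment.

systemctl stop pki-tomcatd@pki-tomcat.service

# Set the CRL update interval (in minutes) in CS.cfg.
sed -i 's/^ca\.crl\.MasterCRL\.autoUpdateInterval=.*/ca.crl.MasterCRL.autoUpdateInterval=60/' \
    /var/lib/pki/pki-tomcat/conf/ca/CS.cfg

systemctl start pki-tomcatd@pki-tomcat.service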
[ "ipa-crlgen-manage status CRL generation: enabled Last CRL update: 2019-10-31 12:00:00 Last CRL Number: 6 The ipa-crlgen-manage command was successful", "ipa-crlgen-manage disable Stopping pki-tomcatd Editing /var/lib/pki/pki-tomcat/conf/ca/CS.cfg Starting pki-tomcatd Editing /etc/httpd/conf.d/ipa-pki-proxy.conf Restarting httpd CRL generation disabled on the local host. Please make sure to configure CRL generation on another master with ipa-crlgen-manage enable. The ipa-crlgen-manage command was successful", "ipa-crlgen-manage status", "ipa-crlgen-manage enable Stopping pki-tomcatd Editing /var/lib/pki/pki-tomcat/conf/ca/CS.cfg Starting pki-tomcatd Editing /etc/httpd/conf.d/ipa-pki-proxy.conf Restarting httpd Forcing CRL update CRL generation enabled on the local host. Please make sure to have only a single CRL generation master. The ipa-crlgen-manage command was successful", "ipa-crlgen-manage status CRL generation: enabled Last CRL update: 2019-10-31 12:10:00 Last CRL Number: 7 The ipa-crlgen-manage command was successful", "systemctl stop [email protected]", "ca.crl.MasterCRL.autoUpdateInterval= 60", "systemctl start [email protected]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_certificates_in_idm/generating-crl-on-the-idm-ca-server_working-with-idm-certificates
Chapter 46. TieredStorageCustom schema reference
Chapter 46. TieredStorageCustom schema reference Used in: KafkaClusterSpec Full list of TieredStorageCustom schema properties Enables custom tiered storage for Kafka. If you want to use custom tiered storage, you must first add a tiered storage for Kafka plugin to the Streams for Apache Kafka image by building a custom container image. Custom tiered storage configuration enables the use of a custom RemoteStorageManager configuration. RemoteStorageManager is a Kafka interface for managing the interaction between Kafka and remote tiered storage. If custom tiered storage is enabled, Streams for Apache Kafka uses the TopicBasedRemoteLogMetadataManager for Remote Log Metadata Management (RLMM). Warning Tiered storage is an early access Kafka feature, which is also available in Streams for Apache Kafka. Due to its current limitations , it is not recommended for production environments. Example custom tiered storage configuration kafka: tieredStorage: type: custom remoteStorageManager: className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager classPath: /opt/kafka/plugins/tiered-storage-s3/* config: # A map with String keys and String values. # Key properties are automatically prefixed with `rsm.config.` # and appended to Kafka broker config. storage.bucket.name: my-bucket config: ... # Additional RLMM configuration can be added through the Kafka config # under `spec.kafka.config` using the `rlmm.config.` prefix. rlmm.config.remote.log.metadata.topic.replication.factor: 1 46.1. TieredStorageCustom schema properties The type property is a discriminator that distinguishes use of the TieredStorageCustom type from other subtypes which may be added in the future. It must have the value custom for the type TieredStorageCustom . Property Property type Description type string Must be custom . remoteStorageManager RemoteStorageManager Configuration for the Remote Storage Manager.
[ "kafka: tieredStorage: type: custom remoteStorageManager: className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager classPath: /opt/kafka/plugins/tiered-storage-s3/* config: # A map with String keys and String values. # Key properties are automatically prefixed with `rsm.config.` # and appended to Kafka broker config. storage.bucket.name: my-bucket config: # Additional RLMM configuration can be added through the Kafka config # under `spec.kafka.config` using the `rlmm.config.` prefix. rlmm.config.remote.log.metadata.topic.replication.factor: 1" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-tieredstoragecustom-reference
Chapter 1. Preparing to install on OpenStack
Chapter 1. Preparing to install on OpenStack You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Choosing a method to install OpenShift Container Platform on OpenStack You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.2.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Red Hat OpenStack Platform (RHOSP) infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster on OpenStack with customizations : You can install a customized cluster on RHOSP. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on OpenStack in a restricted network : You can install OpenShift Container Platform on RHOSP in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 1.2.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on RHOSP infrastructure that you provision, by using one of the following methods: Installing a cluster on OpenStack on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure. By using this installation method, you can integrate your cluster with existing infrastructure and modifications. For installations on user-provisioned infrastructure, you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. You can use the provided Ansible playbooks to assist with the deployment process. 1.3. Scanning RHOSP endpoints for legacy HTTPS certificates Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. Run the following script to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the provided script to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. 
Prerequisites On the machine where you run the script, have the following software: Bash version 4.0 or greater grep OpenStack client jq OpenSSL version 1.1.1l or greater Populate the machine with RHOSP credentials for the target cloud. Procedure Save the following script to your machine: #!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog="USD(mktemp)" san="USD(mktemp)" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints \ | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface=="public") | [USDname, .interface, .url] | join(" ")' \ | sort \ > "USDcatalog" while read -r name interface url; do # Ignore HTTP if [[ USD{url#"http://"} != "USDurl" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#"https://"} # If the schema was not HTTPS, error if [[ "USDnoschema" == "USDurl" ]]; then echo "ERROR (unknown schema): USDname USDinterface USDurl" exit 2 fi # Remove the path and only keep host and port noschema="USD{noschema%%/*}" host="USD{noschema%%:*}" port="USD{noschema##*:}" # Add the port if was implicit if [[ "USDport" == "USDhost" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername "USDhost" -connect "USDhost:USDport" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName \ > "USDsan" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ "USD(grep -c "Subject Alternative Name" "USDsan" || true)" -gt 0 ]]; then echo "PASS: USDname USDinterface USDurl" else invalid=USD((invalid+1)) echo "INVALID: USDname USDinterface USDurl" fi done < "USDcatalog" # clean up temporary files rm "USDcatalog" "USDsan" if [[ USDinvalid -gt 0 ]]; then echo "USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field." exit 1 else echo "All HTTPS certificates for this cloud are valid." fi Run the script. Replace any certificates that the script reports as INVALID with certificates that contain SAN fields. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates will be rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead 1.3.1. Scanning RHOSP endpoints for legacy HTTPS certificates manually Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. If you do not have access to the prerequisite tools that are listed in "Scanning RHOSP endpoints for legacy HTTPS certificates", perform the following steps to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the following steps to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Procedure On a command line, run the following command to view the URL of RHOSP public endpoints: USD openstack catalog list Record the URL for each HTTPS endpoint that the command returns. For each public endpoint, note the host and the port. 
Tip Determine the host of an endpoint by removing the scheme, the port, and the path. For each endpoint, run the following commands to extract the SAN field of the certificate: Set a host variable: USD host=<host_name> Set a port variable: USD port=<port_number> If the URL of the endpoint does not have a port, use the value 443 . Retrieve the SAN field of the certificate: USD openssl s_client -showcerts -servername "USDhost" -connect "USDhost:USDport" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName Example output X509v3 Subject Alternative Name: DNS:your.host.example.net For each endpoint, look for output that resembles the example. If there is no output for an endpoint, the certificate of that endpoint is invalid and must be re-issued. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates are rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead
[ "#!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog=\"USD(mktemp)\" san=\"USD(mktemp)\" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface==\"public\") | [USDname, .interface, .url] | join(\" \")' | sort > \"USDcatalog\" while read -r name interface url; do # Ignore HTTP if [[ USD{url#\"http://\"} != \"USDurl\" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#\"https://\"} # If the schema was not HTTPS, error if [[ \"USDnoschema\" == \"USDurl\" ]]; then echo \"ERROR (unknown schema): USDname USDinterface USDurl\" exit 2 fi # Remove the path and only keep host and port noschema=\"USD{noschema%%/*}\" host=\"USD{noschema%%:*}\" port=\"USD{noschema##*:}\" # Add the port if was implicit if [[ \"USDport\" == \"USDhost\" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName > \"USDsan\" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ \"USD(grep -c \"Subject Alternative Name\" \"USDsan\" || true)\" -gt 0 ]]; then echo \"PASS: USDname USDinterface USDurl\" else invalid=USD((invalid+1)) echo \"INVALID: USDname USDinterface USDurl\" fi done < \"USDcatalog\" clean up temporary files rm \"USDcatalog\" \"USDsan\" if [[ USDinvalid -gt 0 ]]; then echo \"USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field.\" exit 1 else echo \"All HTTPS certificates for this cloud are valid.\" fi", "x509: certificate relies on legacy Common Name field, use SANs instead", "openstack catalog list", "host=<host_name>", "port=<port_number>", "openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName", "X509v3 Subject Alternative Name: DNS:your.host.example.net", "x509: certificate relies on legacy Common Name field, use SANs instead" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_openstack/preparing-to-install-on-openstack
Appendix B. Provisioning FIPS-compliant hosts
Appendix B. Provisioning FIPS-compliant hosts Satellite supports provisioning hosts that comply with the National Institute of Standards and Technology's Security Requirements for Cryptographic Modules standard, reference number FIPS 140-2, referred to here as FIPS. To enable the provisioning of hosts that are FIPS-compliant, complete the following tasks: Change the provisioning password hashing algorithm for the operating system Create a host group and set a host group parameter to enable FIPS For more information, see Creating a Host Group in Managing hosts . The provisioned hosts have the FIPS-compliant settings applied. To confirm that these settings are enabled, complete the steps in Section B.3, "Verifying FIPS mode is enabled" . B.1. Changing the provisioning password hashing algorithm To provision FIPS-compliant hosts, you must first set the password hashing algorithm that you use in provisioning to SHA256. This configuration setting must be applied for each operating system you want to deploy as FIPS-compliant. Procedure Identify the Operating System IDs: Update each operating system's password hash value. Note that you cannot use a comma-separated list of values. B.2. Setting the FIPS-enabled parameter To provision a FIPS-compliant host, you must create a host group and set the host group parameter fips_enabled to true . If this is not set to true , or is absent, the FIPS-specific changes do not apply to the system. You can set this parameter when you provision a host or for a host group. To set this parameter when provisioning a host, append --parameters fips_enabled=true to the Hammer command. For more information, see the output of the command hammer hostgroup set-parameter --help . B.3. Verifying FIPS mode is enabled To verify these FIPS compliance changes have been successful, you must provision a host and check its configuration. Procedure Log in to the host as root or with an admin-level account. Enter the following command: A value of 1 confirms that FIPS mode is enabled.
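The fips_enabled parameter can also be passed directly when provisioning an individual host, as noted in Section B.2. The following is a minimal sketch only; the host name, host group, and organization are placeholder values, and any additional provisioning options that your environment requires (location, compute resource, interfaces, and so on) are omitted.

hammer host create \
  --name "my-fips-host.example.com" \
  --hostgroup "My_Host_Group" \
  --organization "My_Organization" \
  --parameters fips_enabled=true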
[ "hammer os list", "hammer os update --password-hash SHA256 --title \" My_Operating_System \"", "hammer hostgroup set-parameter --hostgroup \" My_Host_Group \" --name fips_enabled --value \"true\"", "cat /proc/sys/crypto/fips_enabled" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/Provisioning_FIPS_Compliant_Hosts_provisioning
Networking
Networking OpenShift Container Platform 4.10 Configuring and managing cluster networking Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/index
Red Hat Ansible Automation Platform Planning Guide
Red Hat Ansible Automation Platform Planning Guide Red Hat Ansible Automation Platform 2.3 This document provides requirements, options, and recommendations for installing Red Hat Ansible Automation Platform. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_planning_guide/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_cpp_client/making-open-source-more-inclusive
Chapter 63. registered
Chapter 63. registered This chapter describes the commands under the registered command. 63.1. registered limit create Create a registered limit Usage: Table 63.1. Positional arguments Value Summary <resource-name> The name of the resource to limit Table 63.2. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description of the registered limit --region <region> Region for the registered limit to affect --service <service> Service responsible for the resource to limit (required) --default-limit <default-limit> The default limit for the resources to assume (required) Table 63.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 63.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 63.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 63.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 63.2. registered limit delete Delete a registered limit Usage: Table 63.7. Positional arguments Value Summary <registered-limit-id> Registered limit to delete (id) Table 63.8. Command arguments Value Summary -h, --help Show this help message and exit 63.3. registered limit list List registered limits Usage: Table 63.9. Command arguments Value Summary -h, --help Show this help message and exit --service <service> Service responsible for the resource to limit --resource-name <resource-name> The name of the resource to limit --region <region> Region for the limit to affect. Table 63.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 63.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 63.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 63.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 63.4. registered limit set Update information about a registered limit Usage: Table 63.14. Positional arguments Value Summary <registered-limit-id> Registered limit to update (id) Table 63.15. Command arguments Value Summary -h, --help Show this help message and exit --service <service> Service to be updated responsible for the resource to limit. 
Either --service, --resource-name or --region must be different than existing value otherwise it will be duplicate entry --resource-name <resource-name> Resource to be updated responsible for the resource to limit. Either --service, --resource-name or --region must be different than existing value otherwise it will be duplicate entry --default-limit <default-limit> The default limit for the resources to assume --description <description> Description to update of the registered limit --region <region> Region for the registered limit to affect. either --service, --resource-name or --region must be different than existing value otherwise it will be duplicate entry Table 63.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 63.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 63.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 63.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 63.5. registered limit show Display registered limit details Usage: Table 63.20. Positional arguments Value Summary <registered-limit-id> Registered limit to display (id) Table 63.21. Command arguments Value Summary -h, --help Show this help message and exit Table 63.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 63.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 63.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 63.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
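As a brief example of typical usage of these commands (the service name glance, the resource name image_count_total, and the limit value are illustrative placeholders; use the services and resources relevant to your deployment):

# Register a default limit of 10 for a resource owned by the Image service.
openstack registered limit create --service glance --default-limit 10 \
    --description "Default image count" image_count_total

# List registered limits for that service, then display one by ID.
openstack registered limit list --service glance
openstack registered limit show <registered-limit-id>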
[ "openstack registered limit create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--region <region>] --service <service> --default-limit <default-limit> <resource-name>", "openstack registered limit delete [-h] <registered-limit-id> [<registered-limit-id> ...]", "openstack registered limit list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--service <service>] [--resource-name <resource-name>] [--region <region>]", "openstack registered limit set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--service <service>] [--resource-name <resource-name>] [--default-limit <default-limit>] [--description <description>] [--region <region>] <registered-limit-id>", "openstack registered limit show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <registered-limit-id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/registered
Chapter 1. Overview
Chapter 1. Overview Red Hat OpenShift Data Foundation is software-defined storage that is optimized for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. Red Hat OpenShift Data Foundation is integrated into the latest Red Hat OpenShift Container Platform to address platform services, application portability, and persistence challenges. It provides a highly scalable backend for the next generation of cloud-native applications, built on a technology stack that includes Red Hat Ceph Storage, the Rook.io Operator, and NooBaa's Multicloud Object Gateway technology. Red Hat OpenShift Data Foundation is designed for FIPS. When running on RHEL or RHEL CoreOS booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries submitted to NIST for FIPS validation on only the x86_64, ppc64le, and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards . Red Hat OpenShift Data Foundation provides a trusted, enterprise-grade application development environment that simplifies and enhances the user experience across the application lifecycle in a number of ways: Provides block storage for databases. Shared file storage for continuous integration, messaging, and data aggregation. Object storage for cloud-first development, archival, backup, and media storage. Scale applications and data exponentially. Attach and detach persistent data volumes at an accelerated rate. Stretch clusters across multiple data centers or availability zones. Establish a comprehensive application container registry. Support the next generation of OpenShift workloads such as Data Analytics, Artificial Intelligence, Machine Learning, Deep Learning, and Internet of Things (IoT). Dynamically provision not only application containers, but data service volumes and containers, as well as additional OpenShift Container Platform nodes, Elastic Block Store (EBS) volumes and other infrastructure services. 1.1. About this release Red Hat OpenShift Data Foundation 4.18 (RHXX-xxxx-xxxx) is now available. New enhancements, features, and known issues that pertain to OpenShift Data Foundation 4.18 are included in this topic. Red Hat OpenShift Data Foundation 4.18 is supported on Red Hat OpenShift Container Platform version 4.18. For more information, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For Red Hat OpenShift Data Foundation life cycle information, refer to Red Hat OpenShift Data Foundation Life Cycle .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/4.18_release_notes/overview
Integrating
Integrating Red Hat Advanced Cluster Security for Kubernetes 4.6 Integrating Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
[ "eksctl utils associate-iam-oidc-provider --cluster <cluster name> --approve", "kubectl -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role-name>", "kubectl -n stackrox delete pod -l app=central", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>\" 1 } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:role/<role-name>\" 1 ] }, \"Action\": \"sts:AssumeRole\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>\" 1 } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:user/<role-name>\" ] }, \"Action\": \"sts:AssumeRole\" } ] }", "export ROX_API_TOKEN=<api_token>", "roxctl central debug dump --token-file <token_file>", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}\"", "chmod +x roxctl", "echo USDPATH", "roxctl version", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}\"", "xattr -c roxctl", "chmod +x roxctl", "echo USDPATH", "roxctl version", "curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe", "roxctl version", "docker login registry.redhat.io", "docker pull registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3", "docker run -e ROX_API_TOKEN=USDROX_API_TOKEN -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 -e USDROX_CENTRAL_ADDRESS <command>", "docker run -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 version", "version: 2 jobs: check-policy-compliance: docker: - image: 'circleci/node:latest' auth: username: USDDOCKERHUB_USER password: USDDOCKERHUB_PASSWORD steps: - checkout - run: name: Install roxctl command: | curl -H \"Authorization: Bearer USDROX_API_TOKEN\" https://USDSTACKROX_CENTRAL_HOST:443/api/cli/download/roxctl-linux -o roxctl && chmod +x ./roxctl - run: name: Scan images for policy deviations and vulnerabilities command: | ./roxctl image check --endpoint \"USDSTACKROX_CENTRAL_HOST:443\" --image \"<your_registry/repo/image_name>\" 1 - run: name: Scan deployment files for policy deviations command: | ./roxctl image check --endpoint \"USDSTACKROX_CENTRAL_HOST:443\" --image \"<your_deployment_file>\" 2 # Important note: This step assumes the YAML file you'd like to test is located in the project. 
workflows: version: 2 build_and_test: jobs: - check-policy-compliance", "example.com/slack-webhook: https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX", "{ \"alert\": { \"id\": \"<id>\", \"time\": \"<timestamp>\", \"policy\": { \"name\": \"<name>\", }, }, \"<custom_field_1>\": \"<custom_value_1>\" }", "annotations: jira/project-key: <jira_project_key>", "{ \"customfield_10004\": 3, \"customfield_20005\": \"Alerts\", }", "annotations: jira/project-key: <jira_project_key>", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug log --level Debug --modules notifiers/jira", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug dump", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug log --level Info", "annotations: email: <email_address>", "annotations: email: <email_address>", "eksctl utils associate-iam-oidc-provider --cluster <cluster_name> --approve", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"ecr:BatchCheckLayerAvailability\", \"ecr:BatchGetImage\", \"ecr:DescribeImages\", \"ecr:DescribeRepositories\", \"ecr:GetAuthorizationToken\", \"ecr:GetDownloadUrlForLayer\", \"ecr:ListImages\" ], \"Resource\": \"arn:aws:iam::<ecr_registry>:role/<role_name>\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:role/<role_name>\" 1 ] }, \"Action\": \"sts:AssumeRole\" } ] }", "oc -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role_name> 1", "oc -n stackrox delete pod -l \"app in (central,sensor)\" 1", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"ecr:BatchCheckLayerAvailability\", \"ecr:BatchGetImage\", \"ecr:DescribeImages\", \"ecr:DescribeRepositories\", \"ecr:GetAuthorizationToken\", \"ecr:GetDownloadUrlForLayer\", \"ecr:ListImages\" ], \"Resource\": \"arn:aws:iam::<ecr_registry>:role/<role_name>\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"<oidc_provider_arn>\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<oidc_provider_name>:aud\": \"openshift\" } } } ] }", "AWS_ROLE_ARN=<role_arn> AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/openshift/serviceaccount/token", "oc annotate serviceaccount \\ 1 central \\ 2 --namespace stackrox iam.gke.io/gcp-service-account=<GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com", "gcloud iam workload-identity-pools create rhacs-pool --location=\"global\" --display-name=\"RHACS workload pool\"", "gcloud iam workload-identity-pools providers create-oidc rhacs-provider --location=\"global\" --workload-identity-pool=\"rhacs-pool\" --display-name=\"RHACS provider\" --attribute-mapping=\"google.subject=assertion.sub\" --issuer-uri=\"https://<oidc_configuration_url>\" --allowed-audiences=openshift", "gcloud iam service-accounts add-iam-policy-binding <GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member=\"principal://iam.googleapis.com/projects/<GSA_PROJECT_NUMBER>/locations/global/workloadIdentityPools/rhacs-provider/subject/system:serviceaccount:stackrox:central\" 1", "{ \"type\": \"external_account\", \"audience\": \"//iam.googleapis.com/projects/<GSA_PROJECT_ID>/locations/global/workloadIdentityPools/rhacs-pool/providers/rhacs-provider\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": 
\"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com:generateAccessToken\", \"credential_source\": { \"file\": \"/var/run/secrets/openshift/serviceaccount/token\", \"format\": { \"type\": \"text\" } } }", "apiVersion: v1 kind: Secret metadata: name: gcp-cloud-credentials namespace: stackrox data: credentials: <base64_encoded_json>", "{ \"TimeGenerated\": \"2024-09-03T10:56:58.5010069Z\", 1 \"msg\": { 2 \"id\": \"1abe30d1-fa3a-xxxx-xxxx-781f0a12228a\", 3 \"policy\" : {} } }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html-single/integrating/index
Chapter 7. Diagnosing and correcting problems with GFS2 file systems
Chapter 7. Diagnosing and correcting problems with GFS2 file systems The following procedures describe some common GFS2 issues and provide information on how to address them. 7.1. GFS2 file system unavailable to a node (the GFS2 withdraw function) The GFS2 withdraw function is a data integrity feature of the GFS2 file system that prevents potential file system damage due to faulty hardware or kernel software. If the GFS2 kernel module detects an inconsistency while using a GFS2 file system on any given cluster node, it withdraws from the file system, leaving it unavailable to that node until it is unmounted and remounted (or the machine detecting the problem is rebooted). All other mounted GFS2 file systems remain fully functional on that node. (The GFS2 withdraw function is less severe than a kernel panic, which causes the node to be fenced.) The main categories of inconsistency that can cause a GFS2 withdraw are as follows: Inode consistency error Resource group consistency error Journal consistency error Magic number metadata consistency error Metadata type consistency error An example of an inconsistency that would cause a GFS2 withdraw is an incorrect block count for a file's inode. When GFS2 deletes a file, it systematically removes all the data and metadata blocks referenced by that file. When done, it checks the inode's block count. If the block count is not 1 (meaning all that is left is the disk inode itself), that indicates a file system inconsistency, since the inode's block count did not match the actual blocks used for the file. In many cases, the problem may have been caused by faulty hardware (faulty memory, motherboard, HBA, disk drives, cables, and so forth). It may also have been caused by a kernel bug (another kernel module accidentally overwriting GFS2's memory), or actual file system damage (caused by a GFS2 bug). In most cases, the best way to recover from a withdrawn GFS2 file system is to reboot or fence the node. The withdrawn GFS2 file system will give you an opportunity to relocate services to another node in the cluster. After services are relocated you can reboot the node or force a fence with this command. Warning Do not try to unmount and remount the file system manually with the umount and mount commands. You must use the pcs command, otherwise Pacemaker will detect the file system service has disappeared and fence the node. The consistency problem that caused the withdraw may make stopping the file system service impossible as it may cause the system to hang. If the problem persists after a remount, you should stop the file system service to unmount the file system from all nodes in the cluster, then perform a file system check with the fsck.gfs2 command before restarting the service with the following procedure. Reboot the affected node. Disable the non-clone file system service in Pacemaker to unmount the file system from every node in the cluster. From one node of the cluster, run the fsck.gfs2 command on the file system device to check for and repair any file system damage. Remount the GFS2 file system from all nodes by re-enabling the file system service: You can override the GFS2 withdraw function by mounting the file system with the -o errors=panic option specified in the file system service. When this option is specified, any errors that would normally cause the system to withdraw force a kernel panic instead. This stops the node's communications, which causes the node to be fenced. 
This is especially useful for clusters that are left unattended for long periods of time without monitoring or intervention. Internally, the GFS2 withdraw function works by disconnecting the locking protocol to ensure that all further file system operations result in I/O errors. As a result, when the withdraw occurs, it is normal to see a number of I/O errors from the device mapper device reported in the system logs. 7.2. GFS2 file system hangs and requires reboot of one node If your GFS2 file system hangs and does not return commands run against it, but rebooting one specific node returns the system to normal, this may be indicative of a locking problem or bug. Should this occur, gather GFS2 data during one of these occurrences and open a support ticket with Red Hat Support, as described in Gathering GFS2 data for troubleshooting . 7.3. GFS2 file system hangs and requires reboot of all nodes If your GFS2 file system hangs and does not return commands run against it, requiring that you reboot all nodes in the cluster before using it, check for the following issues. You may have had a failed fence. GFS2 file systems will freeze to ensure data integrity in the event of a failed fence. Check the messages logs to see if there are any failed fences at the time of the hang. Ensure that fencing is configured correctly. The GFS2 file system may have withdrawn. Check through the messages logs for the word withdraw and check for any messages and call traces from GFS2 indicating that the file system has been withdrawn. A withdraw is indicative of file system corruption, a storage failure, or a bug. At the earliest time when it is convenient to unmount the file system, you should perform the following procedure: Reboot the node on which the withdraw occurred. Stop the file system resource to unmount the GFS2 file system on all nodes. Capture the metadata with the gfs2_edit savemeta... command. You should ensure that there is sufficient space for the file, which in some cases may be large. In this example, the metadata is saved to a file in the /root directory. Update the gfs2-utils package. On one node, run the fsck.gfs2 command on the file system to ensure file system integrity and repair any damage. After the fsck.gfs2 command has completed, re-enable the file system resource to return it to service: Open a support ticket with Red Hat Support. Inform them you experienced a GFS2 withdraw and provide logs and the debugging information generated by the sosreport and gfs2_edit savemeta commands. In some instances of a GFS2 withdraw, commands that are trying to access the file system or its block device can hang. In these cases, a hard reboot is required to restart the cluster. For information about the GFS2 withdraw function, see GFS2 file system unavailable to a node (the GFS2 withdraw function) . This error may be indicative of a locking problem or bug. Gather data during one of these occurrences and open a support ticket with Red Hat Support, as described in Gathering GFS2 data for troubleshooting . 7.4. GFS2 file system does not mount on newly added cluster node If you add a new node to a cluster and find that you cannot mount your GFS2 file system on that node, you may have fewer journals on the GFS2 file system than nodes attempting to access the GFS2 file system. You must have one journal per GFS2 host you intend to mount the file system on (with the exception of GFS2 file systems mounted with the spectator mount option set, since these do not require a journal).
You can add journals to a GFS2 file system with the gfs2_jadd command, as described in Adding journals to a GFS2 file system . 7.5. Space indicated as used in empty file system If you have an empty GFS2 file system, the df command will show that there is space being taken up. This is because GFS2 file system journals consume space (number of journals * journal size) on disk. If you created a GFS2 file system with a large number of journals or specified a large journal size, then you will see (number of journals * journal size) as already in use when you execute the df command. Even if you did not specify a large number of journals or large journals, small GFS2 file systems (in the 1GB or less range) will show a large amount of space as being in use with the default GFS2 journal size. 7.6. Gathering GFS2 data for troubleshooting If your GFS2 file system hangs and does not return commands run against it and you find that you need to open a ticket with Red Hat Support, you should first gather the following data: The GFS2 lock dump for the file system on each node: The DLM lock dump for the file system on each node: You can get this information with the dlm_tool : In this command, lsname is the lockspace name used by DLM for the file system in question. You can find this value in the output from the group_tool command. The output from the sysrq -t command. The contents of the /var/log/messages file. Once you have gathered that data, you can open a ticket with Red Hat Support and provide the data you have collected.
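As a minimal sketch (not an official tool), the data listed above can be collected on each node with a few commands. The file system name, the DLM lockspace name, and the output directory are placeholders; take the lockspace name from the group_tool output as noted above, and note that the sysrq trigger requires the kernel sysrq facility to be enabled.

fsname=myfs                       # GFS2 file system name (placeholder)
lsname=myfs                       # DLM lockspace name from group_tool (placeholder)
outdir=/tmp/gfs2-data-$(hostname)
mkdir -p "$outdir"

# GFS2 lock (glock) dump for the file system on this node
cat /sys/kernel/debug/gfs2/${fsname}/glocks > "$outdir/glocks.${fsname}.$(hostname)"

# DLM lock dump for the file system's lockspace
dlm_tool lockdebug -sv "$lsname" > "$outdir/dlm.${lsname}.$(hostname)"

# Kernel task dump (equivalent of the sysrq -t command); output goes to the system logs
echo t > /proc/sysrq-trigger

# Copy the system log
cp /var/log/messages "$outdir/"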
[ "pcs stonith fence node", "pcs resource disable --wait=100 mydata_fs", "fsck.gfs2 -y /dev/vg_mydata/mydata > /tmp/fsck.out", "pcs resource enable --wait=100 mydata_fs", "pcs resource update mydata_fs \"options=noatime,errors=panic\"", "/sbin/reboot", "pcs resource disable --wait=100 mydata_fs", "gfs2_edit savemeta /dev/vg_mydata/mydata /root/gfs2metadata.gz", "sudo yum update gfs2-utils", "fsck.gfs2 -y /dev/vg_mydata/mydata > /tmp/fsck.out", "pcs resource enable --wait=100 mydata_fs", "cat /sys/kernel/debug/gfs2/ fsname /glocks >glocks. fsname . nodename", "dlm_tool lockdebug -sv lsname" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_gfs2_file_systems/assembly_troubleshooting-gfs2-configuring-gfs2-file-systems
Chapter 1. Operator APIs
Chapter 1. Operator APIs 1.1. Authentication [operator.openshift.io/v1] Description Authentication provides information to configure an operator to manage authentication. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. CloudCredential [operator.openshift.io/v1] Description CloudCredential provides a means to configure an operator to manage CredentialsRequests. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ClusterCSIDriver [operator.openshift.io/v1] Description ClusterCSIDriver object allows management and configuration of a CSI driver operator installed by default in OpenShift. Name of the object must be name of the CSI driver it operates. See CSIDriverName type for list of allowed values. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. Console [operator.openshift.io/v1] Description Console provides a means to configure an operator to manage the console. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. Config [operator.openshift.io/v1] Description Config specifies the behavior of the config operator which is responsible for creating the initial configuration of other components on the cluster. The operator also handles installation, migration or synchronization of cloud configurations for AWS and Azure cloud based clusters Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. Config [imageregistry.operator.openshift.io/v1] Description Config is the configuration object for a registry instance managed by the registry operator Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. Config [samples.operator.openshift.io/v1] Description Config contains the configuration and detailed condition status for the Samples Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.8. CSISnapshotController [operator.openshift.io/v1] Description CSISnapshotController provides a means to configure an operator to manage the CSI snapshots. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. DNS [operator.openshift.io/v1] Description DNS manages the CoreDNS component to provide a name resolution service for pods and services in the cluster. This supports the DNS-based service discovery specification: https://github.com/kubernetes/dns/blob/master/docs/specification.md More details: https://kubernetes.io/docs/tasks/administer-cluster/coredns Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.10. DNSRecord [ingress.operator.openshift.io/v1] Description DNSRecord is a DNS record managed in the zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Cluster admin manipulation of this resource is not supported. This resource is only for internal communication of OpenShift operators. 
If DNSManagementPolicy is "Unmanaged", the operator will not be responsible for managing the DNS records on the cloud provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.11. Etcd [operator.openshift.io/v1] Description Etcd provides information to configure an operator to manage etcd. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.12. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] Description ImageContentSourcePolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.13. ImagePruner [imageregistry.operator.openshift.io/v1] Description ImagePruner is the configuration object for an image registry pruner managed by the registry operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.14. IngressController [operator.openshift.io/v1] Description IngressController describes a managed ingress controller for the cluster. The controller can service OpenShift Route and Kubernetes Ingress resources. When an IngressController is created, a new ingress controller deployment is created to allow external traffic to reach the services that expose Ingress or Route resources. Updating this resource may lead to disruption for public facing network connections as a new ingress controller revision may be rolled out. https://kubernetes.io/docs/concepts/services-networking/ingress-controllers Whenever possible, sensible defaults for the platform are used. See each field for more details. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.15. InsightsOperator [operator.openshift.io/v1] Description InsightsOperator holds cluster-wide information about the Insights Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.16. KubeAPIServer [operator.openshift.io/v1] Description KubeAPIServer provides information to configure an operator to manage kube-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.17. KubeControllerManager [operator.openshift.io/v1] Description KubeControllerManager provides information to configure an operator to manage kube-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.18. KubeScheduler [operator.openshift.io/v1] Description KubeScheduler provides information to configure an operator to manage scheduler. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.19. KubeStorageVersionMigrator [operator.openshift.io/v1] Description KubeStorageVersionMigrator provides information to configure an operator to manage kube-storage-version-migrator. 
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.20. MachineConfiguration [operator.openshift.io/v1] Description MachineConfiguration provides information to configure an operator to manage Machine Configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.21. Network [operator.openshift.io/v1] Description Network describes the cluster's desired network configuration. It is consumed by the cluster-network-operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.22. OpenShiftAPIServer [operator.openshift.io/v1] Description OpenShiftAPIServer provides information to configure an operator to manage openshift-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.23. OpenShiftControllerManager [operator.openshift.io/v1] Description OpenShiftControllerManager provides information to configure an operator to manage openshift-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.24. OperatorPKI [network.operator.openshift.io/v1] Description OperatorPKI is a simple certificate authority. It is not intended for external use - rather, it is internal to the network operator. The CNO creates a CA and a certificate signed by that CA. The certificate has both ClientAuth and ServerAuth extended usages enabled. A Secret called <name>-ca with two data keys: tls.key - the private key tls.crt - the CA certificate A ConfigMap called <name>-ca with a single data key: cabundle.crt - the CA certificate(s) A Secret called <name>-cert with two data keys: tls.key - the private key tls.crt - the certificate, signed by the CA The CA certificate will have a validity of 10 years, rotated after 9. The target certificate will have a validity of 6 months, rotated after 3 The CA certificate will have a CommonName of "<namespace>_<name>-ca@<timestamp>", where <timestamp> is the last rotation time. Type object 1.25. ServiceCA [operator.openshift.io/v1] Description ServiceCA provides information to configure an operator to manage the service cert controllers Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.26. Storage [operator.openshift.io/v1] Description Storage provides a means to configure an operator to manage the cluster storage operator. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object
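If you want to explore these operator APIs on a running cluster, the following generic commands are one possible way to do so. This is an illustrative sketch rather than part of the API reference; it assumes you are logged in with cluster-admin privileges and that the singleton operator resources are named cluster, which is the usual convention for these types.

$ oc api-resources --api-group=operator.openshift.io         # list the resource types in the operator.openshift.io group
$ oc explain consoles --api-version=operator.openshift.io/v1 # show the field documentation for the Console operator resource
$ oc get console.operator.openshift.io cluster -o yaml       # inspect the cluster singleton instance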
[ "More specifically, given an OperatorPKI with <name>, the CNO will manage:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/operator-apis
Chapter 3. Using the heat service for autoscaling
Chapter 3. Using the heat service for autoscaling After you deploy the services required to provide autoscaling in the overcloud, you must configure the overcloud environment so that the Orchestration service (heat) can manage instances for autoscaling. Prerequisites A deployed overcloud. For more information, see Section 2.2, "Deploying the overcloud for autoscaling" . Procedure Section 3.1, "Creating the generic archive policy for autoscaling" Section 3.2, "Configuring a heat template for automatically scaling instances" Section 3.3, "Preparing the standalone deployment for autoscaling" Section 3.4, "Creating the stack deployment for autoscaling" 3.1. Creating the generic archive policy for autoscaling After you deploy the services for autoscaling in the overcloud, you must configure the overcloud environment so that the Orchestration service (heat) can manage the instances for autoscaling. Prerequisites You have deployed an overcloud that has autoscaling services. For more information, see Section 2.1, "Configuring the overcloud for autoscaling" . Procedure Log in to your environment as the stack user. For standalone environments, set the OS_CLOUD environment variable: [stack@standalone ~]USD export OS_CLOUD=standalone For director environments source the stackrc file: [stack@undercloud ~]USD source ~/stackrc Create the archive policy defined in USDHOME/templates/autoscaling/parameters-autoscaling.yaml : USD openstack metric archive-policy create generic \ --back-window 0 \ --definition timespan:'4:00:00',granularity:'0:01:00',points:240 \ --aggregation-method 'rate:mean' \ --aggregation-method 'mean' Verification Verify that the archive policy was created: USD openstack metric archive-policy show generic +---------------------+--------------------------------------------------------+ | Field | Value | +---------------------+--------------------------------------------------------+ | aggregation_methods | mean, rate:mean | | back_window | 0 | | definition | - timespan: 4:00:00, granularity: 0:01:00, points: 240 | | name | generic | +---------------------+--------------------------------------------------------+ 3.2. Configuring a heat template for automatically scaling instances You can configure an Orchestration service (heat) template to create the instances, and configure alarms that create and scale instances when triggered. Note This procedure uses example values that you must change to suit your environment. Prerequisites You have deployed the overcloud with the autoscaling services. For more information, see Section 2.2, "Deploying the overcloud for autoscaling" . You have configured the overcloud with an archive policy for autoscaling telemetry storage. For more information, see Section 3.1, "Creating the generic archive policy for autoscaling" . Procedure Log in to your environment as the stack user. USD source ~/stackrc Create a directory to hold the instance configuration for the autoscaling group: USD mkdir -p USDHOME/templates/autoscaling/vnf/ Create an instance configuration template, for example, USDHOME/templates/autoscaling/vnf/instance.yaml . 
Add the following configuration to your instance.yaml file: cat <<EOF > USDHOME/templates/autoscaling/vnf/instance.yaml heat_template_version: wallaby description: Template to control scaling of VNF instance parameters: metadata: type: json image: type: string description: image used to create instance default: fedora36 flavor: type: string description: instance flavor to be used default: m1.small key_name: type: string description: keypair to be used default: default network: type: string description: project network to attach instance to default: private external_network: type: string description: network used for floating IPs default: public resources: vnf: type: OS::Nova::Server properties: flavor: {get_param: flavor} key_name: {get_param: key_name} image: { get_param: image } metadata: { get_param: metadata } networks: - port: { get_resource: port } port: type: OS::Neutron::Port properties: network: {get_param: network} security_groups: - basic floating_ip: type: OS::Neutron::FloatingIP properties: floating_network: {get_param: external_network } floating_ip_assoc: type: OS::Neutron::FloatingIPAssociation properties: floatingip_id: { get_resource: floating_ip } port_id: { get_resource: port } EOF The parameters parameter defines the custom parameters for this new resource. The vnf sub-parameter of the resources parameter defines the name of the custom sub-resource referred to in the OS::Heat::AutoScalingGroup , for example, OS::Nova::Server::VNF . Create the resource to reference in the heat template: USD cat <<EOF > USDHOME/templates/autoscaling/vnf/resources.yaml resource_registry: "OS::Nova::Server::VNF": USDHOME/templates/autoscaling/vnf/instance.yaml EOF Create the deployment template for heat to control instance scaling: USD cat <<EOF > USDHOME/templates/autoscaling/vnf/template.yaml heat_template_version: wallaby description: Example auto scale group, policy and alarm resources: scaleup_group: type: OS::Heat::AutoScalingGroup properties: max_size: 3 min_size: 1 #desired_capacity: 1 resource: type: OS::Nova::Server::VNF properties: metadata: {"metering.server_group": {get_param: "OS::stack_id"}} scaleup_policy: type: OS::Heat::ScalingPolicy properties: adjustment_type: change_in_capacity auto_scaling_group_id: { get_resource: scaleup_group } cooldown: 60 scaling_adjustment: 1 scaledown_policy: type: OS::Heat::ScalingPolicy properties: adjustment_type: change_in_capacity auto_scaling_group_id: { get_resource: scaleup_group } cooldown: 60 scaling_adjustment: -1 cpu_alarm_high: type: OS::Aodh::GnocchiAggregationByResourcesAlarm properties: description: Scale up instance if CPU > 50% metric: cpu aggregation_method: rate:mean granularity: 60 evaluation_periods: 3 threshold: 60000000000.0 resource_type: instance comparison_operator: gt alarm_actions: - str_replace: template: trust+url params: url: {get_attr: [scaleup_policy, signal_url]} query: list_join: - '' - - {'=': {server_group: {get_param: "OS::stack_id"}}} cpu_alarm_low: type: OS::Aodh::GnocchiAggregationByResourcesAlarm properties: description: Scale down instance if CPU < 20% metric: cpu aggregation_method: rate:mean granularity: 60 evaluation_periods: 3 threshold: 24000000000.0 resource_type: instance comparison_operator: lt alarm_actions: - str_replace: template: trust+url params: url: {get_attr: [scaledown_policy, signal_url]} query: list_join: - '' - - {'=': {server_group: {get_param: "OS::stack_id"}}} outputs: scaleup_policy_signal_url: value: {get_attr: [scaleup_policy, alarm_url]} scaledown_policy_signal_url: value: 
{get_attr: [scaledown_policy, alarm_url]} EOF Note Outputs on the stack are informational and are not referenced in the ScalingPolicy or AutoScalingGroup. To view the outputs, use the openstack stack show <stack_name> command. 3.3. Preparing the standalone deployment for autoscaling To test the deployment of a stack for an autoscaled instance in a pre-production environment, you can deploy the stack by using a standalone deployment. You can use this procedure to test the deployment with a standalone environment. In a production environment, the deployment commands are different. Procedure Log in to your environment as the stack user. Set the OS_CLOUD environment variable: [stack@standalone ~]USD export OS_CLOUD=standalone Configure the cloud to allow deployment of a simulated VNF workload that uses the Fedora 36 cloud image with attached private and public network interfaces. This example is a working configuration that uses a standalone deployment: USD export GATEWAY=192.168.25.1 USD export STANDALONE_HOST=192.168.25.2 USD export PUBLIC_NETWORK_CIDR=192.168.25.0/24 USD export PRIVATE_NETWORK_CIDR=192.168.100.0/24 USD export PUBLIC_NET_START=192.168.25.3 USD export PUBLIC_NET_END=192.168.25.254 USD export DNS_SERVER=1.1.1.1 Create the flavor: USD openstack flavor create --ram 2048 --disk 10 --vcpu 2 --public m1.small Download and import the Fedora 36 x86_64 cloud image: USD curl -L 'https://download.fedoraproject.org/pub/fedora/linux/releases/36/Cloud/x86_64/images/Fedora-Cloud-Base-36-1.5.x86_64.qcow2' -o USDHOME/fedora36.qcow2 USD openstack image create fedora36 --container-format bare --disk-format qcow2 --public --file USDHOME/fedora36.qcow2 Generate and import the public key: USD ssh-keygen -f USDHOME/.ssh/id_rsa -q -N "" -t rsa -b 2048 USD openstack keypair create --public-key USDHOME/.ssh/id_rsa.pub default Create the basic security group that allows SSH, ICMP, and DNS protocols: USD openstack security group create basic USD openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0 USD openstack security group rule create --protocol icmp basic USD openstack security group rule create --protocol udp --dst-port 53:53 basic Create the external network (public): USD openstack network create --external --provider-physical-network datacentre --provider-network-type flat public Create the private network: USD openstack network create --internal private Create the subnets: USD openstack subnet create public-net \ --subnet-range USDPUBLIC_NETWORK_CIDR \ --no-dhcp \ --gateway USDGATEWAY \ --allocation-pool start=USDPUBLIC_NET_START,end=USDPUBLIC_NET_END \ --network public USD openstack subnet create private-net \ --subnet-range USDPRIVATE_NETWORK_CIDR \ --network private Create the router: USD openstack router create vrouter USD openstack router set vrouter --external-gateway public USD openstack router add subnet vrouter private-net Additional resources Standalone Deployment Guide . 3.4. Creating the stack deployment for autoscaling Create the stack deployment for the worked VNF autoscaling example. 
Prerequisites Section 3.2, "Configuring a heat template for automatically scaling instances" Procedure Create the stack: USD openstack stack create \ -t USDHOME/templates/autoscaling/vnf/template.yaml \ -e USDHOME/templates/autoscaling/vnf/resources.yaml \ vnf Verification Verify that the stack was created successfully: USD openstack stack show vnf -c id -c stack_status +--------------+--------------------------------------+ | Field | Value | +--------------+--------------------------------------+ | id | cb082cbd-535e-4779-84b0-98925e103f5e | | stack_status | CREATE_COMPLETE | +--------------+--------------------------------------+ Verify that the stack resources were created, including alarms, scaling policies, and the autoscaling group: USD export STACK_ID=USD(openstack stack show vnf -c id -f value) USD openstack stack resource list USDSTACK_ID +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+ | cpu_alarm_high | d72d2e0d-1888-4f89-b888-02174c48e463 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaleup_policy | 1c4446b7242e479090bef4b8075df9d4 | OS::Heat::ScalingPolicy | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | cpu_alarm_low | b9c04ef4-8b57-4730-af03-1a71c3885914 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaledown_policy | a5af7faf5a1344849c3425cb2c5f18db | OS::Heat::ScalingPolicy | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaleup_group | 9609f208-6d50-4b8f-836e-b0222dc1e0b1 | OS::Heat::AutoScalingGroup | CREATE_COMPLETE | 2022-10-06T23:08:37Z | +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+ Verify that an instance was launched by the stack creation: USD openstack server list --long | grep USDSTACK_ID | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7 | vn-dvaxcqb-6bqh2qd2fpif-hicmkm5dzjug-vnf-ywrydc5wqjjc | ACTIVE | None | Running | private=192.168.100.61, 192.168.25.99 | fedora36 | a6aa7b11-1b99-4c62-a43b-d0b7c77f4b72 | m1.small | 5cd46fec-50c2-43d5-89e8-ed3fa7660852 | nova | standalone-80.localdomain | metering.server_group='cb082cbd-535e-4779-84b0-98925e103f5e' | Verify that the alarms were created for the stack: List the alarm IDs. The state of the alarms might reside in the insufficient data state for a period of time. 
The minimal period of time is the polling interval of the data collection and data storage granularity setting: USD openstack alarm list +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+ | b9c04ef4-8b57-4730-af03-1a71c3885914 | gnocchi_aggregation_by_resources_threshold | vnf-cpu_alarm_low-pve5eal6ykst | alarm | low | True | | d72d2e0d-1888-4f89-b888-02174c48e463 | gnocchi_aggregation_by_resources_threshold | vnf-cpu_alarm_high-5xx7qvfsurxe | ok | low | True | +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+ List the resources for the stack and note the physical_resource_id values for the cpu_alarm_high and cpu_alarm_low resources. USD openstack stack resource list USDSTACK_ID +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+ | cpu_alarm_high | d72d2e0d-1888-4f89-b888-02174c48e463 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaleup_policy | 1c4446b7242e479090bef4b8075df9d4 | OS::Heat::ScalingPolicy | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | cpu_alarm_low | b9c04ef4-8b57-4730-af03-1a71c3885914 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaledown_policy | a5af7faf5a1344849c3425cb2c5f18db | OS::Heat::ScalingPolicy | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaleup_group | 9609f208-6d50-4b8f-836e-b0222dc1e0b1 | OS::Heat::AutoScalingGroup | CREATE_COMPLETE | 2022-10-06T23:08:37Z | +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+ The value of the physical_resource_id must match the alarm_id in the output of the openstack alarm list command. Verify that metric resources exist for the stack. 
Set the value of the server_group query to the stack ID: USD openstack metric resource search --sort-column launched_at -c id -c display_name -c launched_at -c deleted_at --type instance server_group="USDSTACK_ID" +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+ | id | display_name | launched_at | deleted_at | +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+ | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7 | vn-dvaxcqb-6bqh2qd2fpif-hicmkm5dzjug-vnf-ywrydc5wqjjc | 2022-10-06T23:09:28.496566+00:00 | None | +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+ Verify that measurements exist for the instance resources created through the stack: USD openstack metric aggregates --resource-type instance --sort-column timestamp '(metric cpu rate:mean)' server_group="USDSTACK_ID" +----------------------------------------------------+---------------------------+-------------+---------------+ | name | timestamp | granularity | value | +----------------------------------------------------+---------------------------+-------------+---------------+ | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:11:00+00:00 | 60.0 | 69470000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:12:00+00:00 | 60.0 | 81060000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:13:00+00:00 | 60.0 | 82840000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:14:00+00:00 | 60.0 | 66660000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:15:00+00:00 | 60.0 | 7360000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:16:00+00:00 | 60.0 | 3150000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:17:00+00:00 | 60.0 | 2760000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:18:00+00:00 | 60.0 | 3470000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:19:00+00:00 | 60.0 | 2770000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:20:00+00:00 | 60.0 | 2700000000.0 | +----------------------------------------------------+---------------------------+-------------+---------------+
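In addition to waiting for the CPU alarms to change state, you can exercise the scaling policies directly to confirm that the autoscaling group responds. The following commands are an illustrative sketch rather than part of the documented procedure; they assume the vnf stack from this example, the scaleup_policy and scaledown_policy resource names defined in template.yaml, and that the STACK_ID variable is still set.

$ openstack stack output show --all vnf                      # view the signal URLs exported by the stack outputs
$ openstack stack resource signal $STACK_ID scaleup_policy   # manually request one scale-up event
$ openstack server list --long | grep $STACK_ID              # confirm the number of instances in the group
$ openstack stack resource signal $STACK_ID scaledown_policy # manually request one scale-down event

Because both policies define a cooldown of 60 seconds, wait at least that long between signals before checking the instance count again.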
[ "[stack@standalone ~]USD export OS_CLOUD=standalone", "[stack@undercloud ~]USD source ~/stackrc", "openstack metric archive-policy create generic --back-window 0 --definition timespan:'4:00:00',granularity:'0:01:00',points:240 --aggregation-method 'rate:mean' --aggregation-method 'mean'", "openstack metric archive-policy show generic +---------------------+--------------------------------------------------------+ | Field | Value | +---------------------+--------------------------------------------------------+ | aggregation_methods | mean, rate:mean | | back_window | 0 | | definition | - timespan: 4:00:00, granularity: 0:01:00, points: 240 | | name | generic | +---------------------+--------------------------------------------------------+", "source ~/stackrc", "mkdir -p USDHOME/templates/autoscaling/vnf/", "cat <<EOF > USDHOME/templates/autoscaling/vnf/instance.yaml heat_template_version: wallaby description: Template to control scaling of VNF instance parameters: metadata: type: json image: type: string description: image used to create instance default: fedora36 flavor: type: string description: instance flavor to be used default: m1.small key_name: type: string description: keypair to be used default: default network: type: string description: project network to attach instance to default: private external_network: type: string description: network used for floating IPs default: public resources: vnf: type: OS::Nova::Server properties: flavor: {get_param: flavor} key_name: {get_param: key_name} image: { get_param: image } metadata: { get_param: metadata } networks: - port: { get_resource: port } port: type: OS::Neutron::Port properties: network: {get_param: network} security_groups: - basic floating_ip: type: OS::Neutron::FloatingIP properties: floating_network: {get_param: external_network } floating_ip_assoc: type: OS::Neutron::FloatingIPAssociation properties: floatingip_id: { get_resource: floating_ip } port_id: { get_resource: port } EOF", "cat <<EOF > USDHOME/templates/autoscaling/vnf/resources.yaml resource_registry: \"OS::Nova::Server::VNF\": USDHOME/templates/autoscaling/vnf/instance.yaml EOF", "cat <<EOF > USDHOME/templates/autoscaling/vnf/template.yaml heat_template_version: wallaby description: Example auto scale group, policy and alarm resources: scaleup_group: type: OS::Heat::AutoScalingGroup properties: max_size: 3 min_size: 1 #desired_capacity: 1 resource: type: OS::Nova::Server::VNF properties: metadata: {\"metering.server_group\": {get_param: \"OS::stack_id\"}} scaleup_policy: type: OS::Heat::ScalingPolicy properties: adjustment_type: change_in_capacity auto_scaling_group_id: { get_resource: scaleup_group } cooldown: 60 scaling_adjustment: 1 scaledown_policy: type: OS::Heat::ScalingPolicy properties: adjustment_type: change_in_capacity auto_scaling_group_id: { get_resource: scaleup_group } cooldown: 60 scaling_adjustment: -1 cpu_alarm_high: type: OS::Aodh::GnocchiAggregationByResourcesAlarm properties: description: Scale up instance if CPU > 50% metric: cpu aggregation_method: rate:mean granularity: 60 evaluation_periods: 3 threshold: 60000000000.0 resource_type: instance comparison_operator: gt alarm_actions: - str_replace: template: trust+url params: url: {get_attr: [scaleup_policy, signal_url]} query: list_join: - '' - - {'=': {server_group: {get_param: \"OS::stack_id\"}}} cpu_alarm_low: type: OS::Aodh::GnocchiAggregationByResourcesAlarm properties: description: Scale down instance if CPU < 20% metric: cpu aggregation_method: rate:mean granularity: 60 
evaluation_periods: 3 threshold: 24000000000.0 resource_type: instance comparison_operator: lt alarm_actions: - str_replace: template: trust+url params: url: {get_attr: [scaledown_policy, signal_url]} query: list_join: - '' - - {'=': {server_group: {get_param: \"OS::stack_id\"}}} outputs: scaleup_policy_signal_url: value: {get_attr: [scaleup_policy, alarm_url]} scaledown_policy_signal_url: value: {get_attr: [scaledown_policy, alarm_url]} EOF", "[stack@standalone ~]USD export OS_CLOUD=standalone", "export GATEWAY=192.168.25.1 export STANDALONE_HOST=192.168.25.2 export PUBLIC_NETWORK_CIDR=192.168.25.0/24 export PRIVATE_NETWORK_CIDR=192.168.100.0/24 export PUBLIC_NET_START=192.168.25.3 export PUBLIC_NET_END=192.168.25.254 export DNS_SERVER=1.1.1.1", "openstack flavor create --ram 2048 --disk 10 --vcpu 2 --public m1.small", "curl -L 'https://download.fedoraproject.org/pub/fedora/linux/releases/36/Cloud/x86_64/images/Fedora-Cloud-Base-36-1.5.x86_64.qcow2' -o USDHOME/fedora36.qcow2", "openstack image create fedora36 --container-format bare --disk-format qcow2 --public --file USDHOME/fedora36.qcow2", "ssh-keygen -f USDHOME/.ssh/id_rsa -q -N \"\" -t rsa -b 2048", "openstack keypair create --public-key USDHOME/.ssh/id_rsa.pub default", "openstack security group create basic", "openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0", "openstack security group rule create --protocol icmp basic", "openstack security group rule create --protocol udp --dst-port 53:53 basic", "openstack network create --external --provider-physical-network datacentre --provider-network-type flat public", "openstack network create --internal private", "openstack subnet create public-net --subnet-range USDPUBLIC_NETWORK_CIDR --no-dhcp --gateway USDGATEWAY --allocation-pool start=USDPUBLIC_NET_START,end=USDPUBLIC_NET_END --network public", "openstack subnet create private-net --subnet-range USDPRIVATE_NETWORK_CIDR --network private", "openstack router create vrouter", "openstack router set vrouter --external-gateway public", "openstack router add subnet vrouter private-net", "openstack stack create -t USDHOME/templates/autoscaling/vnf/template.yaml -e USDHOME/templates/autoscaling/vnf/resources.yaml vnf", "openstack stack show vnf -c id -c stack_status +--------------+--------------------------------------+ | Field | Value | +--------------+--------------------------------------+ | id | cb082cbd-535e-4779-84b0-98925e103f5e | | stack_status | CREATE_COMPLETE | +--------------+--------------------------------------+", "export STACK_ID=USD(openstack stack show vnf -c id -f value)", "openstack stack resource list USDSTACK_ID +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+ | cpu_alarm_high | d72d2e0d-1888-4f89-b888-02174c48e463 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaleup_policy | 1c4446b7242e479090bef4b8075df9d4 | OS::Heat::ScalingPolicy | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | cpu_alarm_low | b9c04ef4-8b57-4730-af03-1a71c3885914 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaledown_policy | a5af7faf5a1344849c3425cb2c5f18db | OS::Heat::ScalingPolicy 
| CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaleup_group | 9609f208-6d50-4b8f-836e-b0222dc1e0b1 | OS::Heat::AutoScalingGroup | CREATE_COMPLETE | 2022-10-06T23:08:37Z | +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+", "openstack server list --long | grep USDSTACK_ID | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7 | vn-dvaxcqb-6bqh2qd2fpif-hicmkm5dzjug-vnf-ywrydc5wqjjc | ACTIVE | None | Running | private=192.168.100.61, 192.168.25.99 | fedora36 | a6aa7b11-1b99-4c62-a43b-d0b7c77f4b72 | m1.small | 5cd46fec-50c2-43d5-89e8-ed3fa7660852 | nova | standalone-80.localdomain | metering.server_group='cb082cbd-535e-4779-84b0-98925e103f5e' |", "openstack alarm list +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+ | b9c04ef4-8b57-4730-af03-1a71c3885914 | gnocchi_aggregation_by_resources_threshold | vnf-cpu_alarm_low-pve5eal6ykst | alarm | low | True | | d72d2e0d-1888-4f89-b888-02174c48e463 | gnocchi_aggregation_by_resources_threshold | vnf-cpu_alarm_high-5xx7qvfsurxe | ok | low | True | +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+", "openstack stack resource list USDSTACK_ID +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+ | cpu_alarm_high | d72d2e0d-1888-4f89-b888-02174c48e463 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaleup_policy | 1c4446b7242e479090bef4b8075df9d4 | OS::Heat::ScalingPolicy | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | cpu_alarm_low | b9c04ef4-8b57-4730-af03-1a71c3885914 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaledown_policy | a5af7faf5a1344849c3425cb2c5f18db | OS::Heat::ScalingPolicy | CREATE_COMPLETE | 2022-10-06T23:08:37Z | | scaleup_group | 9609f208-6d50-4b8f-836e-b0222dc1e0b1 | OS::Heat::AutoScalingGroup | CREATE_COMPLETE | 2022-10-06T23:08:37Z | +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+", "openstack metric resource search --sort-column launched_at -c id -c display_name -c launched_at -c deleted_at --type instance server_group=\"USDSTACK_ID\" +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+ | id | display_name | launched_at | deleted_at | +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+ | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7 | vn-dvaxcqb-6bqh2qd2fpif-hicmkm5dzjug-vnf-ywrydc5wqjjc | 2022-10-06T23:09:28.496566+00:00 | None | 
+--------------------------------------+-------------------------------------------------------+----------------------------------+------------+", "openstack metric aggregates --resource-type instance --sort-column timestamp '(metric cpu rate:mean)' server_group=\"USDSTACK_ID\" +----------------------------------------------------+---------------------------+-------------+---------------+ | name | timestamp | granularity | value | +----------------------------------------------------+---------------------------+-------------+---------------+ | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:11:00+00:00 | 60.0 | 69470000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:12:00+00:00 | 60.0 | 81060000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:13:00+00:00 | 60.0 | 82840000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:14:00+00:00 | 60.0 | 66660000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:15:00+00:00 | 60.0 | 7360000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:16:00+00:00 | 60.0 | 3150000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:17:00+00:00 | 60.0 | 2760000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:18:00+00:00 | 60.0 | 3470000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:19:00+00:00 | 60.0 | 2770000000.0 | | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:20:00+00:00 | 60.0 | 2700000000.0 | +----------------------------------------------------+---------------------------+-------------+---------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/autoscaling_for_instances/assembly-using-the-heat-service-for-autoscaling_assembly-using-the-heat-service-for-autoscaling
Appendix B. Revision history
Appendix B. Revision history 0.1-5 Thu Jan 30 2025, Gabriela Fialova ( [email protected] ) Added a Known Issue Jira:RHELDOCS-19603 (IdM SSSD) 0.1-4 Wed Dec 4 2024, Gabriela Fialova ( [email protected] ) Updated the Customer Portal labs section Updated the Installation section 0.1-3 Fri August 9 2024, Brian Angelica ( [email protected] ) Added a Known Issue RHEL-11397 (Installer and image creation) 0.1-2 Fri June 7 2024, Brian Angelica ( [email protected] ) Updated a Known Issue in Jira:RHELDOCS-17954 (Red Hat Enterprise Linux System Roles). 0.1-1 Fri May 10 2024, Brian Angelica ( [email protected] ) Updated Tech Preview in BZ#1690207 . 0.1-0 Thu May 9 2024, Gabriela Fialova ( [email protected] ) Updated a known issue BZ#1730502 (Storage). 0.0-9 Mon April 29 2024, Gabriela Fialova ( [email protected] ) Added an enhancement BZ#2093355 (Security). 0.0-8 Mon March 4 2024, Lucie Varakova ( [email protected] ) Added a bug fix Jira:SSSD-6096 (Identity Management). 0.0-7 Thu February 29 2024, Lucie Varakova ( [email protected] ) Added a deprecated functionality Jira:RHELDOCS-17641 (Networking). 0.0-6 Tue February 13 2024, Lucie Varakova ( [email protected] ) Added a deprecated functionality Jira:RHELDOCS-17573 (Identity Management). 0.0-5 Fri February 2 2024, Lucie Varakova ( [email protected] ) Added a known issue BZ#1834716 (Security). Updated the Jira:RHELDOCS-16755 deprecated functionality note (Containers). 0.0-4 Fri January 19 2024, Lucie Varakova ( [email protected] ) Added an enhancement related to Python Jira:RHELDOCS-17369 (Dynamic programming languages, web and database servers). Added an enhancement Jira:RHELDOCS-16367 (The web console). 0.0-3 Wed January 10 2024, Lucie Varakova ( [email protected] ) Added a rebase BZ#2196425 (Identity Management). Updated the Jira:RHELPLAN-156196 new feature description (Supportability). Added deprecated functionality Jira:RHELDOCS-17380 (Security). Other minor updates. 0.0-2 Thu November 16 2023, Lenka Spackova ( [email protected] ) Node.js 20 is now fully supported ( BZ#2186718 ). 0.0-1 Wed November 15 2023, Lucie Varakova ( [email protected] ) Release of the Red Hat Enterprise Linux 8.9 Release Notes. 0.0-0 Wed September 27 2023, Lucie Varakova ( [email protected] ) Release of the Red Hat Enterprise Linux 8.9 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.9_release_notes/revision_history
probe::scheduler.migrate
probe::scheduler.migrate Name probe::scheduler.migrate - Task migrating across cpus Synopsis scheduler.migrate Values priority priority of the task being migrated cpu_to the destination cpu cpu_from the original cpu task the process that is being migrated name name of the probe point pid PID of the task being migrated
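As an illustration of how this probe point can be used (this example is not part of the reference entry), the following one-liner prints the listed values each time a task migrates. It assumes the systemtap package is installed and uses the standard task_execname() tapset function to resolve the process name from the task value.

stap -e 'probe scheduler.migrate {
  printf("%s (pid %d, prio %d) migrated from cpu %d to cpu %d\n",
         task_execname(task), pid, priority, cpu_from, cpu_to)
}'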
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scheduler-migrate
About
About Red Hat OpenShift Service on AWS 4 OpenShift Service on AWS Documentation. Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/about/index
Chapter 5. AlertingRule [monitoring.openshift.io/v1]
Chapter 5. AlertingRule [monitoring.openshift.io/v1] Description AlertingRule represents a set of user-defined Prometheus rule groups containing alerting rules. This resource is the supported method for cluster admins to create alerts based on metrics recorded by the platform monitoring stack in OpenShift, i.e. the Prometheus instance deployed to the openshift-monitoring namespace. You might use this to create custom alerting rules not shipped with OpenShift based on metrics from components such as the node_exporter, which provides machine-level metrics such as CPU usage, or kube-state-metrics, which provides metrics on Kubernetes usage. The API is mostly compatible with the upstream PrometheusRule type from the prometheus-operator. The primary difference being that recording rules are not allowed here - only alerting rules. For each AlertingRule resource created, a corresponding PrometheusRule will be created in the openshift-monitoring namespace. OpenShift requires admins to use the AlertingRule resource rather than the upstream type in order to allow better OpenShift specific defaulting and validation, while not modifying the upstream APIs directly. You can find upstream API documentation for PrometheusRule resources here: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec describes the desired state of this AlertingRule object. status object status describes the current state of this AlertOverrides object. 5.1.1. .spec Description spec describes the desired state of this AlertingRule object. Type object Required groups Property Type Description groups array groups is a list of grouped alerting rules. Rule groups are the unit at which Prometheus parallelizes rule processing. All rules in a single group share a configured evaluation interval. All rules in the group will be processed together on this interval, sequentially, and all rules will be processed. It's common to group related alerting rules into a single AlertingRule resources, and within that resource, closely related alerts, or simply alerts with the same interval, into individual groups. You are also free to create AlertingRule resources with only a single rule group, but be aware that this can have a performance impact on Prometheus if the group is extremely large or has very complex query expressions to evaluate. Spreading very complex rules across multiple groups to allow them to be processed in parallel is also a common use-case. groups[] object RuleGroup is a list of sequentially evaluated alerting rules. 5.1.2. 
.spec.groups Description groups is a list of grouped alerting rules. Rule groups are the unit at which Prometheus parallelizes rule processing. All rules in a single group share a configured evaluation interval. All rules in the group will be processed together on this interval, sequentially, and all rules will be processed. It's common to group related alerting rules into a single AlertingRule resources, and within that resource, closely related alerts, or simply alerts with the same interval, into individual groups. You are also free to create AlertingRule resources with only a single rule group, but be aware that this can have a performance impact on Prometheus if the group is extremely large or has very complex query expressions to evaluate. Spreading very complex rules across multiple groups to allow them to be processed in parallel is also a common use-case. Type array 5.1.3. .spec.groups[] Description RuleGroup is a list of sequentially evaluated alerting rules. Type object Required name rules Property Type Description interval string interval is how often rules in the group are evaluated. If not specified, it defaults to the global.evaluation_interval configured in Prometheus, which itself defaults to 30 seconds. You can check if this value has been modified from the default on your cluster by inspecting the platform Prometheus configuration: The relevant field in that resource is: spec.evaluationInterval name string name is the name of the group. rules array rules is a list of sequentially evaluated alerting rules. Prometheus may process rule groups in parallel, but rules within a single group are always processed sequentially, and all rules are processed. rules[] object Rule describes an alerting rule. See Prometheus documentation: - https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules 5.1.4. .spec.groups[].rules Description rules is a list of sequentially evaluated alerting rules. Prometheus may process rule groups in parallel, but rules within a single group are always processed sequentially, and all rules are processed. Type array 5.1.5. .spec.groups[].rules[] Description Rule describes an alerting rule. See Prometheus documentation: - https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules Type object Required alert expr Property Type Description alert string alert is the name of the alert. Must be a valid label value, i.e. may contain any Unicode character. annotations object (string) annotations to add to each alert. These are values that can be used to store longer additional information that you won't query on, such as alert descriptions or runbook links. expr integer-or-string expr is the PromQL expression to evaluate. Every evaluation cycle this is evaluated at the current time, and all resultant time series become pending or firing alerts. This is most often a string representing a PromQL expression, e.g.: mapi_current_pending_csr > mapi_max_pending_csr In rare cases this could be a simple integer, e.g. a simple "1" if the intent is to create an alert that is always firing. This is sometimes used to create an always-firing "Watchdog" alert in order to ensure the alerting pipeline is functional. for string for is the time period after which alerts are considered firing after first returning results. Alerts which have not yet fired for long enough are considered pending. labels object (string) labels to add or overwrite for each alert. 
The results of the PromQL expression for the alert will result in an existing set of labels for the alert, after evaluating the expression, for any label specified here with the same name as a label in that set, the label here wins and overwrites the value. These should typically be short identifying values that may be useful to query against. A common example is the alert severity, where one sets severity: warning under the labels key: 5.1.6. .status Description status describes the current state of this AlertOverrides object. Type object Property Type Description observedGeneration integer observedGeneration is the last generation change you've dealt with. prometheusRule object prometheusRule is the generated PrometheusRule for this AlertingRule. Each AlertingRule instance results in a generated PrometheusRule object in the same namespace, which is always the openshift-monitoring namespace. 5.1.7. .status.prometheusRule Description prometheusRule is the generated PrometheusRule for this AlertingRule. Each AlertingRule instance results in a generated PrometheusRule object in the same namespace, which is always the openshift-monitoring namespace. Type object Required name Property Type Description name string name of the referenced PrometheusRule. 5.2. API endpoints The following API endpoints are available: /apis/monitoring.openshift.io/v1/alertingrules GET : list objects of kind AlertingRule /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules DELETE : delete collection of AlertingRule GET : list objects of kind AlertingRule POST : create an AlertingRule /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name} DELETE : delete an AlertingRule GET : read the specified AlertingRule PATCH : partially update the specified AlertingRule PUT : replace the specified AlertingRule /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name}/status GET : read status of the specified AlertingRule PATCH : partially update status of the specified AlertingRule PUT : replace status of the specified AlertingRule 5.2.1. /apis/monitoring.openshift.io/v1/alertingrules HTTP method GET Description list objects of kind AlertingRule Table 5.1. HTTP responses HTTP code Reponse body 200 - OK AlertingRuleList schema 401 - Unauthorized Empty 5.2.2. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules HTTP method DELETE Description delete collection of AlertingRule Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AlertingRule Table 5.3. HTTP responses HTTP code Reponse body 200 - OK AlertingRuleList schema 401 - Unauthorized Empty HTTP method POST Description create an AlertingRule Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body AlertingRule schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 201 - Created AlertingRule schema 202 - Accepted AlertingRule schema 401 - Unauthorized Empty 5.2.3. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name} Table 5.7. Global path parameters Parameter Type Description name string name of the AlertingRule HTTP method DELETE Description delete an AlertingRule Table 5.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AlertingRule Table 5.10. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AlertingRule Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.12. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AlertingRule Table 5.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.14. Body parameters Parameter Type Description body AlertingRule schema Table 5.15. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 201 - Created AlertingRule schema 401 - Unauthorized Empty 5.2.4. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name}/status Table 5.16. Global path parameters Parameter Type Description name string name of the AlertingRule HTTP method GET Description read status of the specified AlertingRule Table 5.17. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified AlertingRule Table 5.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.19. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified AlertingRule Table 5.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.21. Body parameters Parameter Type Description body AlertingRule schema Table 5.22. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 201 - Created AlertingRule schema 401 - Unauthorized Empty
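As a concrete illustration of the create endpoint and of the labels behaviour described above, the following sketch posts a minimal AlertingRule that sets severity: warning under the labels key. This is a hedged example: the rule name, group name, and PromQL expression are illustrative placeholders, not values taken from this reference; only the API group, version, kind, and the openshift-monitoring namespace requirement come from the schema above.

oc apply -f - <<EOF
apiVersion: monitoring.openshift.io/v1
kind: AlertingRule
metadata:
  name: example-alerting-rule        # illustrative name
  namespace: openshift-monitoring    # generated PrometheusRule objects always live in this namespace
spec:
  groups:
  - name: example-group              # illustrative group name
    rules:
    - alert: ExampleNodeHighLoad     # illustrative alert name
      expr: instance:node_load1_per_cpu:ratio > 0.9   # illustrative PromQL expression
      for: 15m
      labels:
        severity: warning            # merged over the labels produced by evaluating the expression
EOF

After creation, the name of the generated PrometheusRule can be read back from the status subresource, for example with oc get alertingrules.monitoring.openshift.io example-alerting-rule -n openshift-monitoring -o jsonpath='{.status.prometheusRule.name}'.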
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/monitoring_apis/alertingrule-monitoring-openshift-io-v1
Chapter 14. Fixed issues in Red Hat Decision Manager 7.13.4
Chapter 14. Fixed issues in Red Hat Decision Manager 7.13.4 Red Hat Decision Manager 7.13.4 provides increased stability and fixed issues listed in this section. 14.1. Business Central Dashbuilder does not support the type MILLISECOND [ RHPAM-4659 ] Standalone Business Central does not start on Red Hat Enterprise Linux (RHEL) [ RHPAM-4715 ] Data filter is not working as expected [ RHPAM-4664 ] 14.2. KIE Server The ClassNotFoundException occurs when launching a business application with the command mvn spring-boot:run [ RHDM-1984 ] Behavior of the ClassCastException exception changed after RHPAM 7.13.0 for empty values in the request [ RHPAM-4725 ] The NoSuchMethodError occurs when retrieving Kie server information on SpringBoot [ RHPAM-4714 ] The productized cxf-rt-bindings-soap dependency in kie-camel is invalid [ RHPAM-4683 ] The NullPointerException (NPE) occurs on TupleSetsImpl.setNextTuple with SubnetworkTuple [ RHDM-1968 ] 14.3. Red Hat OpenShift Container Platform Upgrade JBoss Enterprise Application Server to 7.4.12 on RHPAM and BAMOE images [ RHPAM-4762 ] Unable to set direct-verification=true individually in LDAP realm by operator [ RHPAM-4754 ] Unable to connect to an external PostgreSQL database over SSL from kie-server on OpenShift Container Platform [ RHPAM-4740 ] Do not set URL envs if the jdbcUrl property is not set [ RHPAM-4713 ] Legacy datasource scripts do not consider XA properties for the mariadb driver [ RHPAM-4712 ] Correctly set the XA Connection URL property [ RHPAM-4711 ] KIE Server configMap points to SSL routes when SSL is disabled [ RHPAM-4709 ] Can't login into Business Central without SSL configured [ RHPAM-4705 ] NoSuchMethodException : Method setURL not found [ RHPAM-4704 ] The pom.xml file in rhpam-7-openshift-image/quickstarts/router-ext contains the wrong version [ RHPAM-4682 ] 14.4. Decision engine Unnecessary warning message appears when executing a DRL file [ RHPAM-4758 ] The NullPointerException error occurs in MemoryFileSystem when kbase.name is empty in kmodule.xml [ RHPAM-4755 ] With a non-executable model and the mvel dialect, when the modify-block is placed inside a block such as the if-block in RHS, the modify does not work correctly [ RHDM-1985 ] After upgrading to 7.13.2, rules fire incorrectly when BigDecimal equality is involved in a pattern [ RHDM-1974 ] The executable model doesn't resolve bind variables from another pattern of the same type in method call in LHS for property reactivity [ RHDM-1969 ] The executable model doesn't resolve bind variables in a method call in LHS for property reactivity [ RHDM-1967 ]
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/release_notes_for_red_hat_decision_manager_7.13/rn-7.14.3-fixed-issues-ref_release-notes
Chapter 5. Observability UI plugins
Chapter 5. Observability UI plugins 5.1. Observability UI plugins overview You can use the Cluster Observability Operator (COO) to install and manage UI plugins to enhance the observability capabilities of the OpenShift Container Platform web console. The plugins extend the default functionality, providing new UI features for troubleshooting, distributed tracing, and cluster logging. 5.1.1. Cluster logging The logging UI plugin surfaces logging data in the web console on the Observe Logs page. You can specify filters, queries, time ranges and refresh rates. The results are displayed as a list of collapsed logs, which can then be expanded to show more detailed information for each log. For more information, see the logging UI plugin page. 5.1.2. Troubleshooting Important The Cluster Observability Operator troubleshooting panel UI plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The troubleshooting panel UI plugin for OpenShift Container Platform version 4.16+ provides observability signal correlation, powered by the open source Korrel8r project. You can use the troubleshooting panel available from the Observe Alerting page to easily correlate metrics, logs, alerts, netflows, and additional observability signals and resources, across different data stores. Users of OpenShift Container Platform version 4.17+ can also access the troubleshooting UI panel from the Application Launcher . The output of Korrel8r is displayed as an interactive node graph. When you click on a node, you are automatically redirected to the corresponding web console page with the specific information for that node, for example, metric, log, or pod. For more information, see the troubleshooting UI plugin page. 5.1.3. Distributed tracing Important The Cluster Observability Operator distributed tracing UI plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The distributed tracing UI plugin adds tracing-related features to the web console on the Observe Traces page. You can follow requests through the front end and into the backend of microservices, helping you identify code errors and performance bottlenecks in distributed systems. You can select a supported TempoStack or TempoMonolithic multi-tenant instance running in the cluster and set a time range and query to view the trace data. For more information, see the distributed tracing UI plugin page. 5.2. Logging UI plugin The logging UI plugin surfaces logging data in the OpenShift Container Platform web console on the Observe Logs page.
You can specify filters, queries, time ranges and refresh rates, with the results displayed as a list of collapsed logs, which can then be expanded to show more detailed information for each log. When you have also deployed the Troubleshooting UI plugin on OpenShift Container Platform version 4.16+, it connects to the Korrel8r service and adds direct links from the Administration perspective, from the Observe Logs page, to the Observe Metrics page with a correlated PromQL query. It also adds a See Related Logs link from the Administration perspective alerting detail page, at Observe Alerting , to the Observe Logs page with a correlated filter set selected. The features of the plugin are categorized as: dev-console Adds the logging view to the Developer perspective. alerts Merges the web console alerts with log-based alerts defined in the Loki ruler. Adds a log-based metrics chart in the alert detail view. dev-alerts Merges the web console alerts with log-based alerts defined in the Loki ruler. Adds a log-based metrics chart in the alert detail view for the Developer perspective. For Cluster Observability Operator (COO) versions, the support for these features in OpenShift Container Platform versions is shown in the following table: COO version OCP versions Features 0.3.0+ 4.12 dev-console 0.3.0+ 4.13 dev-console , alerts 0.3.0+ 4.14+ dev-console , alerts , dev-alerts 5.2.1. Installing the Cluster Observability Operator logging UI plugin Prerequisites You have access to the cluster as a user with the cluster-admin role. You have logged in to the OpenShift Container Platform web console. You have installed the Cluster Observability Operator. You have a LokiStack instance in your cluster. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators and select Cluster Observability Operator. Choose the UI Plugin tab (at the far right of the tab list) and click Create UIPlugin . Select YAML view , enter the following content, and then click Create : apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki logsLimit: 50 timeout: 30s 5.3. Distributed tracing UI plugin Important The Cluster Observability Operator distributed tracing UI plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The distributed tracing UI plugin adds tracing-related features to the Administrator perspective of the OpenShift web console at Observe Traces . You can follow requests through the front end and into the backend of microservices, helping you identify code errors and performance bottlenecks in distributed systems. 5.3.1. Installing the Cluster Observability Operator distributed tracing UI plugin Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. 
You have installed the Cluster Observability Operator Procedure In the OpenShift Container Platform web console, click Operators Installed Operators and select Cluster Observability Operator Choose the UI Plugin tab (at the far right of the tab list) and press Create UIPlugin Select YAML view , enter the following content, and then press Create : apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: distributed-tracing spec: type: DistributedTracing 5.3.2. Using the Cluster Observability Operator distributed tracing UI plugin Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. You have installed the Cluster Observability Operator. You have installed the Cluster Observability Operator distributed tracing UI plugin. You have a TempoStack or TempoMonolithic multi-tenant instance in the cluster. Procedure In the Administrator perspective of the OpenShift Container Platform web console, click Observe Traces . Select a TempoStack or TempoMonolithic multi-tenant instance and set a time range and query for the traces to be loaded. The traces are displayed on a scatter-plot showing the trace start time, duration, and number of spans. Underneath the scatter plot, there is a list of traces showing information such as the Trace Name , number of Spans , and Duration . Click on a trace name link. The trace detail page for the selected trace contains a Gantt Chart of all of the spans within the trace. Select a span to show a breakdown of the configured attributes. 5.4. Troubleshooting UI plugin Important The Cluster Observability Operator troubleshooting panel UI plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The troubleshooting UI plugin for OpenShift Container Platform version 4.16+ provides observability signal correlation, powered by the open source Korrel8r project. With the troubleshooting panel that is available under Observe Alerting , you can easily correlate metrics, logs, alerts, netflows, and additional observability signals and resources, across different data stores. Users of OpenShift Container Platform version 4.17+ can also access the troubleshooting UI panel from the Application Launcher . When you install the troubleshooting UI plugin, a Korrel8r service named korrel8r is deployed in the same namespace, and it is able to locate related observability signals and Kubernetes resources from its correlation engine. The output of Korrel8r is displayed in the form of an interactive node graph in the OpenShift Container Platform web console. Nodes in the graph represent a type of resource or signal, while edges represent relationships. When you click on a node, you are automatically redirected to the corresponding web console page with the specific information for that node, for example, metric, log, pod. 5.4.1. Installing the Cluster Observability Operator Troubleshooting UI plugin Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. 
You have logged in to the OpenShift Container Platform web console. You have installed the Cluster Observability Operator Procedure In the OpenShift Container Platform web console, click Operators Installed Operators and select Cluster Observability Operator Choose the UI Plugin tab (at the far right of the tab list) and press Create UIPlugin Select YAML view , enter the following content, and then press Create : apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: troubleshooting-panel spec: type: TroubleshootingPanel 5.4.2. Using the Cluster Observability Operator troubleshooting UI plugin Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin cluster role. If your cluster version is 4.17+, you can access the troubleshooting UI panel from the Application Launcher . You have logged in to the OpenShift Container Platform web console. You have installed OpenShift Container Platform Logging, if you want to visualize correlated logs. You have installed OpenShift Container Platform Network Observability, if you want to visualize correlated netflows. You have installed the Cluster Observability Operator. You have installed the Cluster Observability Operator troubleshooting UI plugin. Note The troubleshooting panel relies on the observability signal stores installed in your cluster. Kubernetes resources, alerts and metrics are always available by default in an OpenShift Container Platform cluster. Other signal types require optional components to be installed: Logs: Red Hat OpenShift Logging (collection) and Loki Operator provided by Red Hat (store) Network events: Network observability provided by Red Hat (collection) and Loki Operator provided by Red Hat (store) Procedure In the Administrator perspective of the web console, navigate to Observe Alerting and then select an alert. If the alert has correlated items, a Troubleshooting Panel link will appear above the chart on the alert detail page. Click on the Troubleshooting Panel link to display the panel. The panel consists of query details and a topology graph of the query results. The selected alert is converted into a Korrel8r query string and sent to the korrel8r service. The results are displayed as a graph network connecting the returned signals and resources. This is a neighbourhood graph, starting at the current resource and including related objects up to 3 steps away from the starting point. Clicking on nodes in the graph takes you to the corresponding web console pages for those resources. You can use the troubleshooting panel to find resources relating to the chosen alert. Note Clicking on a node may sometimes show fewer results than indicated on the graph. This is a known issue that will be addressed in a future release. Alert (1): This node is the starting point in the graph and represents the KubeContainerWaiting alert displayed in the web console. Pod (1): This node indicates that there is a single Pod resource associated with this alert. Clicking on this node will open a console search showing the related pod directly. Event (2): There are two Kubernetes events associated with the pod. Click this node to see the events. Logs (74): This pod has 74 lines of logs, which you can access by clicking on this node. Metrics (105): There are many metrics associated with the pod. Network (6): There are network events, meaning the pod has communicated over the network.
The remaining nodes in the graph represent the Service , Deployment and DaemonSet resources that the pod has communicated with. Focus: Clicking this button updates the graph. By default, the graph itself does not change when you click on nodes in the graph. Instead, the main web console page changes, and you can then navigate to other resources using links on the page, while the troubleshooting panel itself stays open and unchanged. To force an update to the graph in the troubleshooting panel, click Focus . This draws a new graph, using the current resource in the web console as the starting point. Show Query: Clicking this button enables some experimental features: Hide Query hides the experimental features. The query that identifies the starting point for the graph. The query language, part of the Korrel8r correlation engine used to create the graphs, is experimental and may change in the future. The query is updated by the Focus button to correspond to the resources in the main web console window. Neighbourhood depth is used to display a smaller or larger neighbourhood. Note Setting a large value in a large cluster might cause the query to fail, if the number of results is too big. Goal class results in a goal-directed search instead of a neighbourhood search. A goal-directed search shows all paths from the starting point to the goal class, which indicates a type of resource or signal. The format of the goal class is experimental and may change. Currently, the following goals are valid: k8s: RESOURCE[VERSION.[GROUP]] identifying a kind of Kubernetes resource. For example k8s:Pod or k8s:Deployment.apps.v1 . alert:alert representing any alert. metric:metric representing any metric. netflow:network representing any network observability network event. log: LOG_TYPE representing stored logs, where LOG_TYPE must be one of application , infrastructure or audit . 5.4.3. Creating the example alert To trigger an alert as a starting point to use in the troubleshooting UI panel, you can deploy a container that is deliberately misconfigured. Procedure Use the following YAML, either from the command line or in the web console, to create a broken deployment in a system namespace: apiVersion: apps/v1 kind: Deployment metadata: name: bad-deployment namespace: default 1 spec: selector: matchLabels: app: bad-deployment template: metadata: labels: app: bad-deployment spec: containers: 2 - name: bad-deployment image: quay.io/openshift-logging/vector:5.8 1 The deployment must be in a system namespace (such as default ) to cause the desired alerts. 2 This container deliberately tries to start a vector server with no configuration file. The server logs a few messages, and then exits with an error. Alternatively, you can deploy any container you like that is badly configured, causing it to trigger an alert. View the alerts: Go to Observe Alerting and click clear all filters . View the Pending alerts. Important Alerts first appear in the Pending state. They do not start Firing until the container has been crashing for some time. By viewing Pending alerts, you do not have to wait as long to see them occur. Choose one of the KubeContainerWaiting , KubePodCrashLooping , or KubePodNotReady alerts and open the troubleshooting panel by clicking on the link. Alternatively, if the panel is already open, click the "Focus" button to update the graph.
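If you prefer to drive this example from the command line, the broken deployment above can be created and watched with oc. This is a minimal sketch and assumes the YAML shown above has been saved as bad-deployment.yaml; the file name is illustrative.

# Create the deliberately misconfigured deployment in the default namespace
oc apply -f bad-deployment.yaml

# Watch the pod fail and enter a Waiting or CrashLoopBackOff state, which feeds the example alerts
oc get pods -n default -l app=bad-deployment -w

Once one of the KubeContainerWaiting, KubePodCrashLooping, or KubePodNotReady alerts reaches the Pending state, continue with the web console steps above to open the troubleshooting panel.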
[ "apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki logsLimit: 50 timeout: 30s", "apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: distributed-tracing spec: type: DistributedTracing", "apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: troubleshooting-panel spec: type: TroubleshootingPanel", "apiVersion: apps/v1 kind: Deployment metadata: name: bad-deployment namespace: default 1 spec: selector: matchLabels: app: bad-deployment template: metadata: labels: app: bad-deployment spec: containers: 2 - name: bad-deployment image: quay.io/openshift-logging/vector:5.8" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/cluster_observability_operator/observability-ui-plugins
6.6. Resource Operations
6.6. Resource Operations To ensure that resources remain healthy, you can add a monitoring operation to a resource's definition. If you do not specify a monitoring operation for a resource, by default the pcs command will create a monitoring operation, with an interval that is determined by the resource agent. If the resource agent does not provide a default monitoring interval, the pcs command will create a monitoring operation with an interval of 60 seconds. Table 6.4, "Properties of an Operation" summarizes the properties of a resource monitoring operation. Table 6.4. Properties of an Operation Field Description id Unique name for the action. The system assigns this when you configure an operation. name The action to perform. Common values: monitor , start , stop interval If set to a nonzero value, a recurring operation is created that repeats at this frequency, in seconds. A nonzero value makes sense only when the action name is set to monitor . A recurring monitor action will be executed immediately after a resource start completes, and subsequent monitor actions are scheduled starting at the time the monitor action completed. For example, if a monitor action with interval=20s is executed at 01:00:00, the monitor action does not occur at 01:00:20, but at 20 seconds after the first monitor action completes. If set to zero, which is the default value, this parameter allows you to provide values to be used for operations created by the cluster. For example, if the interval is set to zero, the name of the operation is set to start , and the timeout value is set to 40, then Pacemaker will use a timeout of 40 seconds when starting this resource. A monitor operation with a zero interval allows you to set the timeout / on-fail / enabled values for the probes that Pacemaker does at startup to get the current status of all resources when the defaults are not desirable. timeout If the operation does not complete in the amount of time set by this parameter, abort the operation and consider it failed. The default value is the value of timeout if set with the pcs resource op defaults command, or 20 seconds if it is not set. If you find that your system includes a resource that requires more time than the system allows to perform an operation (such as start , stop , or monitor ), investigate the cause and if the lengthy execution time is expected you can increase this value. The timeout value is not a delay of any kind, nor does the cluster wait the entire timeout period if the operation returns before the timeout period has completed. on-fail The action to take if this action ever fails. Allowed values: * ignore - Pretend the resource did not fail * block - Do not perform any further operations on the resource * stop - Stop the resource and do not start it elsewhere * restart - Stop the resource and start it again (possibly on a different node) * fence - STONITH the node on which the resource failed * standby - Move all resources away from the node on which the resource failed The default for the stop operation is fence when STONITH is enabled and block otherwise. All other operations default to restart . enabled If false , the operation is treated as if it does not exist. Allowed values: true , false 6.6.1. Configuring Resource Operations You can configure monitoring operations when you create a resource, using the following command. For example, the following command creates an IPaddr2 resource with a monitoring operation. 
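The general syntax and the IPaddr2 example, as given in the command listing that accompanies this section, are:

pcs resource create resource_id standard:provider:type|type [resource_options] [op operation_action operation_options [operation_type operation_options]...]

pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s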
The new resource is called VirtualIP with an IP address of 192.168.0.99 and a netmask of 24 on eth2 . A monitoring operation will be performed every 30 seconds. Alternatively, you can add a monitoring operation to an existing resource with the following command. Use the following command to delete a configured resource operation. Note You must specify the exact operation properties to properly remove an existing operation. To change the values of a monitoring option, you can update the resource. For example, you can create a VirtualIP with the following command. By default, this command creates these operations. To change the timeout value of the stop operation, execute the following command. Note When you update a resource's operation with the pcs resource update command, any options you do not specifically call out are reset to their default values. 6.6.2. Configuring Global Resource Operation Defaults You can use the following command to set global default values for monitoring operations. For example, the following command sets a global default of a timeout value of 240 seconds for all monitoring operations. To display the currently configured default values for monitoring operations, do not specify any options when you execute the pcs resource op defaults command. For example, the following command displays the default monitoring operation values for a cluster which has been configured with a timeout value of 240 seconds. Note that a cluster resource will use the global default only when the option is not specified in the cluster resource definition. By default, resource agents define the timeout option for all operations. For the global operation timeout value to be honored, you must create the cluster resource without the timeout option explicitly or you must remove the timeout option by updating the cluster resource, as in the following command. For example, after setting a global default of a timeout value of 240 seconds for all monitoring operations and updating the cluster resource VirtualIP to remove the timeout value for the monitor operation, the resource VirtualIP will then have timeout values for start , stop , and monitor operations of 20s, 40s and 240s, respectively. The global default value for timeout operations is applied here only on the monitor operation, where the default timeout option was removed by the command.
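Putting the defaults behaviour together into one sequence, and reusing the VirtualIP resource from the example above, a condensed sketch of the commands listed for this section looks like the following; command output is omitted.

# Set a global default timeout of 240 seconds for operations
pcs resource op defaults timeout=240s

# Recreate the monitor operation without an explicit timeout so the global default applies
pcs resource update VirtualIP op monitor interval=10s

# Verify that the monitor operation no longer carries its own timeout and now inherits the 240s default
pcs resource show VirtualIP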
[ "pcs resource create resource_id standard:provider:type|type [ resource_options ] [op operation_action operation_options [ operation_type operation_options ]...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s", "pcs resource op add resource_id operation_action [ operation_properties ]", "pcs resource op remove resource_id operation_name operation_properties", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2", "Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) stop interval=0s timeout=20s (VirtualIP-stop-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)", "pcs resource update VirtualIP op stop interval=0s timeout=40s pcs resource show VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)", "pcs resource op defaults [ options ]", "pcs resource op defaults timeout=240s", "pcs resource op defaults timeout: 240s", "pcs resource update VirtualIP op monitor interval=10s", "pcs resource show VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-resourceoperate-HAAR
Chapter 28. Using snapshots on Stratis file systems
Chapter 28. Using snapshots on Stratis file systems You can use snapshots on Stratis file systems to capture file system state at arbitrary times and restore it in the future. 28.1. Characteristics of Stratis snapshots In Stratis, a snapshot is a regular Stratis file system created as a copy of another Stratis file system. The snapshot initially contains the same file content as the original file system, but can change as the snapshot is modified. Whatever changes you make to the snapshot will not be reflected in the original file system. The current snapshot implementation in Stratis is characterized by the following: A snapshot of a file system is another file system. A snapshot and its origin are not linked in lifetime. A snapshotted file system can live longer than the file system it was created from. A file system does not have to be mounted to create a snapshot from it. Each snapshot uses around half a gigabyte of actual backing storage, which is needed for the XFS log. 28.2. Creating a Stratis snapshot You can create a Stratis file system as a snapshot of an existing Stratis file system. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis file system. See Creating a Stratis file system . Procedure Create a Stratis snapshot: Additional resources stratis(8) man page on your system 28.3. Accessing the content of a Stratis snapshot You can mount a snapshot of a Stratis file system to make it accessible for read and write operations. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis snapshot. See Creating a Stratis snapshot . Procedure To access the snapshot, mount it as a regular file system from the /dev/stratis/ my-pool / directory: Additional resources Mounting a Stratis file system mount(8) man page on your system 28.4. Reverting a Stratis file system to a snapshot You can revert the content of a Stratis file system to the state captured in a Stratis snapshot. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis snapshot. See Creating a Stratis snapshot . Procedure Optional: Back up the current state of the file system to be able to access it later: Unmount and remove the original file system: Create a copy of the snapshot under the name of the original file system: Mount the snapshot, which is now accessible with the same name as the original file system: The content of the file system named my-fs is now identical to the snapshot my-fs-snapshot . Additional resources stratis(8) man page on your system 28.5. Removing a Stratis snapshot You can remove a Stratis snapshot from a pool. Data on the snapshot are lost. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis snapshot. See Creating a Stratis snapshot . Procedure Unmount the snapshot: Destroy the snapshot: Additional resources stratis(8) man page on your system
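A condensed end-to-end sketch of the procedures in this chapter, using the my-pool pool and my-fs file system from the examples above; mount-point is a placeholder for your own mount directory.

# Create a snapshot of my-fs
stratis fs snapshot my-pool my-fs my-fs-snapshot

# Mount the snapshot to inspect or modify its content, then unmount it again
mount /dev/stratis/my-pool/my-fs-snapshot mount-point
umount /dev/stratis/my-pool/my-fs-snapshot

# Revert my-fs to the snapshot: remove the original and promote a copy of the snapshot under its name
umount /dev/stratis/my-pool/my-fs
stratis filesystem destroy my-pool my-fs
stratis filesystem snapshot my-pool my-fs-snapshot my-fs
mount /dev/stratis/my-pool/my-fs mount-point

# Remove the snapshot when it is no longer needed; its data is lost
stratis filesystem destroy my-pool my-fs-snapshot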
[ "stratis fs snapshot my-pool my-fs my-fs-snapshot", "mount /dev/stratis/ my-pool / my-fs-snapshot mount-point", "stratis filesystem snapshot my-pool my-fs my-fs-backup", "umount /dev/stratis/ my-pool / my-fs stratis filesystem destroy my-pool my-fs", "stratis filesystem snapshot my-pool my-fs-snapshot my-fs", "mount /dev/stratis/ my-pool / my-fs mount-point", "umount /dev/stratis/ my-pool / my-fs-snapshot", "stratis filesystem destroy my-pool my-fs-snapshot" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/using-snapshots-on-stratis-file-systems_managing-file-systems
Chapter 2. Considerations for implementing the Load-balancing service
Chapter 2. Considerations for implementing the Load-balancing service You must make several decisions when you plan to deploy the Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) such as choosing which provider to use or whether to implement a highly available environment: Section 2.1, "Load-balancing service provider drivers" Section 2.2, "Load-balancing service (octavia) feature support matrix" Section 2.3, "Load-balancing service software requirements" Section 2.4, "Load-balancing service prerequisites for the undercloud" Section 2.5, "Basics of active-standby topology for Load-balancing service instances" Section 2.6, "Post-deployment steps for the Load-balancing service" 2.1. Load-balancing service provider drivers The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) supports enabling multiple provider drivers by using the Octavia v2 API. You can choose to use one provider driver, or multiple provider drivers simultaneously. RHOSP provides two load-balancing providers, amphora and Open Virtual Network (OVN). Amphora, the default, is a highly available load balancer with a feature set that scales with your compute environment. Because of this, amphora is suited for large-scale deployments. The OVN load-balancing provider is a lightweight load balancer with a basic feature set. OVN is typical for east-west, layer 4 network traffic. OVN provisions quickly and consumes fewer resources than a full-featured load-balancing provider such as amphora. On RHOSP deployments that use the neutron Modular Layer 2 plug-in with the OVN mechanism driver (ML2/OVN), RHOSP director automatically enables the OVN provider driver in the Load-balancing service without the need for additional installation or configuration. Important The information in this section applies only to the amphora load-balancing provider, unless indicated otherwise. Additional resources Section 2.2, "Load-balancing service (octavia) feature support matrix" 2.2. Load-balancing service (octavia) feature support matrix The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) provides two load-balancing providers, amphora and Open Virtual Network (OVN). Amphora is a full-featured load-balancing provider that requires a separate haproxy VM and an extra latency hop. OVN runs on every node and does not require a separate VM nor an extra hop. However, OVN has far fewer load-balancing features than amphora. The following table lists features in octavia that Red Hat OpenStack Platform (RHOSP) 16.2 supports and in which maintenance release support for the feature began. Note If the feature is not listed, then RHOSP 16.2 does not support the feature. Table 2.1. 
Load-balancing service (octavia) feature support matrix Feature Support level in RHOSP 16.2 Amphora Provider OVN Provider ML2/OVS L3 HA Full support No support ML2/OVS DVR Full support No support ML2/OVS L3 HA + composable network node [1] Full support No support ML2/OVS DVR + composable network node [1] Full support No support ML2/OVN L3 HA Full support Full support ML2/OVN DVR Full support Full support DPDK No support No support SR-IOV No support No support Health monitors Full support No support Amphora active-standby Full support- 16.1 and later No support Terminated HTTPS load balancers (with barbican) Full support- 16.0.1 and later No support Amphora spare pool Technology Preview only No support UDP Full support- 16.1 and later Full support Backup members Technology Preview only No support Provider framework Technology Preview only No support TLS client authentication Technology Preview only No support TLS back end encryption Technology Preview only No support Octavia flavors Full support No support Object tags - API only Full support- 16.1 and later No support Listener API timeouts Full support No support Log offloading Full support- 16.1.2 and later No support VIP access control list Full support No support Volume-based amphora No support No support [1] Network node with OVS, metadata, DHCP, L3, and Octavia (worker, health monitor, and housekeeping). Additional resources Section 2.1, "Load-balancing service provider drivers" 2.3. Load-balancing service software requirements The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) requires that you configure the following core OpenStack components: Compute (nova) OpenStack Networking (neutron) Image (glance) Identity (keystone) RabbitMQ MySQL 2.4. Load-balancing service prerequisites for the undercloud The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) has the following requirements for the RHOSP undercloud: A successful undercloud installation. The Load-balancing service present on the undercloud. A container-based overcloud deployment plan. Load-balancing service components configured on your Controller nodes. Important If you want to enable the Load-balancing service on an existing overcloud deployment, you must prepare the undercloud. Failure to do so results in the overcloud installation being reported as successful yet without the Load-balancing service running. To prepare the undercloud, see the Transitioning to Containerized Services guide. 2.5. Basics of active-standby topology for Load-balancing service instances When you deploy the Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia), you can decide whether, by default, load balancers are highly available when users create them. If you want to give users a choice, then after RHOSP deployment, create a Load-balancing service flavor for creating highly available load balancers and a flavor for creating standalone load balancers. By default, the amphora provider driver is configured for a single Load-balancing service (amphora) instance topology with limited support for high availability (HA). However, you can make Load-balancing service instances highly available when you implement an active-standby topology. In this topology, the Load-balancing service boots an active and standby instance for each load balancer, and maintains session persistence between each. If the active instance becomes unhealthy, the instance automatically fails over to the standby instance, making it active. 
The Load-balancing service health manager automatically rebuilds an instance that fails. Additional resources Section 4.2, "Enabling active-standby topology for Load-balancing service instances" 2.6. Post-deployment steps for the Load-balancing service Red Hat OpenStack Platform (RHOSP) provides a workflow task to simplify the post-deployment steps for the Load-balancing service (octavia). This workflow runs a set of Ansible playbooks to provide the following post-deployment steps as the last phase of the overcloud deployment: Configure certificates and keys. Configure the load-balancing management network between the amphorae and the Load-balancing service Controller worker and health manager. Amphora image On pre-provisioned servers, you must install the amphora image on the undercloud before you deploy the Load-balancing service: On servers that are not pre-provisioned, RHOSP director automatically downloads the default amphora image, uploads it to the overcloud Image service (glance), and then configures the Load-balancing service to use this amphora image. During a stack update or upgrade, director updates this image to the latest amphora image. Note Custom amphora images are not supported. Additional resources Section 4.1, "Deploying the Load-balancing service"
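For pre-provisioned servers, the amphora image installation referenced in the post-deployment section is run on the undercloud before the overcloud deployment. The dnf command comes from the command listing for this chapter; the rpm -q check is an optional verification added here for illustration.

# On the undercloud, install the amphora image package before deploying the Load-balancing service
sudo dnf install octavia-amphora-image-x86_64.noarch

# Optional: confirm that the package is installed
rpm -q octavia-amphora-image-x86_64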
[ "sudo dnf install octavia-amphora-image-x86_64.noarch" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/using_octavia_for_load_balancing-as-a-service/plan-lb-service_rhosp-lbaas
Installing on Alibaba Cloud
Installing on Alibaba Cloud OpenShift Container Platform 4.16 Installing OpenShift Container Platform on Alibaba Cloud Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_alibaba_cloud/index