title | content | commands | url |
---|---|---|---|
Chapter 8. System Registration Terms and Concepts | Chapter 8. System Registration Terms and Concepts The following list contains terms and concepts related to Red Hat tools and processes for registration, subscription management, and system management. access-based subscription model A subscription model enabled with the simple content access capability, through which access to subscription content is provided by the existence of a valid subscription and registration of the system. capacity The upper limit of usage for a subscription, expressed in the defined unit of measurement for a subscription. content Software product code and errata that is designed to be consumed on systems. Content can either be installed directly on systems or used with as-a-service delivery methods. content delivery network (CDN) A geographically distributed series of static webservers that contain subscription content and errata that is consumed by systems. The content can be consumed directly through subscription management tools such as Red Hat Subscription Management or through mirroring tools such as Red Hat Satellite. entitlement In the deprecated entitlement-based subscription model, one of a pre-defined number of allowances that is used during the registration process to assign, or attach, a subscription to a system. The entitlement-based subscription model is now superseded by the access-based subscription model of simple content access. entitlement-based subscription model A deprecated subscription model through which subscriptions are required to be attached on a per-system basis before access to subscription content is allowed. identity certificate Used by the system to authenticate to the subscription service to periodically check for updates. It is created when the system is registered. manifest A set of encrypted files containing subscription information that enables you to find, access, synchronize, and download content from the correct repositories for use in Red Hat Satellite Server organizations that are managed by a Satellite server. organization A customer entity that interacts with Red Hat. An organization is typically a company or a part of a company, such as a function, division, department, or some other grouping that is meaningful to that company. organization ID A unique numeric identifier for a customer's Red Hat organization that is used in certain internal subscription management functions. This identifier is separate from the Red Hat account number that is associated with your organization. It is located on the Hybrid Cloud Console Activation Keys page. Red Hat account A set of credentials that is used to identify and authenticate a user to Red Hat. This account enables a user to log into Red Hat properties such as the Customer Portal and the Hybrid Cloud Console. Also referred to as a Red Hat login. A Red Hat account can be a member of the corporate account that is used by a corporation or part of a corporation, enabling a list of users, such as system administrators, purchasing agents, IT management, and so on, to centrally purchase subscriptions and administer systems. A Red Hat account can also be a personal account for a single user to purchase their own subscriptions and administer their own systems. Red Hat account number A unique numeric identifier associated with your Red Hat account. Red Hat Satellite A system management solution that allows you to deploy, configure, and maintain your systems across physical, virtual, and cloud environments. 
Red Hat Subscription Management A collection of tools available from several locations, including the subscription-manager command and options available from the Subscriptions menu of the Red Hat Hybrid Cloud Console. The subscription management tools provide views and functions that include subscription inventory, expiration, renewal, system registration, and others. Red Hat Satellite Server A server that synchronizes content, including software packages, errata, and container images, from the Red Hat Customer Portal and other supported content sources. Satellite Server also provides life cycle management, access control, and subscription management functions. Satellite organization A Satellite-specific construct that is used to divide resources into logical groups based on ownership, purpose, content, security level, and so on. These Satellite organizations can be used to isolate content for groups of systems with common requirements. Red Hat Satellite Capsule Server A server that mirrors content from the Satellite Server to enable content federation across various geographical locations. registration The process by which you officially redeem your purchase of Red Hat software and services. remote host configuration (rhc) A tool that enables system registration to subscription management tools, configuration management for Red Hat Enterprise Linux, connections to Red Hat Insights, and management of Insights remediation tasks. It is not a replacement for the insights-client or subscription-manager . repository A storage system for a collection of content. Repositories are organizational structures for software product content and errata in the Red Hat content delivery network. simple content access (SCA) A capability within Red Hat Satellite and Red Hat Subscription Management on the Customer Portal, used to enable access to subscription content. If a valid subscription exists, then registering a system grants access to that content. The preferred registration method, which replaces the deprecated entitlement-based subscription model. system A physical or virtual machine. subscription A contract between Red Hat and a customer for a specified term that provides access to content, support, and the knowledge base. usage The measurement of the consumption of Red Hat products installed on physical hardware or its equivalent, measured with a unit of measurement that is defined within the terms of a subscription. utilization The percentage of the maximum capacity for a subscription that is exhausted by the usage of that subscription. | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_rhel_system_registration/ref-reg-rhel-glossary_ |
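The registration workflow that these terms describe can be exercised from the command line. The following is a minimal sketch, assuming a RHEL system, a valid Red Hat account, and an activation key already created on the Hybrid Cloud Console Activation Keys page; the organization ID and key name are placeholders:

subscription-manager register --org <organization_ID> --activationkey <activation_key>   # register under simple content access
subscription-manager status                                                              # confirm the system is registered and content access is valid
subscription-manager repos --list-enabled                                                # list repositories served from the CDN
rhc connect                                                                              # optional: also connect the system to Red Hat Insights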
Chapter 124. KafkaBridge schema reference | Chapter 124. KafkaBridge schema reference Property Property type Description spec KafkaBridgeSpec The specification of the Kafka Bridge. status KafkaBridgeStatus The status of the Kafka Bridge. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkabridge-reference |
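For illustration only, the spec and status properties above belong to a KafkaBridge custom resource. The sketch below assumes the kafka.strimzi.io/v1beta2 API group used by Streams for Apache Kafka, a Kafka cluster named my-cluster, and a few common KafkaBridgeSpec fields (replicas, bootstrapServers, http); check the KafkaBridgeSpec schema reference for the authoritative field list:

oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080
EOF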
Chapter 26. Extending a Stratis volume with additional block devices | Chapter 26. Extending a Stratis volume with additional block devices You can attach additional block devices to a Stratis pool to provide more storage capacity for Stratis file systems. You can do it manually or by using the web console. Important Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . 26.1. Adding block devices to a Stratis pool You can add one or more block devices to a Stratis pool. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. The block devices that you are adding to the Stratis pool are not in use and not mounted. The block devices that you are adding to the Stratis pool are at least 1 GiB in size each. Procedure To add one or more block devices to the pool, use: Additional resources stratis(8) man page on your system 26.2. Adding a block device to a Stratis pool by using the web console You can use the web console to add a block device to an existing Stratis pool. You can also add caches as a block device. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The stratisd service is running. A Stratis pool is created. The block devices on which you are creating a Stratis pool are not in use and are not mounted. Each block device on which you are creating a Stratis pool is at least 1 GB. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the Stratis pool to which you want to add a block device. On the Stratis pool page, click Add block devices and select the Tier where you want to add a block device as data or cache. If you are adding the block device to a Stratis pool that is encrypted with a passphrase, enter the passphrase. Under Block devices , select the devices you want to add to the pool. Click Add . | [
"stratis pool add-data my-pool device-1 device-2 device-n"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/extending-a-stratis-volume-with-additional-block-devices_managing-file-systems |
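A hedged end-to-end sketch of the procedure above, assuming an existing pool named my-pool and two unused disks, /dev/sdc and /dev/sdd (placeholders for your own devices):

stratis pool list                          # confirm the pool exists and note its current size
stratis pool add-data my-pool /dev/sdc /dev/sdd
stratis blockdev list my-pool              # verify that the new devices appear as data devices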
Chapter 4. Using VS Code Debug Adapter for Apache Camel extension | Chapter 4. Using VS Code Debug Adapter for Apache Camel extension Important The VS Code extensions for Apache Camel are listed as development support. For more information about scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel . This is the Visual Studio Code extension that adds Camel Debugger power by attaching to a running Camel route written in Java, Yaml or XML DSL. 4.1. Features of Debug Adapter The VS Code Debug Adapter for Apache Camel extension supports the following features: Camel Main mode for XML only. The use of Camel debugger by attaching it to a running Camel route written in Java, Yaml or XML using the JMX url. The local use of Camel debugger by attaching it to a running Camel route written in Java, Yaml or XML using the PID. You can use it for a single Camel context. Add or remove the breakpoints. The conditional breakpoints with simple language. Inspecting the variable values on suspended breakpoints. Resume a single route instance and resume all route instances. Stepping when the route definition is in the same file. Allow to update variables in scope Debugger, in the message body, in a message header of type String, and an exchange property of type String Supports the command Run Camel Application with JBang and Debug . This command allows a one-click start and Camel debug in simple cases. This command is available through: Command Palette. It requires a valid Camel file opened in the current editor. Contextual menu in File explorer. It is visible to all *.xml , *.java , *.yaml and *.yml . Codelens at the top of a Camel file (the heuristic for the codelens is checking that there is a from and a to or a log on java , xml , and yaml files). Supports the command Run Camel application with JBang . It requires a valid Camel file defined in Yaml DSL (.yaml|.yml) opened in editor. Configuration snippets for Camel debugger launch configuration Configuration snippets to launch a Camel application ready to accept a Camel debugger connection using JBang, or Maven with Camel maven plugin 4.2. Requirements Following points must be considered when using the VS Code Debug Adapter for Apache Camel extension: Java Runtime Environment 17 or later with com.sun.tools.attach.VirtualMachine (available in most JVMs such as Hotspot and OpenJDK) must be installed. The Camel instance to debug must follow these requirements: Camel 3.16 or later Have camel-debug on the classpath. Have JMX enabled. Note For some features, The JBang must be available on a system commandline. 4.3. Installing VS Code Debug Adapter for Apache Camel You can download the VS Code Debug Adapter for Apache Camel extension from the VS Code Extension Marketplace and the Open VSX Registry. You can also install the Debug Adapter for Apache Camel extension directly in the Microsoft VS Code. Procedure Open the VS Code editor. In the VS Code editor, select View > Extensions . In the search bar, type Camel Debug . Select the Debug Adapter for Apache Camel option from the search results and then click Install. This installs the Debug Adapter for Apache Camel in the VS Code editor. 4.4. Using Debug Adapter Following procedure explains how to debug a camel application using the debug adapter. Procedure Ensure that the jbang binary is available on the system commandline. Open a Camel route which can be started with Camel CLI. 
Open the Command Palette using the keys Ctrl + Shift + P , and select the Run Camel Application with JBang and Debug command, or click the Camel Debug with JBang codelens that appears at the top of the file. Wait until the route is started and the debugger is connected. Put a breakpoint on the Camel route. Debug. Additional resources Debug Adapter for Apache Camel by Red Hat | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/tooling_guide/using-vscode-debug-adapter-extension |
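Outside of VS Code, the same kind of run can be started from a terminal. This is only a sketch, assuming Camel JBang is available and that hello.camel.yaml is a route file you create; the Camel Debugger is then attached from VS Code with the command described above:

jbang app install camel@apache/camel       # install the 'camel' CLI through JBang
camel init hello.camel.yaml                # generate a sample route in YAML DSL
camel run hello.camel.yaml                 # start the route, then attach the Camel Debugger from VS Code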
Preface | Preface AMQ Clients is a suite of AMQP 1.0 and JMS clients, adapters, and libraries. It includes JMS 2.0 support and new, event-driven APIs to enable integration into existing applications. AMQ Clients is part of Red Hat AMQ. For more information, see Introducing Red Hat AMQ 7 . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_clients_overview/pr01 |
10.3. DDL | 10.3. DDL Example 10.2. Sample vdb.xml file <vdb name="{vdb-name}" version="1"> <model name="{model-name}" type="PHYSICAL"> <source name="AccountsDB" translator-name="oracle" connection-jndi-name="java:/oracleDS"/> <metadata type="DDL"> **DDL Here** </metadata> </model> </vdb> This is applicable to both source and view models. When DDL is specified as the metadata import type, the model's metadata can be defined as DDL. See the section about DDL Metadata in Red Hat JBoss Data Virtualization Development Guide: Reference Material . | [
"<vdb name=\"{vdb-name}\" version=\"1\"> <model name=\"{model-name}\" type=\"PHYSICAL\"> <source name=\"AccountsDB\" translator-name=\"oracle\" connection-jndi-name=\"java:/oracleDS\"/> <metadata type=\"DDL\"> **DDL Here** </metadata> </model> </vdb>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/ddl |
Chapter 5. Exporting applications | Chapter 5. Exporting applications As a developer, you can export your application in the ZIP file format. Based on your needs, import the exported application to another project in the same cluster or a different cluster by using the Import YAML option in the +Add view. Exporting your application helps you to reuse your application resources and saves you time. 5.1. Prerequisites You have installed the gitops-primer Operator from the OperatorHub. Note The Export application option is disabled in the Topology view even after installing the gitops-primer Operator. You have created an application in the Topology view to enable Export application . 5.2. Procedure In the Developer perspective, perform one of the following steps: Navigate to the +Add view and click Export application in the Application portability tile. Navigate to the Topology view and click Export application . Click OK in the Export Application dialog box. A notification opens to confirm that the export of resources from your project has started. Optional steps that you might need to perform in the following scenarios: If you have started exporting an incorrect application, click Export application > Cancel Export . If your export is already in progress and you want to start a fresh export, click Export application > Restart Export . If you want to view logs associated with exporting an application, click Export application and then the View Logs link. After a successful export, click Download in the dialog box to download application resources in ZIP format onto your machine. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/building_applications/odc-exporting-applications |
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.5 Documentation Data Grid 8.5 Component Details Supported Configurations for Data Grid 8.5 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_server_guide/rhdg-docs_datagrid |
Chapter 15. Security | Chapter 15. Security New packages: tang , clevis , jose , luksmeta Network Bound Disk Encryption (NBDE) allows the user to encrypt root volumes of the hard drives on physical and virtual machines without requiring to manually enter password when systems are rebooted. Tang is a server for binding data to network presence. It includes a daemon which provides cryptographic operations for binding to a remote service. The tang package provides the server side of the NBDE project. Clevis is a pluggable framework for automated decryption. It can be used to provide automated decryption of data or even automated unlocking of LUKS volumes. The clevis package provides the client side of the NBDE project. Jose is a C-language implementation of the Javascript Object Signing and Encryption standards. The jose package is a dependency of the clevis and tang packages. LUKSMeta is a simple library for storing metadata in the LUKSv1 header. The luksmeta package is a dependency of the clevis and tang packages. Note that the tang-nagios and clevis-udisk2 subpackages are available only as a Technology Preview. (BZ# 1300697 , BZ#1300696, BZ#1399228, BZ#1399229) New package: usbguard The USBGuard software framework provides system protection against intrusive USB devices by implementing basic whitelisting and blacklisting capabilities based on device attributes. To enforce a user-defined policy, USBGuard uses the Linux kernel USB device authorization feature. The USBGuard framework provides the following components: The daemon component with an inter-process communication (IPC) interface for dynamic interaction and policy enforcement The command-line interface to interact with a running USBGuard instance The rule language for writing USB device authorization policies The C++ API for interacting with the daemon component implemented in a shared library (BZ#1395615) openssh rebased to version 7.4 The openssh package has been updated to upstream version 7.4, which provides a number of enhancements, new features, and bug fixes, including: Added support for the resumption of interrupted uploads in SFTP . Added the extended log format for the authentication failure messages. Added a new fingerprint type that uses the SHA-256 algorithm. Added support for using PKCS#11 devices with external PIN entry devices. Removed support for the SSH-1 protocol from the OpenSSH server. Removed support for the legacy v00 cert format. Added the PubkeyAcceptedKeyTypes and HostKeyAlgorithms configuration options for the ssh utility and the sshd daemon to allow disabling key types selectively. Added the AddKeysToAgent option for the OpenSSH client. Added the ProxyJump ssh option and the corresponding -J command-line flag. Added support for key exchange methods for the Diffie-Hellman 2K, 4K, and 8K groups. Added the Include directive for the ssh_config file. Removed support for the UseLogin option. Removed support for the pre-authentication compression in the server. The seccomp filter is now used for the pre-authentication process. (BZ#1341754) audit rebased to version 2.7.6 The audit packages have been updated to upstream version 2.7.6, which provides a number of enhancements, new features, and bug fixes, including: The auditd service now automatically adjusts logging directory permissions when it starts up. This helps keep directory permissions correct after performing a package upgrade. The ausearch utility has a new --format output option. 
The --format text option presents an event as an English sentence describing what is happening. The --format csv option normalizes logs into a subject, object, action, results, and how it occurred in addition to some metadata fields which is output in the Comma Separated Value (CSV) format. This is suitable for pushing event information into a database, spreadsheet, or other analytic programs to view, chart, or analyze audit events. The auditctl utility can now reset the lost event counter in the kernel through the --reset-lost command-line option. This makes checking for lost events easier since you can reset the value to zero daily. ausearch and aureport now have a boot option for the --start command-line option to find events since the system booted. ausearch and aureport provide a new --escape command-line option to better control what kind of escaping is done to audit fields. It currently supports raw , tty , shell , and shell_quote escaping. auditctl no longer allows rules with the entry filter. This filter has not been supported since Red Hat Enterprise Linux 5. Prior to this release, on Red Hat Enterprise Linux 6 and 7, auditctl moved any entry rule to the exit filter and displayed a warning that the entry filter is deprecated. (BZ# 1381601 ) opensc rebased to version 0.16.0 The OpenSC set of libraries and utilities provides support for working with smart cards. OpenSC focuses on cards that support cryptographic operations and enables their use for authentication, mail encryption, or digital signatures. Notable enhancements in Red Hat Enterprise Linux 7.4 include: OpenSC adds support for Common Access Card (CAC) cards. OpenSC implements the PKCS#11 API and now provides also the CoolKey applet functionality. The opensc packages replace the coolkey packages. Note that the coolkey packages will remain supported for the lifetime of Red Hat Enterprise Linux 7, but new hardware enablement will be provided through the opensc packages. (BZ# 1081088 , BZ# 1373164 ) openssl rebased to version 1.0.2k The openssl package has been updated to upstream version 1.0.2k, which provides a number of enhancements, new features, and bug fixes, including: Added support for the Datagram Transport Layer Security TLS (DTLS) protocol version 1.2. Added support for the automatic elliptic curve selection for the ECDHE key exchange in TLS. Added support for the Application-Layer Protocol Negotiation (ALPN). Added Cryptographic Message Syntax (CMS) support for the following schemes: RSA-PSS, RSA-OAEP, ECDH, and X9.42 DH. Note that this version is compatible with the API and ABI in the OpenSSL library version in releases of Red Hat Enterprise Linux 7. (BZ# 1276310 ) openssl-ibmca rebased to version 1.3.0 The openssl-ibmca package has been updated to upstream version 1.3.0, which provides a number of bug fixes and enhancements over the version. Notable changes include: Added support for SHA-512. Cryptographic methods are dynamically loaded when the ibmca engine starts. This enables ibmca to direct cryptographic methods if they are supported in hardware through the libica library. Fixed a bug in block-size handling with stream cipher modes. (BZ#1274385) OpenSCAP 1.2 is NIST-certified OpenSCAP 1.2, the Security Content Automation Protocol (SCAP) scanner, has been certified by the National Institute of Standards and Technology (NIST) as a U. S. government-evaluated configuration and vulnerability scanner for Red Hat Enterprise Linux 6 and 7. 
OpenSCAP analyzes and evaluates security automation content correctly and it provides the functionality and documentation required by NIST to run in sensitive, security-conscious environments. Additionally, OpenSCAP is the first NIST-certified configuration scanner for evaluating Linux containers. Use cases include evaluating the configuration of Red Hat Enterprise Linux 7 hosts for PCI and DoD Security Technical Implementation Guide (STIG) compliance, as well as performing known vulnerability scans using Red Hat Common Vulnerabilities and Exposures (CVE) data. (BZ#1363826) libreswan rebased to version 3.20 The libreswan packages have been upgraded to upstream version 3.20, which provides a number of bug fixes and enhancements over the version. Notable enhancements include: Added support for Opportunistic IPsec (Mesh Encryption), which enables IPsec deployments that cover a large number of hosts using a single simple configuration on all hosts. FIPS further tightened. Added support for routed-based VPN using Virtual Tunnel Interface (VTI). Improved support for non-root configurations. Improved Online Certificate Status Protocol (OCSP) and Certificate Revocation Lists (CRL) support. Added new whack command options: --fipsstatus , --fetchcrls , --globalstatus , and --shuntstatus . Added support for the NAT Opportunistic Encryption (OE) Client Address Translation: leftcat=yes . Added support for the Traffic Flow Confidentiality mechanism: tfc= . Updated cipher preferences as per RFC 4307bis and RFC 7321bis. Added support for Extended Sequence Numbers (ESN): esn=yes . Added support for disabling and increasing the replay window: replay-window= . (BZ# 1399883 ) Audit now supports filtering based on session ID With this update, the Linux Audit system supports user rules to filter audit messages based on the sessionid value. (BZ#1382504) libseccomp now supports IBM Power architectures With this update, the libseccomp library supports the IBM Power, 64-bit IBM Power, and 64-bit little-endian IBM Power architectures, which enables the GNOME rebase. (BZ# 1425007 ) AUDIT_KERN_MODULE now records module loading The AUDIT_KERN_MODULE auxiliary record has been added to AUDIT_SYSCALL records for the init_module() , finit_module() , and delete_module() functions. This information is stored in the audit_context structure. (BZ#1382500) OpenSSH now uses SHA-2 for public key signatures Previously, OpenSSH used the SHA-1 hash algorithm for public key signatures using RSA and DSA keys. SHA-1 is no longer considered secure, and new SSH protocol extension allows to use SHA-2. With this update, SHA-2 is the default algorithm for public key signatures. SHA-1 is available only for backward compatibility purposes. (BZ#1322911) firewalld now supports additional IP sets With this update of the firewalld service daemon, support for the following ipset types has been added: hash:ip,port hash:ip,port,ip hash:ip,port,net hash:ip,mark hash:net,net hash:net,port hash:net,port,net hash:net,iface The following ipset types that provide a combination of sources and destinations at the same time are not supported as sources in firewalld . 
IP sets using these types are created by firewalld , but their usage is limited to direct rules: hash:ip,port,ip hash:ip,port,net hash:net,net hash:net,port,net The ipset packages have been rebased to upstream version 6.29, and the following ipset types are now additionally supported: hash:mac hash:net,port,net hash:net,net hash:ip,mark (BZ# 1419058 ) firewalld now supports actions on ICMP types in rich rules With this update, the firewalld service daemon allows using Internet Control Message Protocol (ICMP) types in rich rules with the accept, log and mark actions. (BZ# 1409544 ) firewalld now supports disabled automatic helper assignment This update of the firewalld service daemon introduces support for the disabled automatic helper assignment feature. firewalld helpers can be now used without adding additional rules also if automatic helper assignment is turned off. (BZ#1006225) nss and nss-util now use SHA-256 by default With this update, the default configuration of the NSS library has been changed to use a stronger hash algorithm when creating digital signatures. With RSA, EC, and 2048-bit (or longer) DSA keys, the SHA-256 algorithm is now used. Note that also the NSS utilities, such as certutil , crlutil , and cmsutil , now use SHA-256 in their default configurations. (BZ# 1309781 ) Audit filter exclude rules now contain additional fields The exclude filter has been enhanced, and it now contains not only the msgtype field, but also the pid , uid , gid , auid , sessionID , and SELinux types. (BZ#1382508) PROCTITLE now provides the full command in Audit events This update introduces the PROCTITLE record addition to Audit events. PROCTITLE provides the full command being executed. The PROCTITLE value is encoded so it is not able to circumvent the Audit event parser. Note that the PROCTITLE value is still not trusted since it is manipulable by the user-space date. (BZ#1299527) nss-softokn rebased to version 3.28.3 The nss-softokn packages have been upgraded to upstream version 3.28.3, which provides a number of bug fixes and enhancements over the version: Added support for the ChaCha20-Poly1305 (RFC 7539) algorithm used by TLS (RFC 7905), the Internet Key Exchange Protocol (IKE), and IPsec (RFC 7634). For key exchange purposes, added support for the Curve25519/X25519 curve. Added support for the Extended Master Secret (RFC 7627) extension. (BZ# 1369055 ) libica rebased to version 3.0.2 The libica package has been upgraded to upstream version 3.0.2, which provides a number of fixes over the version. Notable additions include support for Federal Information Processing Standards (FIPS) mode support for generating pseudorandom numbers, including enhanced support for Deterministic Random Bit Generator compliant with the updated security specification NIST SP 800-90A. (BZ#1391558) opencryptoki rebased to version 3.6.2 The opencryptoki packages have been upgraded to upstream version 3.6.2, which provides a number of bug fixes and enhancements over the version: Added support for OpenSSL 1.1 Replaced deprecated OpenSSL interfaces. Replaced deprecated libica interfaces. Improved performance for IBM Crypto Accelerator (ICA). Added support for the rc=8, reasoncode=2028 error message in the icsf token. (BZ#1391559) AUDIT_NETFILTER_PKT events are now normalized The AUDIT_NETFILTER_PKT audit events are now simplified and message fields are now displayed in a consistent manner. 
(BZ#1382494) p11tool now supports writing objects by specifying a stored ID With this update, the p11tool GnuTLS PKCS#11 tool supports the new --id option to write objects by specifying a stored ID. This allows the written object to be addressable by more applications than p11tool . (BZ# 1399232 ) new package: nss-pem This update introduces the nss-pem package, which previously was part of the nss packages, as a separate package. The nss-pem package provides the PEM file reader for Network Security Services (NSS) implemented as a PKCS#11 module. (BZ#1316546) pmrfc3164 replaces pmrfc3164sd in rsyslog With the update of the rsyslog packages, the pmrfc3164sd module, which is used for parsing logs in the BSD syslog protocol format (RFC 3164), has been replaced by the official pmrfc3164 module. The official module does not fully cover the pmrfc3164sd functionality, and thus it is still available in rsyslog . However, it is recommended to use new pmrfc3164 module wherever possible. The pmrfc3164sd module is not supported anymore. (BZ#1431616) libreswan now supports right=%opportunisticgroup With this update, the %opportunisticgroup value for the right option in the conn part of Libreswan configuration is supported. This allows the opportunistic IPsec with X.509 authentication, which significantly reduces the administrative overhead in large environments. (BZ#1324458) ca-certificates now meet Mozilla Firefox 52.2 ESR requirements The Network Security Services (NSS) code and Certificate Authority (CA) list have been updated to meet the recommendations as published with the latest Mozilla Firefox Extended Support Release (ESR). The updated CA list improves compatibility with the certificates that are used in the Internet Public Key Infrastructure (PKI). To avoid certificate validation refusals, Red Hat recommends installing the updated CA list on June 12, 2017. (BZ#1444413) nss now meets Mozilla Firefox 52.2 ESR requirements for certificates The Certificate Authority (CA) list have been updated to meet the recommendations as published with the latest Mozilla Firefox Extended Support Release (ESR). The updated CA list improves compatibility with the certificates that are used in the Internet Public Key Infrastructure (PKI). To avoid certificate validation refusals, Red Hat recommends installing the updated CA list on June 12, 2017. (BZ#1444414) scap-security-guide rebased to version 0.1.33 The scap-security-guide packages have been upgraded to upstream version 0.1.33, which provides a number of bug fixes and enhancements over the version. In particular, this new version enhances existing compliance profiles and expands the scope of coverage to include two new configuration baselines: Extended support for PCI-DSS v3 Control Baseline Extended support for United States Government Commercial Cloud Services (C2S). Extended support for Red Hat Corporate Profile for Certified Cloud Providers. Added support for the Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Red Hat Enterprise Linux 7 profile, aligning to the DISA STIG for Red Hat Enterprise Linux V1R1 profile. Added support for the Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) profile configures Red Hat Enterprise Linux 7 to the NIST Special Publication 800-53 controls identified for securing Controlled Unclassified Information (CUI). Added support for the United States Government Configuration Baseline (USGCB/STIG) profile, developed in partnership with the U. 
S. National Institute of Standards and Technology (NIST), U. S. Department of Defense, the National Security Agency, and Red Hat. The USGCB/STIG profile implements configuration requirements from the following documents: Committee on National Security Systems Instruction No. 1253 (CNSSI 1253) NIST Controlled Unclassified Information (NIST 800-171) NIST 800-53 control selections for moderate impact systems (NIST 800-53) U. S. Government Configuration Baseline (USGCB) NIAP Protection Profile for General Purpose Operating Systems v4.0 (OSPP v4.0) DISA Operating System Security Requirements Guide (OS SRG) Note that several previously-contained profiles have been removed or merged. (BZ# 1410914 ) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/new_features_security |
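As a small illustration of the new audit options described above, the following commands are a sketch to be run as root on a system with auditd running; the output file name is a placeholder:

ausearch --start boot --format text                      # report events since boot as readable sentences
ausearch --start boot --format csv > audit-events.csv    # normalized CSV suitable for spreadsheets or databases
auditctl --reset-lost                                    # reset the kernel's lost-event counter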
4.3. Configuration File Defaults | 4.3. Configuration File Defaults The /etc/multipath.conf configuration file includes a defaults section that sets the user_friendly_names parameter to yes , as follows. This overwrites the default value of the user_friendly_names parameter. The configuration file includes a template of configuration defaults. This section is commented out, as follows. To overwrite the default value for any of the configuration parameters, you can copy the relevant line from this template into the defaults section and uncomment it. For example, to overwrite the path_grouping_policy parameter so that it is multibus rather than the default value of failover , copy the appropriate line from the template to the initial defaults section of the configuration file, and uncomment it, as follows. Table 4.1, "Multipath Configuration Defaults" describes the attributes that are set in the defaults section of the multipath.conf configuration file. These values are used by DM Multipath unless they are overwritten by the attributes specified in the devices and multipaths sections of the multipath.conf file. Table 4.1. Multipath Configuration Defaults Attribute Description polling_interval Specifies the interval between two path checks in seconds. For properly functioning paths, the interval between checks will gradually increase to (4 * polling_interval ). The default value is 5. multipath_dir The directory where the dynamic shared objects are stored. The default value is system dependent, commonly /lib/multipath . find_multipaths Defines the mode for setting up multipath devices. If this parameter is set to yes , then multipath will not try to create a device for every path that is not blacklisted. Instead multipath will create a device only if one of three conditions are met: - There are at least two paths that are not blacklisted with the same WWID. - The user manually forces the creation of the device by specifying a device with the multipath command. - A path has the same WWID as a multipath device that was previously created. Whenever a multipath device is created with find_multipaths set, multipath remembers the WWID of the device so that it will automatically create the device again as soon as it sees a path with that WWID. This allows you to have multipath automatically choose the correct paths to make into multipath devices, without having to edit the multipath blacklist. For instructions on the procedure to follow if you have previously created multipath devices when the find_multipaths parameter was not set, see Section 4.2, "Configuration File Blacklist" . The default value is no . The default multipath.conf file created by mpathconf , however, will enable find_multipaths as of Red Hat Enterprise Linux 7. reassign_maps Enable reassigning of device-mapper maps. With this option, The multipathd daemon will remap existing device-mapper maps to always point to the multipath device, not the underlying block devices. Possible values are yes and no . The default value is yes . verbosity The default verbosity. Higher values increase the verbosity level. Valid levels are between 0 and 6. The default value is 2 . path_selector Specifies the default algorithm to use in determining what path to use for the I/O operation. Possible values include: round-robin 0 : Loop through every path in the path group, sending the same amount of I/O to each. queue-length 0 : Send the bunch of I/O down the path with the least number of outstanding I/O requests. 
service-time 0 : Send the bunch of I/O down the path with the shortest estimated service time, which is determined by dividing the total size of the outstanding I/O to each path by its relative throughput. The default value is service-time 0 . path_grouping_policy Specifies the default path grouping policy to apply to unspecified multipaths. Possible values include: failover : 1 path per priority group. multibus : all valid paths in 1 priority group. group_by_serial : 1 priority group per detected serial number. group_by_prio : 1 priority group per path priority value. Priorities are determined by callout programs specified as global, per-controller, or per-multipath options. group_by_node_name : 1 priority group per target node name. Target node names are fetched in /sys/class/fc_transport/target*/node_name . The default value is failover . prio Specifies the default function to call to obtain a path priority value. For example, the ALUA bits in SPC-3 provide an exploitable prio value. Possible values include: const : Set a priority of 1 to all paths. emc : Generate the path priority for EMC arrays. alua : Generate the path priority based on the SCSI-3 ALUA settings. As of Red Hat Enterprise Linux 7.3, if you specify prio "alua exclusive_pref_bit" in your device configuration, multipath will create a path group that contains only the path with the pref bit set and will give that path group the highest priority. ontap : Generate the path priority for NetApp arrays. rdac : Generate the path priority for LSI/Engenio RDAC controller. hp_sw : Generate the path priority for Compaq/HP controller in active/standby mode. hds : Generate the path priority for Hitachi HDS Modular storage arrays. The default value is const . features The default extra features of multipath devices, using the format: " number_of_features_plus_arguments feature1 ...". Possible values for features include: queue_if_no_path , which is the same as setting no_path_retry to queue . For information on issues that may arise when using this feature, see Section 5.6, "Issues with queue_if_no_path feature" . retain_attached_hw_handler : If this parameter is set to yes and the SCSI layer has already attached a hardware handler to the path device, multipath will not force the device to use the hardware_handler specified by the multipath.conf file. If the SCSI layer has not attached a hardware handler, multipath will continue to use its configured hardware handler as usual. The default value is no . pg_init_retries n : Retry path group initialization up to n times before failing where 1 <= n <= 50. pg_init_delay_msecs n : Wait n milliseconds between path group initialization retries where 0 <= n <= 60000. path_checker Specifies the default method used to determine the state of the paths. Possible values include: readsector0 : Read the first sector of the device. tur : Issue a TEST UNIT READY command to the device. emc_clariion : Query the EMC Clariion specific EVPD page 0xC0 to determine the path. hp_sw : Check the path state for HP storage arrays with Active/Standby firmware. rdac : Check the path state for LSI/Engenio RDAC storage controller. directio : Read the first sector with direct I/O. The default value is directio . failback Manages path group failback. A value of immediate specifies immediate failback to the highest priority path group that contains active paths. A value of manual specifies that there should not be immediate failback but that failback can happen only with operator intervention. 
A value of followover specifies that automatic failback should be performed when the first path of a path group becomes active. This keeps a node from automatically failing back when another node requested the failover. A numeric value greater than zero specifies deferred failback, expressed in seconds. The default value is manual . rr_min_io Specifies the number of I/O requests to route to a path before switching to the path in the current path group. This setting is only for systems running kernels older than 2.6.31. Newer systems should use rr_min_io_rq . The default value is 1000. rr_min_io_rq Specifies the number of I/O requests to route to a path before switching to the path in the current path group, using request-based device-mapper-multipath. This setting should be used on systems running current kernels. On systems running kernels older than 2.6.31, use rr_min_io . The default value is 1. rr_weight If set to priorities , then instead of sending rr_min_io requests to a path before calling path_selector to choose the path, the number of requests to send is determined by rr_min_io times the path's priority, as determined by the prio function. If set to uniform , all path weights are equal. The default value is uniform . no_path_retry A numeric value for this attribute specifies the number of times the system should attempt to use a failed path before disabling queuing. A value of fail indicates immediate failure, without queuing. A value of queue indicates that queuing should not stop until the path is fixed. The default value is 0. user_friendly_names If set to yes , specifies that the system should use the /etc/multipath/bindings file to assign a persistent and unique alias to the multipath, in the form of mpath n . If set to no , specifies that the system should use the WWID as the alias for the multipath. In either case, what is specified here will be overridden by any device-specific aliases you specify in the multipaths section of the configuration file. The default value is no . queue_without_daemon If set to no , the multipathd daemon will disable queuing for all devices when it is shut down. The default value is no . flush_on_last_del If set to yes , the multipathd daemon will disable queuing when the last path to a device has been deleted. The default value is no . max_fds Sets the maximum number of open file descriptors that can be opened by multipath and the multipathd daemon. This is equivalent to the ulimit -n command. As of the Red Hat Enterprise Linux 6.3 release, the default value is max , which sets this to the system limit from /proc/sys/fs/nr_open . For earlier releases, if this is not set the maximum number of open file descriptors is taken from the calling process; it is usually 1024. To be safe, this should be set to the maximum number of paths plus 32, if that number is greater than 1024. checker_timeout The timeout to use for prioritizers and path checkers that issue SCSI commands with an explicit timeout, in seconds. The default value is taken from sys/block/sd x /device/timeout . fast_io_fail_tmo The number of seconds the SCSI layer will wait after a problem has been detected on an FC remote port before failing I/O to devices on that remote port. This value should be smaller than the value of dev_loss_tmo . Setting this to off will disable the timeout. The default value is determined by the OS. The fast_io_fail_tmo option overrides the values of the recovery_tmo and replacement_timeout options. 
For details, see Section 4.6, "iSCSI and DM Multipath overrides" . dev_loss_tmo The number of seconds the SCSI layer will wait after a problem has been detected on an FC remote port before removing it from the system. Setting this to infinity will set this to 2147483647 seconds, or 68 years. The default value is determined by the OS. hw_string_match Each device configuration in the devices section of the multipath.conf file will either create its own device configuration or it will modify one of the built-in device configurations. If hw_string_match is set to yes , then if the vendor, product, and revision strings in a user's device configuration exactly match those strings in a built-in device configuration, the built-in configuration is modified by the options in the user's configuration. Otherwise, the user's device configuration is treated as a new configuration. If hw_string_match is set to no , a regular expression match is used instead of a string match. The hw_string_match parameter is set to no by default. retain_attached_hw_handler If this parameter is set to yes and the SCSI layer has already attached a hardware handler to the path device, multipath will not force the device to use the hardware_handler specified by the multipath.conf file. If the SCSI layer has not attached a hardware handler, multipath will continue to use its configured hardware handler as usual. The default value is no . detect_prio If this is set to yes , multipath will first check if the device supports ALUA, and if so it will automatically assign the device the alua prioritizer. If the device does not support ALUA, it will determine the prioritizer as it always does. The default value is no . uid_attribute Provides a unique path identifier. The default value is ID_SERIAL . force_sync (Red Hat Enterprise Linux Release 7.1 and later) If this is set to "yes", it prevents path checkers from running in async mode. This means that only one checker will run at a time. This is useful in the case where many multipathd checkers running in parallel causes significant CPU pressure. The default value is no . delay_watch_checks (Red Hat Enterprise Linux Release 7.2 and later) If set to a value greater than 0, the multipathd daemon will watch paths that have recently become valid for the specified number of checks. If they fail again while they are being watched, when they become valid they will not be used until they have stayed up for the number of consecutive checks specified with delay_wait_checks . This allows you to keep paths that may be unreliable from immediately being put back into use as soon as they come back online. The default value is no . delay_wait_checks (Red Hat Enterprise Linux Release 7.2 and later) If set to a value greater than 0, when a device that has recently come back online fails again within the number of checks specified with delay_watch_checks , the time it comes back online it will be marked and delayed and it will not be used until it has passed the number of checks specified in delay_wait_checks . The default value is no . ignore_new_boot_devs (Red Hat Enterprise Linux Release 7.2 and later) If set to yes , when the node is still in the initramfs file system during early boot, multipath will not create any devices whose WWIDs do not already exist in the initramfs copy of the /etc/multipath/wwids . 
This feature can be used for booting up during installation, when multipath would otherwise attempt to set itself up on devices that it did not claim when they first appeared by means of the udev rules. This parameter can be set to yes or no . If unset, it defaults to no . retrigger_tries , retrigger_delay (Red Hat Enterprise Linux Release 7.2 and later) The retrigger_tries and retrigger_delay parameters are used in conjunction to make multipathd retrigger uevents if udev failed to completely process the original ones, leaving the device unusable by multipath. The retrigger_tries parameter sets the number of times that multipath will try to retrigger a uevent if a device has not been completely set up. The retrigger_delay parameter sets the number of seconds between retries. Both of these options accept numbers greater than or equal to zero. Setting the retrigger_tries parameter to zero disables retries. Setting the retrigger_delay parameter to zero causes the uevent to be reissued on the loop of the path checker. If the retrigger_tries parameter is unset, it defaults to 3. If the retrigger_delay parameter is unset, it defaults to 10. new_bindings_in_boot (Red Hat Enterprise Linux Release 7.2 and later) The new_bindings_in_boot parameter is used to keep multipath from giving out a user_friendly_name in the initramfs file system that was already given out by the bindings file in the regular file system, an issue that can arise since the user_friendly_names bindings in the initramfs file system get synced with the bindings in the regular file system only when the initramfs file system is remade. When this parameter is set to no multipath will not create any new bindings in the initramfs file system. If a device does not already have a binding in the initramfs copy of /etc/multipath/bindings , multipath will use its WWID as an alias instead of giving it a user_friendly_name . Later in boot, after the node has mounted the regular filesystem, multipath will give out a user_friendly_name to the device. This parameter can be set to yes or no . If unset, it defaults to no . config_dir (Red Hat Enterprise Linux Release 7.2 and later) If set to anything other than "" , multipath will search this directory alphabetically for files ending in ".conf" and it will read configuration information from them, just as if the information were in the /etc/multipath.conf file. This allows you to have one main configuration that you share between machines in addition to a separate machine-specific configuration file or files. The config_dir parameter must either be "" or a fully qualified directory name. This parameter can be set only in the main /etc/multipath.conf file and not in one of the files specified in the config_dir file itself. The default value is /etc/multipath/conf.d . deferred_remove If set to yes , multipathd will do a deferred remove instead of a regular remove when the last path device has been deleted. This ensures that if a multipathed device is in use when a regular remove is performed and the remove fails, the device will automatically be removed when the last user closes the device. The default value is no . log_checker_err If set to once , multipathd logs the first path checker error at verbosity level 2. Any later errors are logged at verbosity level 3 until the device is restored. If it is set to always , multipathd always logs the path checker error at verbosity level 2. The default value is always . 
skip_kpartx (Red Hat Enterprise Linux Release 7.3 and later) If set to yes , kpartx will not automatically create partitions on the device. This allows users to create a multipath device without creating partitions, even if the device has a partition table. The default value of this option is no . max_sectors_kb (Red Hat Enterprise Linux Release 7.4 and later) Sets the max_sectors_kb device queue parameter to the specified value on all underlying paths of a multipath device before the multipath device is first activated. When a multipath device is created, the device inherits the max_sectors_kb value from the path devices. Manually raising this value for the multipath device or lowering this value for the path devices can cause multipath to create I/O operations larger than the path devices allow. Using the max_sectors_kb parameter is an easy way to set these values before a multipath device is created on top of the path devices and prevent invalid-sized I/O operations from being passed. If this parameter is not set by the user, the path devices have it set by their device driver, and the multipath device inherits it from the path devices. remove_retries (Red Hat Enterprise Linux Release 7.4 and later) Sets how may times multipath will retry removing a device that is in use. Between each attempt, multipath will sleep 1 second. The default value is 0, which means that multipath will not retry the remove. disable_changed_wwids (Red Hat Enterprise Linux Release 7.4 and later) If set to yes and the WWID of a path device changes while it is part of a multipath device, multipath will disable access to the path device until the WWID of the path is restored to the WWID of the multipath device. The default value is no , which does not check if a path's WWID has changed. detect_path_checker (Red Hat Enterprise Linux Release 7.4 and later) If set to yes , multipath will try to detect if the device supports ALUA. If so, the device will automatically use the tur path checker. If not, the path_checker will be selected as usual. The default value is no . reservation_key This is the service action reservation key used by mpathpersist . It must be set for all multipath devices using persistent reservations, and it must be the same as the RESERVATION KEY field of the PERSISTENT RESERVE OUT parameter list which contains an 8-byte value provided by the application client to the device server to identify the I_T nexus. If the --param-aptpl option is used when registering the key with mpathpersist , :aptpl must be appended to the end of the reservation key. As of Red Hat Enterprise Linux Release 7.5, this parameter can be set to file , which will store the RESERVATION KEY registered by mpath‐persist in the prkeys file. The multipathd daemon will then use this key to register additional paths as they appear. When the registration is removed, the RESERVATION KEY is removed from the prkeys file. It is unset by default. prkeys_file (Red Hat Enterprise Linux Release 7.5 and later) The full path name of the prkeys file, which is used by the multipathd daemon to keep track of the reservation key used for a specific WWID when the reservation_key parameter is set to file . The default value is /etc/multipath/prkeys . all_tg_pt (Red Hat Enterprise Linux Release 7.6 and later) If this option is set to yes , when mpathpersist registers keys it will treat a key registered from one host to one target port as going from one host to all target ports. 
This must be set to yes to successfully use mpathpersist on arrays that automatically set and clear registration keys on all target ports from a host, instead of per target port per host. The default value is no . | [
"defaults { user_friendly_names yes }",
"#defaults { polling_interval 10 path_selector \"round-robin 0\" path_grouping_policy multibus uid_attribute ID_SERIAL prio alua path_checker readsector0 rr_min_io 100 max_fds 8192 rr_weight priorities failback immediate no_path_retry fail user_friendly_names yes #}",
"defaults { user_friendly_names yes path_grouping_policy multibus }"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/config_file_defaults |
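A hedged sketch of working with these defaults from the command line, assuming the device-mapper-multipath package is installed; mpathconf writes a defaults section such as the one shown above, and multipathd can display the merged configuration:

mpathconf --enable --user_friendly_names y --find_multipaths y   # create or update /etc/multipath.conf
multipathd show config                                           # inspect the effective values, including built-in defaults
systemctl reload multipathd.service                              # apply configuration changes to the running daemon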
Chapter 12. ImageDigestMirrorSet [config.openshift.io/v1] | Chapter 12. ImageDigestMirrorSet [config.openshift.io/v1] Description ImageDigestMirrorSet holds cluster-wide information about how to handle registry mirror rules on using digest pull specification. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status contains the observed state of the resource. 12.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description imageDigestMirrors array imageDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using tag specification, users should configure a list of mirrors using "ImageTagMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagedigestmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a specific order of mirrors, should configure them into one list of mirrors using the expected order. imageDigestMirrors[] object ImageDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. 12.1.2. 
.spec.imageDigestMirrors Description imageDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using tag specification, users should configure a list of mirrors using "ImageTagMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagedigestmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a specific order of mirrors, should configure them into one list of mirrors using the expected order. Type array 12.1.3. .spec.imageDigestMirrors[] Description ImageDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. Type object Required source Property Type Description mirrorSourcePolicy string mirrorSourcePolicy defines the fallback policy if the image fails to pull from the mirrors. If unset, the image will continue to be pulled from the repository in the pull spec. mirrorSourcePolicy is a valid configuration only when one or more mirrors are in the mirror list. mirrors array (string) mirrors is zero or more locations that may also contain the same images. No mirror will be configured if not specified. Images can be pulled from these mirrors only if they are referenced by their digests. The mirrored location is obtained by replacing the part of the input reference that matches source by the mirrors entry, e.g. for registry.redhat.io/product/repo reference, a (source, mirror) pair *.redhat.io, mirror.local/redhat causes a mirror.local/redhat/product/repo repository to be used. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. If no mirror is specified or all image pulls from the mirror list fail, the image will continue to be pulled from the repository in the pull spec unless explicitly prohibited by "mirrorSourcePolicy". Other cluster configuration, including (but not limited to) other imageDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. "mirrors" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...]
host[:port]/namespace[/namespace...]/repo for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table source string source matches the repository that users refer to, e.g. in image pull specifications. Setting source to a registry hostname, e.g. docker.io, quay.io, or registry.redhat.io, will match the image pull specification of the corresponding registry. "source" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] host[:port]/namespace[/namespace...]/repo [*.]host for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table 12.1.4. .status Description status contains the observed state of the resource. Type object 12.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/imagedigestmirrorsets DELETE : delete collection of ImageDigestMirrorSet GET : list objects of kind ImageDigestMirrorSet POST : create an ImageDigestMirrorSet /apis/config.openshift.io/v1/imagedigestmirrorsets/{name} DELETE : delete an ImageDigestMirrorSet GET : read the specified ImageDigestMirrorSet PATCH : partially update the specified ImageDigestMirrorSet PUT : replace the specified ImageDigestMirrorSet /apis/config.openshift.io/v1/imagedigestmirrorsets/{name}/status GET : read status of the specified ImageDigestMirrorSet PATCH : partially update status of the specified ImageDigestMirrorSet PUT : replace status of the specified ImageDigestMirrorSet 12.2.1. /apis/config.openshift.io/v1/imagedigestmirrorsets HTTP method DELETE Description delete collection of ImageDigestMirrorSet Table 12.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageDigestMirrorSet Table 12.2. HTTP responses HTTP code Response body 200 - OK ImageDigestMirrorSetList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageDigestMirrorSet Table 12.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.4. Body parameters Parameter Type Description body ImageDigestMirrorSet schema Table 12.5.
HTTP responses HTTP code Response body 200 - OK ImageDigestMirrorSet schema 201 - Created ImageDigestMirrorSet schema 202 - Accepted ImageDigestMirrorSet schema 401 - Unauthorized Empty 12.2.2. /apis/config.openshift.io/v1/imagedigestmirrorsets/{name} Table 12.6. Global path parameters Parameter Type Description name string name of the ImageDigestMirrorSet HTTP method DELETE Description delete an ImageDigestMirrorSet Table 12.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 12.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageDigestMirrorSet Table 12.9. HTTP responses HTTP code Response body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageDigestMirrorSet Table 12.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.11. HTTP responses HTTP code Response body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageDigestMirrorSet Table 12.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.13. Body parameters Parameter Type Description body ImageDigestMirrorSet schema Table 12.14. HTTP responses HTTP code Response body 200 - OK ImageDigestMirrorSet schema 201 - Created ImageDigestMirrorSet schema 401 - Unauthorized Empty 12.2.3. /apis/config.openshift.io/v1/imagedigestmirrorsets/{name}/status Table 12.15. Global path parameters Parameter Type Description name string name of the ImageDigestMirrorSet HTTP method GET Description read status of the specified ImageDigestMirrorSet Table 12.16. HTTP responses HTTP code Response body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageDigestMirrorSet Table 12.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.18. HTTP responses HTTP code Response body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageDigestMirrorSet Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body ImageDigestMirrorSet schema Table 12.21. HTTP responses HTTP code Response body 200 - OK ImageDigestMirrorSet schema 201 - Created ImageDigestMirrorSet schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/imagedigestmirrorset-config-openshift-io-v1 |
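For orientation, the following is a minimal, hypothetical sketch of creating an ImageDigestMirrorSet with the spec fields described in this chapter; the object name example-idms, the source repository, and the mirror hostname are placeholder values and are not mandated by the API.

oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: example-idms                           # hypothetical object name
spec:
  imageDigestMirrors:
  - source: registry.redhat.io/product/repo    # repository referenced in pull specifications
    mirrors:
    - mirror.local/redhat/product/repo         # digest-based pulls may be served from this mirror
EOF

# List the collection endpoint shown above to confirm the object exists.
oc get imagedigestmirrorsets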
Chapter 4. Clair security scanner | Chapter 4. Clair security scanner Clair v4 (Clair) is an open source application that leverages static code analyses for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Red Hat Quay and can be used in both standalone and Operator deployments. It can be run in highly scalable configurations, where components can be scaled separately as appropriate for enterprise environments. For more information about Clair security scanner, see Vulnerability reporting with Clair on Red Hat Quay . | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/managing_access_and_permissions/clair-vulnerability-scanner |
Chapter 5. GFS2 file system repair When nodes fail with the file system mounted, file system journaling allows fast recovery. However, if a storage device loses power or is physically disconnected, file system corruption may occur. (Journaling cannot be used to recover from storage subsystem failures.) When that type of corruption occurs, you can recover the GFS2 file system by using the fsck.gfs2 command. Important The fsck.gfs2 command must be run only on a file system that is unmounted from all nodes. When the file system is being managed as a Pacemaker cluster resource, you can disable the file system resource, which unmounts the file system. After running the fsck.gfs2 command, you enable the file system resource again. The timeout value specified with the --wait option of the pcs resource disable command is expressed in seconds. Note that even if a file system is part of a resource group, as in an encrypted file system deployment, you need to disable only the file system resource in order to run the fsck command on the file system. You must not disable the entire resource group. To ensure that the fsck.gfs2 command does not run on a GFS2 file system at boot time, you can set the run_fsck parameter of the options argument when creating the GFS2 file system resource in a cluster. Specifying "run_fsck=no" indicates that the fsck command should not be run at boot time. 5.1. Determining required memory for running fsck.gfs2 Running the fsck.gfs2 command may require system memory above and beyond the memory used for the operating system and kernel. Larger file systems in particular may require additional memory to run this command. The following table shows approximate values of memory that may be required to run fsck.gfs2 on GFS2 file systems that are 1TB, 10TB, and 100TB in size with a block size of 4K. GFS2 file system size Approximate memory required to run fsck.gfs2 1 TB 0.16 GB 10 TB 1.6 GB 100 TB 16 GB Note that a smaller block size for the file system would require a larger amount of memory. For example, GFS2 file systems with a block size of 1K would require four times the amount of memory indicated in this table. 5.2. Repairing a GFS2 file system The format of the fsck.gfs2 command to repair a GFS2 file system is as follows: -y The -y flag causes all questions to be answered with yes . With the -y flag specified, the fsck.gfs2 command does not prompt you for an answer before making changes. BlockDevice Specifies the block device where the GFS2 file system resides. In this example, the GFS2 file system residing on block device /dev/testvg/testlv is repaired. All queries to repair are automatically answered with yes . | [
"pcs resource disable --wait= timeoutvalue resource_id [fsck.gfs2] pcs resource enable resource_id",
"fsck.gfs2 -y BlockDevice",
"fsck.gfs2 -y /dev/testvg/testlv Initializing fsck Validating Resource Group index. Level 1 RG check. (level 1 passed) Clearing journals (this may take a while) Journals cleared. Starting pass1 Pass1 complete Starting pass1b Pass1b complete Starting pass1c Pass1c complete Starting pass2 Pass2 complete Starting pass3 Pass3 complete Starting pass4 Pass4 complete Starting pass5 Pass5 complete Writing changes to disk fsck.gfs2 complete"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_gfs2_file_systems/assembly_gfs2-filesystem-repair-configuring-gfs2-file-systems |
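As a worked illustration of the repair procedure above, the following is a minimal sketch; the resource ID GFS2-fs and the 120-second timeout are assumed placeholder values, while the block device /dev/testvg/testlv reuses the example from this chapter.

pcs resource disable --wait=120 GFS2-fs     # unmount the GFS2 file system on all nodes
fsck.gfs2 -y /dev/testvg/testlv             # repair, answering all queries with yes
pcs resource enable GFS2-fs                 # remount the file system across the cluster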
Chapter 6. Gaining Privileges | Chapter 6. Gaining Privileges System administrators, and in some cases users, need to perform certain tasks with administrative access. Accessing the system as the root user is potentially dangerous and can lead to widespread damage to the system and data. This chapter covers ways to gain administrative privileges using the setuid programs such as su and sudo . These programs allow specific users to perform tasks which would normally be available only to the root user while maintaining a higher level of control and system security. See the Red Hat Enterprise Linux 7 Security Guide for more information on administrative controls, potential dangers and ways to prevent data loss resulting from improper use of privileged access. 6.1. Configuring Administrative Access Using the su Utility When a user executes the su command, they are prompted for the root password and, after authentication, are given a root shell prompt. Once logged in using the su command, the user is the root user and has absolute administrative access to the system. Note that this access is still subject to the restrictions imposed by SELinux, if it is enabled. In addition, once a user has become root , it is possible for them to use the su command to change to any other user on the system without being prompted for a password. Because this program is so powerful, administrators within an organization may want to limit who has access to the command. One of the simplest ways to do this is to add users to the special administrative group called wheel . To do this, type the following command as root : In the command, replace username with the user name you want to add to the wheel group. You can also use the Users settings tool to modify group memberships, as follows. Note that you need administrator privileges to perform this procedure. Press the Super key to enter the Activities Overview, type Users and then press Enter . The Users settings tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . To enable making changes, click the Unlock button, and enter a valid administrator password. Click a user icon in the left column to display the user's properties in the right pane. Change the Account Type from Standard to Administrator . This will add the user to the wheel group. See Section 4.2, "Managing Users in a Graphical Environment" for more information about the Users tool. After you add the desired users to the wheel group, it is advisable to only allow these specific users to use the su command. To do this, edit the Pluggable Authentication Module (PAM) configuration file for su , /etc/pam.d/su . Open this file in a text editor and uncomment the following line by removing the # character: This change means that only members of the administrative group wheel can switch to another user using the su command. 6.2. Configuring Administrative Access Using the sudo Utility The sudo command offers another approach to giving users administrative access. When trusted users precede an administrative command with sudo , they are prompted for their own password. Then, when they have been authenticated and assuming that the command is permitted, the administrative command is executed as if they were the root user. 
The basic format of the sudo command is as follows: In the above example, command would be replaced by a command normally reserved for the root user, such as mount . The sudo command allows for a high degree of flexibility. For instance, only users listed in the /etc/sudoers configuration file are allowed to use the sudo command and the command is executed in the user's shell, not a root shell. This means the root shell can be completely disabled as shown in the Red Hat Enterprise Linux 7 Security Guide . Each successful authentication using the sudo command is logged to the file /var/log/messages and the command issued along with the issuer's user name is logged to the file /var/log/secure . If additional logging is required, use the pam_tty_audit module to enable TTY auditing for specified users by adding the following line to your /etc/pam.d/system-auth file: where pattern represents a comma-separated listing of users with an optional use of globs. For example, the following configuration will enable TTY auditing for the root user and disable it for all other users: Important Configuring the pam_tty_audit PAM module for TTY auditing records only TTY input. This means that, when the audited user logs in, pam_tty_audit records the exact keystrokes the user makes into the /var/log/audit/audit.log file. For more information, see the pam_tty_audit(8) manual page. Another advantage of the sudo command is that an administrator can allow different users access to specific commands based on their needs. Administrators wanting to edit the sudo configuration file, /etc/sudoers , should use the visudo command. To give someone full administrative privileges, type visudo and add a line similar to the following in the user privilege specification section: This example states that the user, juan , can use sudo from any host and execute any command. The example below illustrates the granularity possible when configuring sudo : This example states that any member of the users system group can issue the command /sbin/shutdown -h now as long as it is issued from the console. The man page for sudoers has a detailed listing of options for this file. You can also configure sudo users who do not need to provide any password by using the NOPASSWD option in the /etc/sudoers file: However, even for such users, sudo runs Pluggable Authentication Module (PAM) account management modules, which enables checking for restrictions imposed by PAM modules outside of the authentication phase. This ensures that PAM modules work properly. For example, in case of the pam_time module, the time-based account restriction does not fail. Warning Always include sudo in the list of allowed services in all PAM-based access control rules. Otherwise, users will receive a "permission denied" error message when they try to access sudo but access is forbidden based on current access control rules. For more information, see the Red Hat Knowledgebase article After patching to Red Hat Enterprise Linux 7.6, sudo gives a permission denied error. . Important There are several potential risks to keep in mind when using the sudo command. You can avoid them by editing the /etc/sudoers configuration file using visudo as described above. Leaving the /etc/sudoers file in its default state gives every user in the wheel group unlimited root access. By default, sudo stores the password for a five minute timeout period. Any subsequent uses of the command during this period will not prompt the user for a password. 
This could be exploited by an attacker if the user leaves his workstation unattended and unlocked while still being logged in. This behavior can be changed by adding the following line to the /etc/sudoers file: where value is the desired timeout length in minutes. Setting the value to 0 causes sudo to require a password every time. If an account is compromised, an attacker can use sudo to open a new shell with administrative privileges: Opening a new shell as root in this or similar fashion gives the attacker administrative access for a theoretically unlimited amount of time, bypassing the timeout period specified in the /etc/sudoers file and never requiring the attacker to input a password for sudo again until the newly opened session is closed. 6.3. Additional Resources While programs allowing users to gain administrative privileges are a potential security risk, security itself is beyond the scope of this particular book. You should therefore refer to the resources listed below for more information regarding security and privileged access. Installed Documentation su (1) - The manual page for su provides information regarding the options available with this command. sudo (8) - The manual page for sudo includes a detailed description of this command and lists options available for customizing its behavior. pam (8) - The manual page describing the use of Pluggable Authentication Modules (PAM) for Linux. Online Documentation Red Hat Enterprise Linux 7 Security Guide - The Security Guide for Red Hat Enterprise Linux 7 provides a more detailed look at potential security issues pertaining to the setuid programs as well as techniques used to alleviate these risks. See Also Chapter 4, Managing Users and Groups documents how to manage system users and groups in the graphical user interface and on the command line. | [
"~]# usermod -a -G wheel username",
"#auth required pam_wheel.so use_uid",
"sudo command",
"session required pam_tty_audit.so disable= pattern enable= pattern",
"session required pam_tty_audit.so disable=* enable=root",
"juan ALL=(ALL) ALL",
"%users localhost=/usr/sbin/shutdown -h now",
"user_name ALL=(ALL) NOPASSWD: ALL",
"Defaults timestamp_timeout= value",
"sudo /bin/bash"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/chap-gaining_privileges |
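To tie the steps of this chapter together, here is a minimal sketch run as root; the user name juan follows the chapter's example, and the device and mount point in the final command are hypothetical.

usermod -a -G wheel juan     # add the user to the administrative wheel group for su
# In /etc/pam.d/su, uncomment "auth required pam_wheel.so use_uid" to restrict su to wheel members.
visudo                       # add "juan ALL=(ALL) ALL" in the user privilege specification section
sudo mount /dev/sdb1 /mnt    # juan can now run privileged commands after entering their own password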
25.20. Controlling the SCSI Command Timer and Device Status | 25.20. Controlling the SCSI Command Timer and Device Status The Linux SCSI layer sets a timer on each command. When this timer expires, the SCSI layer will quiesce the host bus adapter (HBA) and wait for all outstanding commands to either time out or complete. Afterwards, the SCSI layer will activate the driver's error handler. When the error handler is triggered, it attempts the following operations in order (until one successfully executes): Abort the command. Reset the device. Reset the bus. Reset the host. If all of these operations fail, the device will be set to the offline state. When this occurs, all I/O to that device will be failed, until the problem is corrected and the user sets the device to running . The process is different, however, if a device uses the Fibre Channel protocol and the rport is blocked. In such cases, the drivers wait for several seconds for the rport to become online again before activating the error handler. This prevents devices from becoming offline due to temporary transport problems. Device States To display the state of a device, use: To set a device to the running state, use: Command Timer To control the command timer, modify the /sys/block/ device-name /device/timeout file: Replace value in the command with the timeout value, in seconds, that you want to implement. | [
"cat /sys/block/ device-name /device/state",
"echo running > /sys/block/ device-name /device/state",
"echo value > /sys/block/ device-name /device/timeout"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/scsi-command-timer-device-status |
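The following minimal sketch, run as root, exercises the sysfs files described above; the device name sdb and the 60-second timeout are hypothetical values.

cat /sys/block/sdb/device/state               # display the current device state
echo running > /sys/block/sdb/device/state    # return an offline device to the running state
echo 60 > /sys/block/sdb/device/timeout       # set the SCSI command timer to 60 seconds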
Chapter 49. messaging | Chapter 49. messaging This chapter describes the commands under the messaging command. 49.1. messaging claim create Create claim and return a list of claimed messages Usage: Table 49.1. Positional arguments Value Summary <queue_name> Name of the queue to be claim Table 49.2. Command arguments Value Summary -h, --help Show this help message and exit --ttl <ttl> Time to live in seconds for claim --grace <grace> The message grace period in seconds --limit <limit> Claims a set of messages, up to limit Table 49.3. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 49.4. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.2. messaging claim query Display claim details Usage: Table 49.7. Positional arguments Value Summary <queue_name> Name of the claimed queue <claim_id> Id of the claim Table 49.8. Command arguments Value Summary -h, --help Show this help message and exit Table 49.9. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 49.10. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.3. messaging claim release Delete a claim Usage: Table 49.13. Positional arguments Value Summary <queue_name> Name of the claimed queue <claim_id> Claim id to delete Table 49.14. Command arguments Value Summary -h, --help Show this help message and exit 49.4. messaging claim renew Renew a claim Usage: Table 49.15. Positional arguments Value Summary <queue_name> Name of the claimed queue <claim_id> Claim id Table 49.16. Command arguments Value Summary -h, --help Show this help message and exit --ttl <ttl> Time to live in seconds for claim --grace <grace> The message grace period in seconds Table 49.17. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 49.18. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.20. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.5. messaging flavor create Create a pool flavor Usage: Table 49.21. Positional arguments Value Summary <flavor_name> Name of the flavor Table 49.22. Command arguments Value Summary -h, --help Show this help message and exit --pool_list <pool_list> Pool list for flavor --capabilities <capabilities> Describes flavor-specific capabilities, this option is only available in client api version < 2 . Table 49.23. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.24. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.25. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.26. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.6. messaging flavor delete Delete a pool flavor Usage: Table 49.27. Positional arguments Value Summary <flavor_name> Name of the flavor Table 49.28. Command arguments Value Summary -h, --help Show this help message and exit 49.7. messaging flavor list List available pool flavors Usage: Table 49.29. Command arguments Value Summary -h, --help Show this help message and exit --marker <flavor_name> Flavor's paging marker --limit <limit> Page size limit --detailed If show detailed capabilities of flavor Table 49.30. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 49.31. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.33. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.8. messaging flavor show Display pool flavor details Usage: Table 49.34. Positional arguments Value Summary <flavor_name> Flavor to display (name) Table 49.35. Command arguments Value Summary -h, --help Show this help message and exit Table 49.36. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.37. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.38. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.39. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.9. messaging flavor update Update a flavor's attributes Usage: Table 49.40. Positional arguments Value Summary <flavor_name> Name of the flavor Table 49.41. Command arguments Value Summary -h, --help Show this help message and exit --pool_list <pool_list> Pool list the flavor sits on --capabilities <capabilities> Describes flavor-specific capabilities. Table 49.42. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.43. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.44. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.45. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.10. messaging health Display detailed health status of Zaqar server Usage: Table 49.46. Command arguments Value Summary -h, --help Show this help message and exit 49.11. messaging homedoc Display detailed resource doc of Zaqar server Usage: Table 49.47. Command arguments Value Summary -h, --help Show this help message and exit 49.12. messaging message list List all messages for a given queue Usage: Table 49.48. Positional arguments Value Summary <queue_name> Name of the queue Table 49.49. Command arguments Value Summary -h, --help Show this help message and exit --message-ids <message_ids> List of messages' ids to retrieve --limit <limit> Maximum number of messages to get --echo Whether to get this client's own messages --include-claimed Whether to include claimed messages --include-delayed Whether to include delayed messages --client-id <client_id> A uuid for each client instance. Table 49.50. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 49.51. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.52. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.53. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.13. messaging message post Post messages for a given queue Usage: Table 49.54. Positional arguments Value Summary <queue_name> Name of the queue <messages> Messages to be posted. Table 49.55. Command arguments Value Summary -h, --help Show this help message and exit --client-id <client_id> A uuid for each client instance. 49.14. messaging ping Check if Zaqar server is alive or not Usage: Table 49.56. Command arguments Value Summary -h, --help Show this help message and exit Table 49.57. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.58. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.59. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.60. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.15. messaging pool create Create a pool Usage: Table 49.61. Positional arguments Value Summary <pool_name> Name of the pool <pool_uri> Storage engine uri <pool_weight> Weight of the pool Table 49.62. Command arguments Value Summary -h, --help Show this help message and exit --flavor <flavor> Flavor of the pool --pool_options <pool_options> An optional request component related to storage- specific options Table 49.63. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.64. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.65. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.66. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.16. messaging pool delete Delete a pool Usage: Table 49.67. Positional arguments Value Summary <pool_name> Name of the pool Table 49.68. Command arguments Value Summary -h, --help Show this help message and exit 49.17. messaging pool list List available Pools Usage: Table 49.69. Command arguments Value Summary -h, --help Show this help message and exit --marker <pool_name> Pool's paging marker --limit <limit> Page size limit --detailed Detailed output Table 49.70. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 49.71. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.72. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.73. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.18. messaging pool show Display pool details Usage: Table 49.74. Positional arguments Value Summary <pool_name> Pool to display (name) Table 49.75. Command arguments Value Summary -h, --help Show this help message and exit Table 49.76. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.77. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.78. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.79. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.19. messaging pool update Update a pool attribute Usage: Table 49.80. Positional arguments Value Summary <pool_name> Name of the pool Table 49.81. Command arguments Value Summary -h, --help Show this help message and exit --pool_uri <pool_uri> Storage engine uri --pool_weight <pool_weight> Weight of the pool --flavor <flavor> Flavor of the pool --pool_options <pool_options> An optional request component related to storage- specific options Table 49.82. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.83. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.84. 
Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.85. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.20. messaging queue create Create a queue Usage: Table 49.86. Positional arguments Value Summary <queue_name> Name of the queue Table 49.87. Command arguments Value Summary -h, --help Show this help message and exit Table 49.88. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.89. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.90. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.91. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.21. messaging queue delete Delete a queue Usage: Table 49.92. Positional arguments Value Summary <queue_name> Name of the queue Table 49.93. Command arguments Value Summary -h, --help Show this help message and exit 49.22. messaging queue get metadata Get queue metadata Usage: Table 49.94. Positional arguments Value Summary <queue_name> Name of the queue Table 49.95. Command arguments Value Summary -h, --help Show this help message and exit Table 49.96. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.97. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.98. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.99. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.23. messaging queue list List available queues Usage: Table 49.100. Command arguments Value Summary -h, --help Show this help message and exit --marker <queue_id> Queue's paging marker --limit <limit> Page size limit --detailed If show detailed information of queue Table 49.101. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 49.102. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.103. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.104. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.24. messaging queue purge Purge a queue Usage: Table 49.105. Positional arguments Value Summary <queue_name> Name of the queue Table 49.106. Command arguments Value Summary -h, --help Show this help message and exit --resource_types <resource_types> Resource types want to be purged. 49.25. messaging queue set metadata Set queue metadata Usage: Table 49.107. Positional arguments Value Summary <queue_name> Name of the queue <queue_metadata> Queue metadata, all the metadata of the queue will be replaced by queue_metadata Table 49.108. Command arguments Value Summary -h, --help Show this help message and exit 49.26. messaging queue signed url Create a pre-signed url Usage: Table 49.109. Positional arguments Value Summary <queue_name> Name of the queue Table 49.110. Command arguments Value Summary -h, --help Show this help message and exit --paths <paths> Allowed paths in a comma-separated list. options: messages, subscriptions, claims --ttl-seconds <ttl_seconds> Length of time (in seconds) until the signature expires --methods <methods> Http methods to allow as a comma-separated list. Options: GET, HEAD, OPTIONS, POST, PUT, DELETE Table 49.111. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.112. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.113. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.114. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.27. messaging queue stats Get queue stats Usage: Table 49.115. Positional arguments Value Summary <queue_name> Name of the queue Table 49.116. Command arguments Value Summary -h, --help Show this help message and exit Table 49.117. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.118. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.119. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.120. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.28. messaging subscription create Create a subscription for queue Usage: Table 49.121. Positional arguments Value Summary <queue_name> Name of the queue to subscribe to <subscriber> Subscriber which will be notified <ttl> Time to live of the subscription in seconds Table 49.122. Command arguments Value Summary -h, --help Show this help message and exit --options <options> Metadata of the subscription in json format Table 49.123. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.124. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.125. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.126. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.29. messaging subscription delete Delete a subscription Usage: Table 49.127. Positional arguments Value Summary <queue_name> Name of the queue for the subscription <subscription_id> Id of the subscription Table 49.128. Command arguments Value Summary -h, --help Show this help message and exit 49.30. messaging subscription list List available subscriptions Usage: Table 49.129. Positional arguments Value Summary <queue_name> Name of the queue to subscribe to Table 49.130. Command arguments Value Summary -h, --help Show this help message and exit --marker <subscription_id> Subscription's paging marker, the id of the last subscription of the page --limit <limit> Page size limit, default value is 20 Table 49.131. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 49.132. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 49.133. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.134. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.31. messaging subscription show Display subscription details Usage: Table 49.135. Positional arguments Value Summary <queue_name> Name of the queue to subscribe to <subscription_id> Id of the subscription Table 49.136. Command arguments Value Summary -h, --help Show this help message and exit Table 49.137. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.138. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.139. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.140. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 49.32. messaging subscription update Update a subscription Usage: Table 49.141. Positional arguments Value Summary <queue_name> Name of the queue to subscribe to <subscription_id> Id of the subscription Table 49.142. Command arguments Value Summary -h, --help Show this help message and exit --subscriber <subscriber> Subscriber which will be notified --ttl <ttl> Time to live of the subscription in seconds --options <options> Metadata of the subscription in json format Table 49.143. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 49.144. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 49.145. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 49.146. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack messaging claim create [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--ttl <ttl>] [--grace <grace>] [--limit <limit>] <queue_name>",
"openstack messaging claim query [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] <queue_name> <claim_id>",
"openstack messaging claim release [-h] <queue_name> <claim_id>",
"openstack messaging claim renew [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--ttl <ttl>] [--grace <grace>] <queue_name> <claim_id>",
"openstack messaging flavor create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--pool_list <pool_list>] [--capabilities <capabilities>] <flavor_name>",
"openstack messaging flavor delete [-h] <flavor_name>",
"openstack messaging flavor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker <flavor_name>] [--limit <limit>] [--detailed]",
"openstack messaging flavor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flavor_name>",
"openstack messaging flavor update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--pool_list <pool_list>] [--capabilities <capabilities>] <flavor_name>",
"openstack messaging health [-h]",
"openstack messaging homedoc [-h]",
"openstack messaging message list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--message-ids <message_ids>] [--limit <limit>] [--echo] [--include-claimed] [--include-delayed] [--client-id <client_id>] <queue_name>",
"openstack messaging message post [-h] [--client-id <client_id>] <queue_name> <messages>",
"openstack messaging ping [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]",
"openstack messaging pool create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--flavor <flavor>] [--pool_options <pool_options>] <pool_name> <pool_uri> <pool_weight>",
"openstack messaging pool delete [-h] <pool_name>",
"openstack messaging pool list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker <pool_name>] [--limit <limit>] [--detailed]",
"openstack messaging pool show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <pool_name>",
"openstack messaging pool update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--pool_uri <pool_uri>] [--pool_weight <pool_weight>] [--flavor <flavor>] [--pool_options <pool_options>] <pool_name>",
"openstack messaging queue create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <queue_name>",
"openstack messaging queue delete [-h] <queue_name>",
"openstack messaging queue get metadata [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <queue_name>",
"openstack messaging queue list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker <queue_id>] [--limit <limit>] [--detailed]",
"openstack messaging queue purge [-h] [--resource_types <resource_types>] <queue_name>",
"openstack messaging queue set metadata [-h] <queue_name> <queue_metadata>",
"openstack messaging queue signed url [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--paths <paths>] [--ttl-seconds <ttl_seconds>] [--methods <methods>] <queue_name>",
"openstack messaging queue stats [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <queue_name>",
"openstack messaging subscription create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--options <options>] <queue_name> <subscriber> <ttl>",
"openstack messaging subscription delete [-h] <queue_name> <subscription_id>",
"openstack messaging subscription list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker <subscription_id>] [--limit <limit>] <queue_name>",
"openstack messaging subscription show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <queue_name> <subscription_id>",
"openstack messaging subscription update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--subscriber <subscriber>] [--ttl <ttl>] [--options <options>] <queue_name> <subscription_id>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/messaging |
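To see how the messaging commands in this chapter fit together, the following sketch creates a queue, inspects its statistics, generates a pre-signed URL, and attaches a webhook subscription. The queue name demo-queue, the webhook URL, the TTL values, and the retries metadata are hypothetical placeholders chosen for illustration; only the command and option names come from the reference above.

# Create a queue and confirm that it is listed (names are examples only)
openstack messaging queue create demo-queue
openstack messaging queue list

# Show queue statistics in JSON format
openstack messaging queue stats demo-queue -f json

# Generate a pre-signed URL that allows GET and POST on messages for one hour
openstack messaging queue signed url demo-queue --paths messages --methods GET,POST --ttl-seconds 3600

# Subscribe a webhook to the queue for one hour, with arbitrary subscription metadata
openstack messaging subscription create demo-queue http://webhook.example.com/notify 3600 --options '{"retries": 3}'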
Chapter 2. Managing your cluster resources | Chapter 2. Managing your cluster resources You can apply global configuration options in OpenShift Container Platform. Operators apply these configuration settings across the cluster. 2.1. Interacting with your cluster resources You can interact with cluster resources by using the OpenShift CLI ( oc ) tool in OpenShift Container Platform. The cluster resources that you see after running the oc api-resources command can be edited. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the web console or you have installed the oc CLI tool. Procedure To see which configuration Operators have been applied, run the following command: $ oc api-resources -o name | grep config.openshift.io To see what cluster resources you can configure, run the following command: $ oc explain <resource_name>.config.openshift.io To see the configuration of custom resource definition (CRD) objects in the cluster, run the following command: $ oc get <resource_name>.config -o yaml To edit the cluster resource configuration, run the following command: $ oc edit <resource_name>.config -o yaml | [
"oc api-resources -o name | grep config.openshift.io",
"oc explain <resource_name>.config.openshift.io",
"oc get <resource_name>.config -o yaml",
"oc edit <resource_name>.config -o yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/support/managing-cluster-resources |
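For a concrete run of the procedure above, the following commands use the Ingress cluster configuration resource. Choosing ingress here is only an assumption for illustration; any resource name returned by the first command can be substituted in the same way.

# List the cluster configuration resources managed by Operators
oc api-resources -o name | grep config.openshift.io

# Inspect the schema and current state of one of them, then edit it
oc explain ingress.config.openshift.io
oc get ingress.config -o yaml
oc edit ingress.config -o yaml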
Chapter 9. Premigration checklists | Chapter 9. Premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the following checklists. 9.1. Resources ❏ If your application uses an internal service network or an external route for communicating with services, the relevant route exists. ❏ If your application uses cluster-level resources, you have re-created them on the target cluster. ❏ You have excluded persistent volumes (PVs), image streams, and other resources that you do not want to migrate. ❏ PV data has been backed up in case an application displays unexpected behavior after migration and corrupts the data. 9.2. Source cluster ❏ The cluster meets the minimum hardware requirements . ❏ You have installed the correct legacy Migration Toolkit for Containers Operator version: operator-3.7.yml on OpenShift Container Platform version 3.7. operator.yml on OpenShift Container Platform versions 3.9 to 4.5. ❏ All nodes have an active OpenShift Container Platform subscription. ❏ You have performed all the run-once tasks . ❏ You have performed all the environment health checks . ❏ You have checked for PVs with abnormal configurations stuck in a Terminating state by running the following command: $ oc get pv ❏ You have checked for pods whose status is other than Running or Completed by running the following command: $ oc get pods --all-namespaces | egrep -v 'Running | Completed' ❏ You have checked for pods with a high restart count by running the following command: $ oc get pods --all-namespaces --field-selector=status.phase=Running \ -o json | jq '.items[]|select(any( .status.containerStatuses[]; \ .restartCount > 3))|.metadata.name' Even if the pods are in a Running state, a high restart count might indicate underlying problems. ❏ You have removed old builds, deployments, and images from each namespace to be migrated by pruning . ❏ The internal registry uses a supported storage type . ❏ Direct image migration only: The internal registry is exposed to external traffic. ❏ You can read and write images to the registry. ❏ The etcd cluster is healthy. ❏ The average API server response time on the source cluster is less than 50 ms. ❏ The cluster certificates are valid for the duration of the migration process. ❏ You have checked for pending certificate-signing requests by running the following command: $ oc get csr -A | grep pending -i ❏ The identity provider is working. 9.3. Target cluster ❏ You have installed Migration Toolkit for Containers Operator version 1.5.1. ❏ All MTC prerequisites are met. ❏ The cluster meets the minimum hardware requirements for the specific platform and installation method, for example, on bare metal . ❏ The cluster has storage classes defined for the storage types used by the source cluster, for example, block volume, file system, or object storage. Note NFS does not require a defined storage class. ❏ The cluster has the correct network configuration and permissions to access external services, for example, databases, source code repositories, container image registries, and CI/CD tools. ❏ External applications and services that use services provided by the cluster have the correct network configuration and permissions to access the cluster. ❏ Internal container image dependencies are met.
If an application uses an internal image in the openshift namespace that is not supported by OpenShift Container Platform 4.9, you can manually update the OpenShift Container Platform 3 image stream tag with podman . ❏ The target cluster and the replication repository have sufficient storage space. ❏ The identity provider is working. ❏ DNS records for your application exist on the target cluster. ❏ Set the value of the annotation.openshift.io/host.generated parameter to true for each OpenShift Container Platform route to update its host name for the target cluster. Otherwise, the migrated routes retain the source cluster host name. ❏ Certificates that your application uses exist on the target cluster. ❏ You have configured appropriate firewall rules on the target cluster. ❏ You have correctly configured load balancing on the target cluster. ❏ If you migrate objects to an existing namespace on the target cluster that has the same name as the namespace being migrated from the source, the target namespace contains no objects of the same name and type as the objects being migrated. Note Do not create namespaces for your application on the target cluster before migration because this might cause quotas to change. 9.4. Performance ❏ The migration network has a minimum throughput of 10 Gbps. ❏ The clusters have sufficient resources for migration. Note Clusters require additional memory, CPUs, and storage in order to run a migration on top of normal workloads. Actual resource requirements depend on the number of Kubernetes resources being migrated in a single migration plan. You must test migrations in a non-production environment in order to estimate the resource requirements. ❏ The memory and CPU usage of the nodes are healthy. ❏ The etcd disk performance of the clusters has been checked with fio . | [
"oc get pv",
"oc get pods --all-namespaces | egrep -v 'Running | Completed'",
"oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'",
"oc get csr -A | grep pending -i"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/migrating_from_version_3_to_4/premigration-checklists-3-4 |
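Because the source cluster checks above are usually repeated before every migration attempt, it can help to collect them into a small script. This is only a convenience sketch built from the commands in the checklist; the restart-count threshold of 3 is the value used in the checklist itself.

#!/bin/bash
# Premigration health checks for the source cluster (sketch)
echo "PVs stuck in a Terminating state:"
oc get pv | grep Terminating

echo "Pods whose status is other than Running or Completed:"
oc get pods --all-namespaces | egrep -v 'Running | Completed'

echo "Pods with a high restart count:"
oc get pods --all-namespaces --field-selector=status.phase=Running \
  -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'

echo "Pending certificate-signing requests:"
oc get csr -A | grep pending -i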
Chapter 4. Related Information | Chapter 4. Related Information Where can I find documentation for SAP products on RHEL and other Red Hat products? How to tie a system to a specific update of Red Hat Enterprise Linux? Upgrading SAP environments from RHEL 8 to RHEL 9 Red Hat HA Solutions for SAP HANA, S/4HANA and NetWeaver based SAP Applications | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/rhel_for_sap_subscriptions_and_repositories/asmb_related_info_rhel-for-sap-subscriptions-and-repositories-9 |
Chapter 12. Configuring providers | Chapter 12. Configuring providers The server is built with extensibility in mind and for that it provides a number of Service Provider Interfaces or SPIs, each one responsible for providing a specific capability to the server. In this chapter, you are going to understand the core concepts around the configuration of SPIs and their respective providers. After reading this chapter, you should be able to use the concepts and the steps herein explained to install, uninstall, enable, disable, and configure any provider, including those you have implemented to extend the server capabilities in order to better fulfill your requirements. 12.1. Configuration option format Providers can be configured by using a specific configuration format. The format consists of: The <spi-id> is the name of the SPI you want to configure. The <provider-id> is the id of the provider you want to configure. This is the id set to the corresponding provider factory implementation. The <property> is the actual name of the property you want to set for a given provider. All those names (for spi, provider, and property) should be in lower case and if the name is in camel-case such as myKeycloakProvider , it should include dashes ( - ) before upper-case letters as follows: my-keycloak-provider . Taking the HttpClientSpi SPI as an example, the name of the SPI is connectionsHttpClient and one of the provider implementations available is named default . In order to set the connectionPoolSize property you would use a configuration option as follows: 12.2. Setting a provider configuration option Provider configuration options are provided when starting the server. See all support configuration sources and formats for options in Configuring Red Hat build of Keycloak . For example via a command line option: Setting the connection-pool-size for the default provider of the connections-http-client SPI bin/kc.[sh|bat] start --spi-connections-http-client-default-connection-pool-size=10 12.3. Configuring a default provider Depending on the SPI, multiple provider implementations can co-exist but only one of them is going to be used at runtime. For these SPIs, a default provider is the primary implementation that is going to be active and used at runtime. To configure a provider as the default you should run the build command as follows: Marking the mycustomprovider provider as the default provider for the email-template SPI bin/kc.[sh|bat] build --spi-email-template-provider=mycustomprovider In the example above, we are using the provider property to set the id of the provider we want to mark as the default. 12.4. Enabling and disabling a provider To enable or disable a provider you should run the build command as follows: Enabling a provider bin/kc.[sh|bat] build --spi-email-template-mycustomprovider-enabled=true To disable a provider, use the same command and set the enabled property to false . 12.5. Installing and uninstalling a provider Custom providers should be packaged in a Java Archive (JAR) file and copied to the providers directory of the distribution. After that, you must run the build command in order to update the server's provider registry with the implementations from the JAR file. This step is needed in order to optimize the server runtime so that all providers are known ahead-of-time rather than discovered only when starting the server or at runtime. To uninstall a provider, you should remove the JAR file from the providers directory and run the build command again. 12.6. 
Using third-party dependencies When implementing a provider you might need to use some third-party dependency that is not available from the server distribution. In this case, you should copy any additional dependency to the providers directory and run the build command. Once you do that, the server is going to make these additional dependencies available at runtime for any provider that depends on them. 12.7. References Configuring Red Hat build of Keycloak Server Developer Documentation | [
"spi-<spi-id>-<provider-id>-<property>=<value>",
"spi-connections-http-client-default-connection-pool-size=10",
"bin/kc.[sh|bat] start --spi-connections-http-client-default-connection-pool-size=10",
"bin/kc.[sh|bat] build --spi-email-template-provider=mycustomprovider",
"bin/kc.[sh|bat] build --spi-email-template-mycustomprovider-enabled=true"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/configuration-provider- |
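As an end-to-end sketch of the provider workflow described above, the following commands copy a custom provider JAR into the providers directory, rebuild the provider registry, and start the server with a provider option. The JAR name my-custom-provider.jar and the /opt/keycloak install location are assumptions for illustration; the SPI, provider, and option names are the ones used in this chapter's own examples, and the two build options are combined into a single build invocation.

# Install the provider JAR (adjust /opt/keycloak to your distribution path)
cp my-custom-provider.jar /opt/keycloak/providers/

# Rebuild the provider registry, marking the provider as default for its SPI and enabling it
bin/kc.sh build --spi-email-template-provider=mycustomprovider --spi-email-template-mycustomprovider-enabled=true

# Start the server with a runtime provider option
bin/kc.sh start --spi-connections-http-client-default-connection-pool-size=10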
Chapter 2. OpenShift Container Platform overview | Chapter 2. OpenShift Container Platform overview OpenShift Container Platform is a cloud-based Kubernetes container platform. The foundation of OpenShift Container Platform is based on Kubernetes and therefore shares the same technology. It is designed to allow applications and the data centers that support them to expand from just a few machines and applications to thousands of machines that serve millions of clients. OpenShift Container Platform enables you to do the following: Provide developers and IT organizations with cloud application platforms that can be used for deploying applications on secure and scalable resources. Require minimal configuration and management overhead. Bring the Kubernetes platform to customer data centers and cloud. Meet security, privacy, compliance, and governance requirements. With its foundation in Kubernetes, OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. Its implementation in open Red Hat technologies lets you extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments. 2.1. Glossary of common terms for OpenShift Container Platform This glossary defines common Kubernetes and OpenShift Container Platform terms. Kubernetes Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. Containers Containers are application instances and components that run in OCI-compliant containers on the worker nodes. A container is the runtime of an Open Container Initiative (OCI)-compliant image. An image is a binary application. A worker node can run many containers. A node capacity is related to memory and CPU capabilities of the underlying resources whether they are cloud, hardware, or virtualized. Pod A pod is one or more containers deployed together on one host. It consists of a colocated group of containers with shared resources such as volumes and IP addresses. A pod is also the smallest compute unit defined, deployed, and managed. In OpenShift Container Platform, pods replace individual application containers as the smallest deployable unit. Pods are the orchestrated unit in OpenShift Container Platform. OpenShift Container Platform schedules and runs all containers in a pod on the same node. Complex applications are made up of many pods, each with their own containers. They interact externally and also with another inside the OpenShift Container Platform environment. Replica set and replication controller The Kubernetes replica set and the OpenShift Container Platform replication controller are both available. The job of this component is to ensure the specified number of pod replicas are running at all times. If pods exit or are deleted, the replica set or replication controller starts more. If more pods are running than needed, the replica set deletes as many as necessary to match the specified number of replicas. Deployment and DeploymentConfig OpenShift Container Platform implements both Kubernetes Deployment objects and OpenShift Container Platform DeploymentConfigs objects. Users may select either. Deployment objects control how an application is rolled out as pods. They identify the name of the container image to be taken from the registry and deployed as a pod on a node. 
They set the number of replicas of the pod to deploy, creating a replica set to manage the process. The labels indicated instruct the scheduler onto which nodes to deploy the pod. The set of labels is included in the pod definition that the replica set instantiates. Deployment objects are able to update the pods deployed onto the worker nodes based on the version of the Deployment objects and the various rollout strategies for managing acceptable application availability. OpenShift Container Platform DeploymentConfig objects add the additional features of change triggers, which are able to automatically create new versions of the Deployment objects as new versions of the container image are available, or other changes. Service A service defines a logical set of pods and access policies. It provides permanent internal IP addresses and hostnames for other applications to use as pods are created and destroyed. Service layers connect application components together. For example, a front-end web service connects to a database instance by communicating with its service. Services allow for simple internal load balancing across application components. OpenShift Container Platform automatically injects service information into running containers for ease of discovery. Route A route is a way to expose a service by giving it an externally reachable hostname, such as www.example.com. Each route consists of a route name, a service selector, and optionally a security configuration. A router can consume a defined route and the endpoints identified by its service to provide a name that lets external clients reach your applications. While it is easy to deploy a complete multi-tier application, traffic from anywhere outside the OpenShift Container Platform environment cannot reach the application without the routing layer. Build A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process. OpenShift Container Platform leverages Kubernetes by creating containers from build images and pushing them to the integrated registry. Project OpenShift Container Platform uses projects to allow groups of users or developers to work together, serving as the unit of isolation and collaboration. It defines the scope of resources, allows project administrators and collaborators to manage resources, and restricts and tracks the user's resources with quotas and limits. A project is a Kubernetes namespace with additional annotations. It is the central vehicle for managing access to resources for regular users. A project lets a community of users organize and manage their content in isolation from other communities. Users must receive access to projects from administrators. But cluster administrators can allow developers to create their own projects, in which case users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Projects are also known as namespaces. Operators An Operator is a Kubernetes-native application. The goal of an Operator is to put operational knowledge into software. Previously this knowledge only resided in the minds of administrators, various combinations or shell scripts or automation software such as Ansible. It was outside your Kubernetes cluster and hard to integrate. With Operators, all of this changes. 
Operators are purpose-built for your applications. They implement and automate common Day 1 activities such as installation and configuration as well as Day 2 activities such as scaling up and down, reconfiguration, updates, backups, failovers, and restores in a piece of software running inside your Kubernetes cluster by integrating natively with Kubernetes concepts and APIs. This is called a Kubernetes-native application. With Operators, applications must not be treated as a collection of primitives, such as pods, deployments, services, or config maps. Instead, Operators should be treated as a single object that exposes the options that make sense for the application. 2.2. Understanding OpenShift Container Platform OpenShift Container Platform is a Kubernetes environment for managing the lifecycle of container-based applications and their dependencies on various computing platforms, such as bare metal, virtualized, on-premise, and in cloud. OpenShift Container Platform deploys, configures and manages containers. OpenShift Container Platform offers usability, stability, and customization of its components. OpenShift Container Platform uses a number of computing resources, known as nodes. A node has a lightweight, secure operating system based on Red Hat Enterprise Linux (RHEL), known as Red Hat Enterprise Linux CoreOS (RHCOS). After a node is booted and configured, it obtains a container runtime, such as CRI-O or Docker, for managing and running the images of container workloads scheduled to it. The Kubernetes agent, or kubelet, schedules container workloads on the node. The kubelet is responsible for registering the node with the cluster and receiving the details of container workloads. OpenShift Container Platform configures and manages the networking, load balancing and routing of the cluster. OpenShift Container Platform adds cluster services for monitoring the cluster health and performance, logging, and for managing upgrades. The container image registry and OperatorHub provide Red Hat certified products and community-built software for providing various application services within the cluster. These applications and services manage the applications deployed in the cluster, databases, frontends and user interfaces, application runtimes and business automation, and developer services for development and testing of container applications. You can manage applications within the cluster either manually by configuring deployments of containers running from pre-built images or through resources known as Operators. You can build custom images from pre-built images and source code, and store these custom images locally in an internal, private or public registry. The Multicluster Management layer can manage multiple clusters including their deployment, configuration, compliance and distribution of workloads in a single console. 2.3. Installing OpenShift Container Platform The OpenShift Container Platform installation program offers you flexibility. You can use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains or deploy a cluster on infrastructure that you prepare and maintain.
For more information about the installation process, the supported platforms, and choosing a method of installing and preparing your cluster, see the following: OpenShift Container Platform installation overview Installation process Supported platforms for OpenShift Container Platform clusters Selecting a cluster installation type 2.3.1. OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 2.4. Steps 2.4.1. For developers Develop and deploy containerized applications with OpenShift Container Platform. OpenShift Container Platform is a platform for developing and deploying containerized applications. OpenShift Container Platform documentation helps you: Understand OpenShift Container Platform development : Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators. Work with projects : Create projects from the OpenShift Container Platform web console or OpenShift CLI ( oc ) to organize and share the software you develop. Work with applications : Use the Developer perspective in the OpenShift Container Platform web console to create and deploy applications . Use the Topology view to see your applications, monitor status, connect and group components, and modify your code base. Use the developer CLI tool ( odo ) : The odo CLI tool lets developers create single or multi-component applications and automates deployment, build, and service route configurations. It abstracts complex Kubernetes and OpenShift Container Platform concepts, allowing you to focus on developing your applications. Create CI/CD Pipelines : Pipelines are serverless, cloud-native, continuous integration, and continuous deployment systems that run in isolated containers. They use standard Tekton custom resources to automate deployments and are designed for decentralized teams working on microservices-based architecture. Deploy Helm charts : Helm 3 is a package manager that helps developers define, install, and update application packages on Kubernetes. A Helm chart is a packaging format that describes an application that can be deployed using the Helm CLI. Understand image builds : Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials (Git repositories, local binary inputs, and external artifacts). Then, follow examples of build types from basic builds to advanced builds. Create container images : A container image is the most basic building block in OpenShift Container Platform (and Kubernetes) applications. Defining image streams lets you gather multiple versions of an image in one place as you continue its development. 
S2I containers let you insert your source code into a base container that is set up to run code of a particular type, such as Ruby, Node.js, or Python. Create deployments : Use Deployment and DeploymentConfig objects to exert fine-grained management over applications. Manage deployments using the Workloads page or OpenShift CLI ( oc ). Learn rolling, recreate, and custom deployment strategies. Create templates : Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports and other content that defines how an application can be run or built. Understand Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.13. Learn about the Operator Framework and how to deploy applications using installed Operators into your projects. Develop Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.13. Learn the workflow for building, testing, and deploying Operators. Then, create your own Operators based on Ansible or Helm , or configure built-in Prometheus monitoring using the Operator SDK. REST API reference : Learn about OpenShift Container Platform application programming interface endpoints. 2.4.2. For administrators Understand OpenShift Container Platform management : Learn about components of the OpenShift Container Platform 4.13 control plane. See how OpenShift Container Platform control plane and worker nodes are managed and updated through the Machine API and Operators . Manage users and groups : Add users and groups with different levels of permissions to use or modify clusters. Manage authentication : Learn how user, group, and API authentication works in OpenShift Container Platform. OpenShift Container Platform supports multiple identity providers. Manage networking : The cluster network in OpenShift Container Platform is managed by the Cluster Network Operator (CNO). The CNO uses iptables rules in kube-proxy to direct traffic between nodes and pods running on those nodes. The Multus Container Network Interface adds the capability to attach multiple network interfaces to a pod. Using network policy features, you can isolate your pods or permit selected traffic. Manage storage : OpenShift Container Platform allows cluster administrators to configure persistent storage. Manage Operators : Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and installed on their clusters . After you install them, you can run , upgrade , back up, or otherwise manage the Operator on your cluster. Use custom resource definitions (CRDs) to modify the cluster : Cluster features implemented with Operators can be modified with CRDs. Learn to create a CRD and manage resources from CRDs . Set resource quotas : Choose from CPU, memory, and other system resources to set quotas . Prune and reclaim resources : Reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs. Scale and tune clusters : Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment. Understanding the OpenShift Update Service : Learn about installing and managing a local OpenShift Update Service for recommending OpenShift Container Platform updates in disconnected environments. Monitor clusters : Learn to configure the monitoring stack . 
After configuring monitoring, use the web console to access monitoring dashboards . In addition to infrastructure metrics, you can also scrape and view metrics for your own services. Remote health monitoring : OpenShift Container Platform collects anonymized aggregated information about your cluster. Using Telemetry and the Insights Operator, this data is received by Red Hat and used to improve OpenShift Container Platform. You can view the data collected by remote health monitoring . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/getting_started/openshift-overview |
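The glossary entries for deployments, services, and routes above translate into a short CLI workflow. The following sketch assumes a placeholder container image, quay.io/example/my-app:latest, and a placeholder application port of 8080; it is illustrative only, not a recommended production procedure.

# Create a project and a deployment with three replicas (image reference is a placeholder)
oc new-project demo
oc create deployment my-app --image=quay.io/example/my-app:latest --replicas=3

# Expose the deployment internally as a service, then externally as a route
oc expose deployment my-app --port=8080
oc expose service my-app

# Review the resulting pods, service, and route
oc get pods,svc,route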
Chapter 8. Deployment considerations | Chapter 8. Deployment considerations This section provides an overview of general topics to be considered when planning a Red Hat Satellite deployment together with recommendations and references to more specific documentation. 8.1. Satellite Server configuration The first step to a working Satellite infrastructure is installing an instance of Satellite Server on a dedicated Red Hat Enterprise Linux 8 server. For more information about installing Satellite Server in a connected network, see Installing Satellite Server in a connected network environment . On large Satellite deployments, you can improve performance by configuring your Satellite with predefined tuning profiles. For more information, see Tuning Satellite Server with Predefined Profiles in Installing Satellite Server in a connected network environment . For more information about installing Satellite Server in a disconnected network, see Installing Satellite Server in a disconnected network environment . On large Satellite deployments, you can improve performance by configuring your Satellite with predefined tuning profiles. For more information, see Tuning Satellite Server with Predefined Profiles in Installing Satellite Server in a disconnected network environment . Adding Red Hat subscription manifests to Satellite Server A Red Hat Subscription Manifest is a set of encrypted files that contains your subscription information. Satellite Server uses this information to access the CDN and find what repositories are available for the associated subscription. For instructions on how to create and import a Red Hat Subscription Manifest see Managing Red Hat Subscriptions in Managing content . Red Hat Satellite requires a single manifest for each organization configured on the Satellite. If you plan to use the Organization feature of Satellite to manage separate units of your infrastructure under one Red Hat Network account, then assign subscriptions from the one account to per-organization manifests as required. If you plan to have more than one Red Hat Network account, or if you want to manage systems belonging to another entity that is also a Red Hat Network account holder, then you and the other account holder can assign subscriptions, as required, to manifests. A customer that does not have a Satellite subscription can create a Subscription Asset Manager manifest, which can be used with Satellite, if they have other valid subscriptions. You can then use the multiple manifests in one Satellite Server to manage multiple organizations. If you must manage systems but do not have access to the subscriptions for the RPMs, you must use Red Hat Enterprise Linux Satellite Add-On. For more information, see Satellite Add-On . The following diagram shows two Red Hat Network account holders, who want their systems to be managed by the same Satellite installation. In this scenario, Example Corporation 1 can allocate any subset of their 60 subscriptions, in this example they have allocated 30, to a manifest. This can be imported into the Satellite as a distinct Organization. This allows system administrators the ability to manage Example Corporation 1's systems using Satellite completely independently of Example Corporation 2's organizations (R&D, Operations, and Engineering). Figure 8.1. Satellite Server with multiple manifests When creating a Red Hat Subscription Manifest: Add the subscription for Satellite Server to the manifest if planning a disconnected or self-registered Satellite Server. 
This is not necessary for a connected Satellite Server that is subscribed using the Subscription Manager utility on the base system. Add subscriptions for all Capsule Servers you want to create. Add subscriptions for all Red Hat Products you want to manage with Satellite. Note the date when the subscriptions are due to expire and plan for their renewal before the expiry date. Create one manifest per organization. You can use multiple manifests and they can be from different Red Hat subscriptions. Red Hat Satellite allows the use of future-dated subscriptions in the manifest. This enables uninterrupted access to repositories when future-dated subscriptions are added to a manifest before the expiry date of existing subscriptions. Note that the Red Hat Subscription Manifest can be modified and reloaded to Satellite Server in case of any changes in your infrastructure, or when adding more subscriptions. Manifests should not be deleted. If you delete the manifest from the Red Hat Customer Portal or in the Satellite web UI it will unregister all of your content hosts. 8.2. Satellite Server with external database When you install Satellite, the satellite-installer command creates databases on the same server that you install Satellite. Depending on your requirements, moving to external databases can provide increased working memory for Satellite, which can improve response times for database operating requests. Moving to external databases distributes the workload and can increase the capacity for performance tuning. Consider using external databases if you plan to use your Satellite deployment for the following scenarios: Frequent remote execution tasks. This creates a high volume of records in PostgreSQL and generates heavy database workloads. High disk I/O workloads from frequent repository synchronization or Content View publishing. This causes Satellite to create a record in PostgreSQL for each job. High volume of hosts. High volume of synced content. For more information about using an external database, see Using External Databases with Satellite in Installing Satellite Server in a connected network environment . 8.3. Locations and topology This section outlines general considerations that should help you to specify your Satellite deployment scenario. The most common deployment scenarios are listed in Chapter 7, Common deployment scenarios . The defining questions are: How many Capsule Servers do I need? - The number of geographic locations where your organization operates should translate to the number of Capsule Servers. By assigning a Capsule to each location, you decrease the load on Satellite Server, increase redundancy, and reduce bandwidth usage. Satellite Server itself can act as a Capsule (it contains an integrated Capsule by default). This can be used in single location deployments and to provision the base system's of Capsule Servers. Using the integrated Capsule to communicate with hosts in remote locations is not recommended as it can lead to suboptimal network utilization. What services will be provided by Capsule Servers? - After establishing the number of Capsules, decide what services will be enabled on each Capsule. Even though the whole stack of content and configuration management capabilities is available, some infrastructure services (DNS, DHCP, TFTP) can be outside of a Satellite administrator's control. In such case, Capsules have to integrate with those external services (see Section 7.5, "Capsule with external services" ). 
Is my Satellite Server required to be disconnected from the Internet? - Disconnected Satellite is a common deployment scenario (see Section 7.4, "Disconnected Satellite" ). If you require frequent updates of Red Hat content on a disconnected Satellite, plan an additional Satellite instance for Inter-Satellite Synchronization. What compute resources do I need for my hosts? - Apart from provisioning bare-metal hosts, you can use various compute resources supported by Satellite. To learn about provisioning on different compute resources see Provisioning hosts . 8.4. Content sources The Red Hat Subscription Manifest determines what Red Hat repositories are accessible from your Satellite Server. Once you enable a Red Hat repository, an associated Satellite Product is created automatically. For distributing content from custom sources you need to create products and repositories manually. Red Hat repositories are signed with GPG keys by default, and it is recommended to create GPG keys also for your custom repositories. Yum repositories that contain only RPM packages support the On demand download policy, which reduces synchronization time and storage space. The On demand download policy saves space and time by only downloading packages when requested by hosts. For detailed instructions on setting up content sources, see Importing Content in Managing content . A custom repository within Satellite Server is in most cases populated with content from an external staging server. Such servers lie outside of the Satellite infrastructure, however, it is recommended to use a revision control system (such as Git) on these servers to have better control over the custom content. 8.5. Content lifecycle Satellite provides features for precise management of the content lifecycle. A lifecycle environment represents a stage in the content lifecycle, a Content View is a filtered set of content, and can be considered as a defined subset of content. By associating Content Views with lifecycle environments, you make content available to hosts in a defined way. For a detailed overview of the content management process see Importing Custom Content in Managing content . The following section provides general scenarios for deploying content views as well as lifecycle environments. The default lifecycle environment called Library gathers content from all connected sources. It is not recommended to associate hosts directly with the Library as it prevents any testing of content before making it available to hosts. Instead, create a lifecycle environment path that suits your content workflow. The following scenarios are common: A single lifecycle environment - content from Library is promoted directly to the production stage. This approach limits the complexity but still allows for testing the content within the Library before making it available to hosts. A single lifecycle environment path - both operating system and applications content is promoted through the same path. The path can consist of several stages (for example Development , QA , Production ), which enables thorough testing but requires additional effort. Application specific lifecycle environment paths - each application has a separate path, which allows for individual application release cycles. You can associate specific compute resources with application lifecycle stages to facilitate testing. On the other hand, this scenario increases the maintenance complexity. 
The following content view scenarios are common: All in one content view - a content view that contains all necessary content for the majority of your hosts. Reducing the number of content views is an advantage in deployments with constrained resources (time, storage space) or with uniform host types. However, this scenario limits the content view capabilities such as time based snapshots or intelligent filtering. Any change in content sources affects a proportion of hosts. Host specific content view - a dedicated content view for each host type. This approach can be useful in deployments with a small number of host types (up to 30). However, it prevents sharing content across host types as well as separation based on criteria other than the host type (for example between operating system and applications). With critical updates every content view has to be updated, which increases maintenance efforts. Host specific composite content view - a dedicated combination of content views for each host type. This approach enables separating host specific and shared content, for example you can have dedicated content views for the operating system and application content. By using a composite, you can manage your operating system and applications separately and at different frequencies. Component based content view - a dedicated content view for a specific application. For example a database content view can be included into several composite content views. This approach allows for greater standardization but it leads to an increased number of content views. The optimal solution depends on the nature of your host environment. Avoid creating a large number of content views, but keep in mind that the size of a content view affects the speed of related operations (publishing, promoting). Also make sure that when creating a subset of packages for the content view, all dependencies are included as well. Note that kickstart repositories should not be added to content views, as they are used for host provisioning only. 8.6. Content deployment Content deployment manages errata and packages on content hosts. Satellite can be configured to perform remote execution over MQTT/HTTPS (pull-based) or SSH (push-based). While remote execution is enabled on Satellite Server by default, it is disabled on Capsule Servers and content hosts. You must enable it manually. 8.7. Provisioning Satellite provides several features to help you automate the host provisioning, including provisioning templates, configuration management with Puppet, and host groups for standardized provisioning of host roles. For a description of the provisioning workflow see Provisioning Workflow in Provisioning hosts . The same guide contains instructions for provisioning on various compute resources. 8.8. Role based authentication Assigning a role to a user enables controlling access to Satellite components based on a set of permissions. You can think of role based authentication as a way of hiding unnecessary objects from users who are not supposed to interact with them. There are various criteria for distinguishing among different roles within an organization. Apart from the administrator role, the following types are common: Roles related to applications or parts of infrastructure - for example, roles for owners of Red Hat Enterprise Linux as the operating system versus owners of application servers and database servers. 
Roles related to a particular stage of the software lifecycle - for example, roles divided among the development, testing, and production phases, where each phase has one or more owners. Roles related to specific tasks - such as security manager or license manager. When defining a custom role, consider the following recommendations: Define the expected tasks and responsibilities - define the subset of the Satellite infrastructure that will be accessible to the role as well as actions permitted on this subset. Think of the responsibilities of the role and how it would differ from other roles. Use predefined roles whenever possible - Satellite provides a number of sample roles that can be used alone or as part of a role combination. Copying and editing an existing role can be a good start for creating a custom role. Consider all affected entities - for example, a content view promotion automatically creates new Puppet Environments for the particular lifecycle environment and content view combination. Therefore, if a role is expected to promote content views, it also needs permissions to create and edit Puppet Environments. Consider areas of interest - even though a role has a limited area of responsibility, there might be a wider area of interest. Therefore, you can grant the role a read only access to parts of Satellite infrastructure that influence its area of responsibility. This allows users to get earlier access to information about potential upcoming changes. Add permissions step by step - test your custom role to make sure it works as intended. A good approach in case of problems is to start with a limited set of permissions, add permissions step by step, and test continuously. For instructions on defining roles and assigning them to users, see Managing Users and Roles in Administering Red Hat Satellite . The same guide contains information on configuring external authentication sources. 8.9. Additional tasks This section provides a short overview of selected Satellite capabilities that can be used for automating certain tasks or extending the core usage of Satellite: Discovering bare-metal hosts - the Satellite Discovery plugin enables automatic bare-metal discovery of unknown hosts on the provisioning network. These new hosts register themselves to Satellite Server and the Puppet Agent on the client uploads system facts collected by Facter, such as serial ID, network interface, memory, and disk information. After registration you can initialize provisioning of those discovered hosts. For more information, see Creating Hosts from Discovered Hosts in Provisioning hosts . Backup management - backup and disaster recovery instructions, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . Using remote execution, you can also configure recurring backup tasks on hosts. For more information on remote execution see Configuring and Setting up Remote Jobs in Managing hosts . Security management - Satellite supports security management in various ways, including update and errata management, OpenSCAP integration for system verification, update and security compliance reporting, and fine grained role based authentication. Find more information on errata management and OpenSCAP concepts in Managing hosts . Incident management - Satellite supports the incident management process by providing a centralized overview of all systems including reporting and email notifications. 
Detailed information on each host is accessible from Satellite Server, including the event history of recent changes. Satellite is also integrated with Red Hat Insights . Scripting with Hammer and API - Satellite provides a command line tool called Hammer that provides a CLI equivalent to the majority of web UI procedures. In addition, you can use the access to the Satellite API to write automation scripts in a selected programming language. For more information, see Hammer CLI guide and API guide . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/overview_concepts_and_deployment_considerations/chap-architecture_guide-deployment_considerations |
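The lifecycle environment and content view scenarios discussed above can be scripted with the Hammer CLI mentioned in the additional tasks. The following is a hedged sketch only: the organization name Example Org, the environment names, the content view name, and the version number 1 are placeholders, and exact Hammer option names can vary between Satellite versions, so verify them against the Hammer CLI guide before use.

# Build a Development -> QA -> Production lifecycle environment path
hammer lifecycle-environment create --organization "Example Org" --name "Development" --prior "Library"
hammer lifecycle-environment create --organization "Example Org" --name "QA" --prior "Development"
hammer lifecycle-environment create --organization "Example Org" --name "Production" --prior "QA"

# Create and publish a content view, then promote its first version to Development
hammer content-view create --organization "Example Org" --name "RHEL-Base"
hammer content-view publish --organization "Example Org" --name "RHEL-Base"
hammer content-view version promote --organization "Example Org" --content-view "RHEL-Base" --version 1 --to-lifecycle-environment "Development"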
Appendix D. Teiid Designer Views | Appendix D. Teiid Designer Views D.1. Teiid Designer Views Views are dockable windows which present data from your models or your modeling session in various forms. Some views support particular Model Editors and their content is dependent on workspace selection. This section summarizes most of the views used and available in Teiid Designer . The full list is presented in the main menu's Window > Show View > Other... dialog under the Teiid Designer category. Figure D.1. Show View Dialog | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/appe-teiid_designer_views |
Chapter 51. Ref | Chapter 51. Ref Both producer and consumer are supported The Ref component is used for lookup of existing endpoints bound in the Registry. 51.1. URI format Where someName is the name of an endpoint in the Registry (usually, but not always, the Spring registry). If you are using the Spring registry, someName would be the bean ID of an endpoint in the Spring registry. 51.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 51.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 51.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 51.3. Component Options The Ref component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 51.4. Endpoint Options The Ref endpoint is configured using URI syntax: with the following path and query parameters: 51.4.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required Name of endpoint to lookup in the registry. String 51.4.2. Query Parameters (4 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 51.5. Runtime lookup This component can be used when you need dynamic discovery of endpoints in the Registry where you can compute the URI at runtime. Then you can look up the endpoint using the following code: // lookup the endpoint String myEndpointRef = "bigspenderOrder"; Endpoint endpoint = context.getEndpoint("ref:" + myEndpointRef); Producer producer = endpoint.createProducer(); Exchange exchange = producer.createExchange(); exchange.getIn().setBody(payloadToSend); // send the exchange producer.process(exchange); And you could have a list of endpoints defined in the Registry such as: <camelContext id="camel" xmlns="http://activemq.apache.org/camel/schema/spring"> <endpoint id="normalOrder" uri="activemq:order.slow"/> <endpoint id="bigspenderOrder" uri="activemq:order.high"/> </camelContext> 51.6. Sample In the sample below we use the ref: in the URI to reference the endpoint with the spring ID, endpoint2 : You could, of course, have used the ref attribute instead: <to uri="ref:endpoint2"/> Which is the more common way to write it. 51.7. Spring Boot Auto-Configuration When using ref with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ref-starter</artifactId> </dependency> The component supports 4 options, which are listed below. Name Description Default Type camel.component.ref.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.ref.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.ref.enabled Whether to enable auto configuration of the ref component. This is enabled by default. Boolean camel.component.ref.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"ref:someName[?options]",
"ref:name",
"// lookup the endpoint String myEndpointRef = \"bigspenderOrder\"; Endpoint endpoint = context.getEndpoint(\"ref:\" + myEndpointRef); Producer producer = endpoint.createProducer(); Exchange exchange = producer.createExchange(); exchange.getIn().setBody(payloadToSend); // send the exchange producer.process(exchange);",
"<camelContext id=\"camel\" xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <endpoint id=\"normalOrder\" uri=\"activemq:order.slow\"/> <endpoint id=\"bigspenderOrder\" uri=\"activemq:order.high\"/> </camelContext>",
"<to uri=\"ref:endpoint2\"/>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ref-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-ref-component-starter |
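For instance, when the component is used through the Spring Boot starter above, the auto-configuration options listed in this chapter can be set in the application.properties file of the application; the values below are illustrative rather than recommended defaults:
# Keep the ref component enabled and defer producer startup until the first message
camel.component.ref.enabled=true
camel.component.ref.lazy-start-producer=true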
7.3 Release Notes | 7.3 Release Notes Red Hat Enterprise Linux 7.3 Release Notes for Red Hat Enterprise Linux 7.3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_on_openshift_overview/making-open-source-more-inclusive |
function::ntohl | function::ntohl Name function::ntohl - Convert 32-bit long from network to host order Synopsis Arguments x Value to convert | [
"ntohl:long(x:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ntohl |
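As a quick illustration (an example written for this reference rather than taken from the tapset sources), the conversion can be observed with a SystemTap one-liner; on a little-endian host the byte order of the argument is reversed:
# Prints 78563412 on a little-endian machine
stap -e 'probe begin { printf("%x\n", ntohl(0x12345678)); exit() }'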
Chapter 5. Migration | Chapter 5. Migration This chapter provides information on migrating to versions of components included in Red Hat Software Collections 3.5. 5.1. Migrating to MariaDB 10.3 The rh-mariadb103 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mariadb103 Software Collection does not conflict with the mysql or mariadb packages from the core systems. Unless the *-syspaths packages are installed (see below), it is possible to install the rh-mariadb103 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb103 Software Collection while the rh-mariadb102 Collection is still installed and even running. The rh-mariadb103 Software Collection includes the rh-mariadb103-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb103*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb103* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mysql80 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The recommended migration path from MariaDB 5.5 to MariaDB 10.3 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively. For details, see instructions in earlier Red Hat Software Collections Release Notes: Migrating to MariaDB 10.0 , Migrating to MariaDB 10.1 , and Migrating to MariaDB 10.2 . Note The rh-mariadb103 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.1.1. Notable Differences Between the rh-mariadb102 and rh-mariadb103 Software Collections The mariadb-bench subpackage has been removed. The default allowed level of the plug-in maturity has been changed to one level less than the server maturity. As a result, plug-ins with a lower maturity level that were previously working, will no longer load. For more information regarding MariaDB 10.3 , see the upstream documentation about changes and about upgrading . 5.1.2. Upgrading from the rh-mariadb102 to the rh-mariadb103 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb102 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take a longer time than in case of a normal shutdown. mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb102 server. systemctl stop rh-mariadb102-mariadb.service Install the rh-mariadb103 Software Collection, including the subpackage providing the mysql_upgrade utility. 
yum install rh-mariadb103-mariadb-server rh-mariadb103-mariadb-server-utils Note that it is possible to install the rh-mariadb103 Software Collection while the rh-mariadb102 Software Collection is still installed because these Collections do not conflict. Inspect configuration of rh-mariadb103 , which is stored in the /etc/opt/rh/rh-mariadb103/my.cnf file and the /etc/opt/rh/rh-mariadb103/my.cnf.d/ directory. Compare it with configuration of rh-mariadb102 stored in /etc/opt/rh/rh-mariadb102/my.cnf and /etc/opt/rh/rh-mariadb102/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb102 Software Collection is stored in the /var/opt/rh/rh-mariadb102/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb103/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data are owned by the mysql user and SELinux context is correct. Start the rh-mariadb103 database server. systemctl start rh-mariadb103-mariadb.service Perform the data migration. Note that running the mysql_upgrade command is required due to upstream changes introduced in MDEV-14637 . scl enable rh-mariadb103 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mariadb103 -- mysql_upgrade -p Note that when the rh-mariadb103*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mysql80 Software Collections. 5.2. Migrating to MariaDB 10.2 Red Hat Enterprise Linux 6 contains MySQL 5.1 as the default MySQL implementation. Red Hat Enterprise Linux 7 includes MariaDB 5.5 as the default MySQL implementation. MariaDB is a community-developed drop-in replacement for MySQL . MariaDB 10.1 has been available as a Software Collection since Red Hat Software Collections 2.2; Red Hat Software Collections 3.5 is distributed with MariaDB 10.2 . The rh-mariadb102 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, does not conflict with the mysql or mariadb packages from the core systems. Unless the *-syspaths packages are installed (see below), it is possible to install the rh-mariadb102 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Collection is still installed and even running. The recommended migration path from MariaDB 5.5 to MariaDB 10.3 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively. For details, see instructions in earlier Red Hat Software Collections Release Notes: Migrating to MariaDB 10.0 and Migrating to MariaDB 10.1 . For more information about MariaDB 10.2 , see the upstream documentation about changes in version 10.2 and about upgrading . Note The rh-mariadb102 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.2.1. 
Notable Differences Between the rh-mariadb101 and rh-mariadb102 Software Collections Major changes in MariaDB 10.2 are described in the Red Hat Software Collections 3.0 Release Notes . Since MariaDB 10.2 , behavior of the SQL_MODE variable has been changed; see the upstream documentation for details. Multiple options have changed their default values or have been deprecated or removed. For details, see the Knowledgebase article Migrating from MariaDB 10.1 to the MariaDB 10.2 Software Collection . The rh-mariadb102 Software Collection includes the rh-mariadb102-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb102*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb102* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mysql80 Software Collection. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . 5.2.2. Upgrading from the rh-mariadb101 to the rh-mariadb102 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb101 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take a longer time than in case of a normal shutdown. mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb101 server. service rh-mariadb101-mariadb stop Install the rh-mariadb102 Software Collection. yum install rh-mariadb102-mariadb-server Note that it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Software Collection is still installed because these Collections do not conflict. Inspect configuration of rh-mariadb102 , which is stored in the /etc/opt/rh/rh-mariadb102/my.cnf file and the /etc/opt/rh/rh-mariadb102/my.cnf.d/ directory. Compare it with configuration of rh-mariadb101 stored in /etc/opt/rh/rh-mariadb101/my.cnf and /etc/opt/rh/rh-mariadb101/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb101 Software Collection is stored in the /var/opt/rh/rh-mariadb101/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb102/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data are owned by the mysql user and SELinux context is correct. Start the rh-mariadb102 database server. service rh-mariadb102-mariadb start Perform the data migration. scl enable rh-mariadb102 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mariadb102 -- mysql_upgrade -p Note that when the rh-mariadb102*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mysql80 Software Collection. 5.3. 
Migrating to MySQL 8.0 The rh-mysql80 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mysql80 Software Collection conflicts neither with the mysql or mariadb packages from the core systems nor with the rh-mysql* or rh-mariadb* Software Collections, unless the *-syspaths packages are installed (see below). It is also possible to run multiple versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Note that it is possible to upgrade to MySQL 8.0 only from MySQL 5.7 . If you need to upgrade from an earlier version, upgrade to MySQL 5.7 first. For instructions, see Migration to MySQL 5.7 . 5.3.1. Notable Differences Between MySQL 5.7 and MySQL 8.0 Differences Specific to the rh-mysql80 Software Collection The MySQL 8.0 server provided by the rh-mysql80 Software Collection is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in Red Hat Enterprise Linux 7 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version. To change the default authentication plug-in to caching_sha2_password , edit the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-default-authentication-plugin.cnf file as follows: For more information about the caching_sha2_password authentication plug-in, see the upstream documentation . The rh-mysql80 Software Collection includes the rh-mysql80-syspaths package, which installs the rh-mysql80-mysql-config-syspaths , rh-mysql80-mysql-server-syspaths , and rh-mysql80-mysql-syspaths packages. These subpackages provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mysql80*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mysql80* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mariadb103 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . General Changes in MySQL 8.0 Binary logging is enabled by default during the server startup. The log_bin system variable is now set to ON by default even if the --log-bin option has not been specified. To disable binary logging, specify the --skip-log-bin or --disable-log-bin option at startup. For a CREATE FUNCTION statement to be accepted, at least one of the DETERMINISTIC , NO SQL , or READS SQL DATA keywords must be specified explicitly, otherwise an error occurs. Certain features related to account management have been removed. Namely, using the GRANT statement to modify account properties other than privilege assignments, such as authentication, SSL, and resource-limit, is no longer possible. To establish the mentioned properties at account-creation time, use the CREATE USER statement. To modify these properties, use the ALTER USER statement. Certain SSL-related options have been removed on the client-side. Use the --ssl-mode=REQUIRED option instead of --ssl=1 or --enable-ssl . Use the --ssl-mode=DISABLED option instead of --ssl=0 , --skip-ssl , or --disable-ssl . Use the --ssl-mode=VERIFY_IDENTITY option instead of --ssl-verify-server-cert options. Note that these option remains unchanged on the server side. 
The default character set has been changed from latin1 to utf8mb4 . The utf8 character set is currently an alias for utf8mb3 but in the future, it will become a reference to utf8mb4 . To prevent ambiguity, specify utf8mb4 explicitly for character set references instead of utf8 . Setting user variables in statements other than SET has been deprecated. The log_syslog variable, which previously configured error logging to the system logs, has been removed. Certain incompatible changes to spatial data support have been introduced. The deprecated ASC or DESC qualifiers for GROUP BY clauses have been removed. To produce a given sort order, provide an ORDER BY clause. For detailed changes in MySQL 8.0 compared to earlier versions, see the upstream documentation: What Is New in MySQL 8.0 and Changes Affecting Upgrades to MySQL 8.0 . 5.3.2. Upgrading to the rh-mysql80 Software Collection Important Prior to upgrading, back-up all your data, including any MySQL databases. Install the rh-mysql80 Software Collection. yum install rh-mysql80-mysql-server Inspect the configuration of rh-mysql80 , which is stored in the /etc/opt/rh/rh-mysql80/my.cnf file and the /etc/opt/rh/rh-mysql80/my.cnf.d/ directory. Compare it with the configuration of rh-mysql57 stored in /etc/opt/rh/rh-mysql57/my.cnf and /etc/opt/rh/rh-mysql57/my.cnf.d/ and adjust it if necessary. Stop the rh-mysql57 database server, if it is still running. systemctl stop rh-mysql57-mysqld.service All data of the rh-mysql57 Software Collection is stored in the /var/opt/rh/rh-mysql57/lib/mysql/ directory. Copy the whole content of this directory to /var/opt/rh/rh-mysql80/lib/mysql/ . You can also move the content but remember to back up your data before you continue to upgrade. Start the rh-mysql80 database server. systemctl start rh-mysql80-mysqld.service Perform the data migration. scl enable rh-mysql80 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mysql80 -- mysql_upgrade -p Note that when the rh-mysql80*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mariadb103 Software Collections. 5.4. Migrating to MongoDB 3.6 Red Hat Software Collections 3.5 is released with MongoDB 3.6 , provided by the rh-mongodb36 Software Collection and available only for Red Hat Enterprise Linux 7. The rh-mongodb36 Software Collection includes the rh-mongodb36-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mongodb36*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb36* packages. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . 5.4.1. Notable Differences Between MongoDB 3.4 and MongoDB 3.6 General Changes The rh-mongodb36 Software Collection introduces the following significant general change: On Non-Uniform Access Memory (NUMA) hardware, it is possible to configure systemd services to be launched using the numactl command; see the upstream recommendation . 
To use MongoDB with the numactl command, you need to install the numactl RPM package and change the /etc/opt/rh/rh-mongodb36/sysconfig/mongod and /etc/opt/rh/rh-mongodb36/sysconfig/mongos configuration files accordingly. Compatibility Changes MongoDB 3.6 includes various minor changes that can affect compatibility with previous versions of MongoDB : MongoDB binaries now bind to localhost by default, so listening on different IP addresses needs to be explicitly enabled. Note that this is already the default behavior for systemd services distributed with MongoDB Software Collections. The MONGODB-CR authentication mechanism has been deprecated. For databases with users created by MongoDB versions earlier than 3.0, upgrade the authentication schema to SCRAM . The HTTP interface and REST API have been removed Arbiters in replica sets have priority 0 Master-slave replication has been deprecated For detailed compatibility changes in MongoDB 3.6 , see the upstream release notes . Backwards Incompatible Features The following MongoDB 3.6 features are backwards incompatible and require the version to be set to 3.6 using the featureCompatibilityVersion command : UUID for collections $jsonSchema document validation Change streams Chunk aware secondaries View definitions, document validators, and partial index filters that use version 3.6 query features Sessions and retryable writes Users and roles with authenticationRestrictions For details regarding backward incompatible changes in MongoDB 3.6 , see the upstream release notes . 5.4.2. Upgrading from the rh-mongodb34 to the rh-mongodb36 Software Collection Important Before migrating from the rh-mongodb34 to the rh-mongodb36 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb34/lib/mongodb/ directory. In addition, see the Compatibility Changes to ensure that your applications and deployments are compatible with MongoDB 3.6 . To upgrade to the rh-mongodb36 Software Collection, perform the following steps. To be able to upgrade, the rh-mongodb34 instance must have featureCompatibilityVersion set to 3.4 . Check featureCompatibilityVersion : ~]$ scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Install the MongoDB servers and shells from the rh-mongodb36 Software Collections: ~]# yum install rh-mongodb36 Stop the MongoDB 3.4 server: ~]# systemctl stop rh-mongodb34-mongod.service Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb34/lib/mongodb/* /var/opt/rh/rh-mongodb36/lib/mongodb/ Configure the rh-mongodb36-mongod daemon in the /etc/opt/rh/rh-mongodb36/mongod.conf file. Start the MongoDB 3.6 server: ~]# systemctl start rh-mongodb36-mongod.service Enable backwards incompatible features: ~]$ scl enable rh-mongodb36 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } )' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Note After upgrading, it is recommended to run the deployment first without enabling the backwards incompatible features for a burn-in period of time, to minimize the likelihood of a downgrade. For detailed information about upgrading, see the upstream release notes .
For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.5. Migrating to MongoDB 3.4 The rh-mongodb34 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, provides MongoDB 3.4 . 5.5.1. Notable Differences Between MongoDB 3.2 and MongoDB 3.4 General Changes The rh-mongodb34 Software Collection introduces various general changes. Major changes are listed in the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 . For detailed changes, see the upstream release notes . In addition, this Software Collection includes the rh-mongodb34-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mongodb34*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb34* packages. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Compatibility Changes MongoDB 3.4 includes various minor changes that can affect compatibility with previous versions of MongoDB . For details, see the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 and the upstream documentation . Notably, the following MongoDB 3.4 features are backwards incompatible and require that the version is set to 3.4 using the featureCompatibilityVersion command: Support for creating read-only views from existing collections or other views Index version v: 2 , which adds support for collation, decimal data and case-insensitive indexes Support for the decimal128 format with the new decimal data type For details regarding backward incompatible changes in MongoDB 3.4 , see the upstream release notes . 5.5.2. Upgrading from the rh-mongodb32 to the rh-mongodb34 Software Collection Note that once you have upgraded to MongoDB 3.4 and started using new features, you cannot downgrade to version 3.2.7 or earlier. You can only downgrade to version 3.2.8 or later. Important Before migrating from the rh-mongodb32 to the rh-mongodb34 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb32/lib/mongodb/ directory. In addition, see the compatibility changes to ensure that your applications and deployments are compatible with MongoDB 3.4 . To upgrade to the rh-mongodb34 Software Collection, perform the following steps. Install the MongoDB servers and shells from the rh-mongodb34 Software Collections: ~]# yum install rh-mongodb34 Stop the MongoDB 3.2 server: ~]# systemctl stop rh-mongodb32-mongod.service Use the service rh-mongodb32-mongodb stop command on a Red Hat Enterprise Linux 6 system. Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb32/lib/mongodb/* /var/opt/rh/rh-mongodb34/lib/mongodb/ Configure the rh-mongodb34-mongod daemon in the /etc/opt/rh/rh-mongodb34/mongod.conf file. Start the MongoDB 3.4 server: ~]# systemctl start rh-mongodb34-mongod.service On Red Hat Enterprise Linux 6, use the service rh-mongodb34-mongodb start command instead. Enable backwards-incompatible features: ~]$ scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command.
Note that it is recommended to run the deployment after the upgrade without enabling these features first. For detailed information about upgrading, see the upstream release notes . For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.6. Migrating to PostgreSQL 12 Red Hat Software Collections 3.5 is distributed with PostgreSQL 12 , available only for Red Hat Enterprise Linux 7. The rh-postgresql12 Software Collection can be safely installed on the same machine in parallel with the base Red Hat Enterprise Linux system version of PostgreSQL or any PostgreSQL Software Collection. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. See Section 5.7, "Migrating to PostgreSQL 9.6" for instructions how to migrate to an earlier version or when using Red Hat Enterprise Linux 6. The rh-postgresql12 Software Collection includes the rh-postgresql12-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-postgreqsl12*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgreqsl12* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Important Before migrating to PostgreSQL 12 , see the upstream compatibility notes for PostgreSQL 11 and PostgreSQL 12 . In case of upgrading the PostgreSQL database in a container, see the container-specific instructions . The following table provides an overview of different paths in a Red Hat Enterprise Linux 7 system version of PostgreSQL provided by the postgresql package, and in the rh-postgresql10 and rh-postgresql12 Software Colections. Table 5.1. 
Differences in the PostgreSQL paths Content postgresql rh-postgresql10 rh-postgresql12 Executables /usr/bin/ /opt/rh/rh-postgresql10/root/usr/bin/ /opt/rh/rh-postgresql12/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/rh-postgresql10/root/usr/lib64/ /opt/rh/rh-postgresql12/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed Data /var/lib/pgsql/data/ /var/opt/rh/rh-postgresql10/lib/pgsql/data/ /var/opt/rh/rh-postgresql12/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql10/lib/pgsql/backups/ /var/opt/rh/rh-postgresql12/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/rh-postgresql10/root/usr/share/pgsql/ /opt/rh/rh-postgresql12/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/rh-postgresql10/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql12/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/rh-postgresql10/root/usr/include/pgsql/ /opt/rh/rh-postgresql12/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/rh-postgresql10/root/usr/share/pgsql/ /opt/rh/rh-postgresql12/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql10/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql12/root/usr/lib64/pgsql/test/regress/ (in the -test package) 5.6.1. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 12 Software Collection Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql12 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 12, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.1. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop postgresql.service To verify that the server is not running, type: systemctl status postgresql.service Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql12/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql12/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 12 , this directory should not be present in your system.
If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql12/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql12 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql12/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql12-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql12-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql12 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.2. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start postgresql.service Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : systemctl stop postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql12 -- postgresql-setup initdb Start the new server as root : systemctl start rh-postgresql12-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql12 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.6.2. Migrating from the PostgreSQL 10 Software Collection to the PostgreSQL 12 Software Collection To migrate your data from the rh-postgresql10 Software Collection to the rh-postgresql12 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. 
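For example, the SQL dump used by the second method can be produced with the tools from the rh-postgresql10 Collection before switching over; an illustrative invocation, assuming the PostgreSQL 10 server is still running and the default paths are in use:
# Dump all PostgreSQL 10 databases into a single SQL script in the postgres user's home directory
su - postgres -c 'scl enable rh-postgresql10 "pg_dumpall > ~/pgdump_file.sql"'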
Important Before migrating your data from PostgreSQL 10 to PostgreSQL 12 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql10/lib/pgsql/data/ directory. Procedure 5.3. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop rh-postgresql10-postgresql.service To verify that the server is not running, type: systemctl status rh-postgresql10-postgresql.service Verify that the old directory /var/opt/rh/rh-postgresql10/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql10/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql12/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql12/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 12 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql12/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql12 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql10-postgresql Alternatively, you can use the /opt/rh/rh-postgresql12/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql10-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql12-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql12-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql12 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old PostgreSQL 10 server, type the following command as root : chkconfig rh-postgresql10-postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.4. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start rh-postgresql10-postgresql.service Dump all data in the PostgreSQL database into a script file.
As root , type: su - postgres -c 'scl enable rh-postgresql10 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : systemctl stop rh-postgresql10-postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql12 -- postgresql-setup initdb Start the new server as root : systemctl start rh-postgresql12-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql12 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old PostgreSQL 10 server, type the following command as root : chkconfig rh-postgresql10-postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.7. Migrating to PostgreSQL 9.6 PostgreSQL 9.6 is available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 and it can be safely installed on the same machine in parallel with PostgreSQL 8.4 from Red Hat Enterprise Linux 6, PostgreSQL 9.2 from Red Hat Enterprise Linux 7, or any version of PostgreSQL released in versions of Red Hat Software Collections. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. Important In case of upgrading the PostgreSQL database in a container, see the container-specific instructions . Note that it is currently impossible to upgrade PostgreSQL from 9.5 to 9.6 in a container in an OpenShift environment that is configured with Gluster file volumes. 5.7.1. Notable Differences Between PostgreSQL 9.5 and PostgreSQL 9.6 The most notable changes between PostgreSQL 9.5 and PostgreSQL 9.6 are described in the upstream release notes . The rh-postgresql96 Software Collection includes the rh-postgresql96-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-postgreqsl96*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgreqsl96* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The following table provides an overview of different paths in a Red Hat Enterprise Linux system version of PostgreSQL ( postgresql ) and in the postgresql92 , rh-postgresql95 , and rh-postgresql96 Software Collections. Note that the paths of PostgreSQL 8.4 distributed with Red Hat Enterprise Linux 6 and the system version of PostgreSQL 9.2 shipped with Red Hat Enterprise Linux 7 are the same; the paths for the rh-postgresql94 Software Collection are analogous to rh-postgresql95 . Table 5.2. 
Differences in the PostgreSQL paths Content postgresql postgresql92 rh-postgresql95 rh-postgresql96 Executables /usr/bin/ /opt/rh/postgresql92/root/usr/bin/ /opt/rh/rh-postgresql95/root/usr/bin/ /opt/rh/rh-postgresql96/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/postgresql92/root/usr/lib64/ /opt/rh/rh-postgresql95/root/usr/lib64/ /opt/rh/rh-postgresql96/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/postgresql92/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/postgresql92/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/postgresql92/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed not installed Data /var/lib/pgsql/data/ /opt/rh/postgresql92/root/var/lib/pgsql/data/ /var/opt/rh/rh-postgresql95/lib/pgsql/data/ /var/opt/rh/rh-postgresql96/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /opt/rh/postgresql92/root/var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql95/lib/pgsql/backups/ /var/opt/rh/rh-postgresql96/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/postgresql92/root/usr/share/pgsql/ /opt/rh/rh-postgresql95/root/usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/postgresql92/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/postgresql92/root/usr/include/pgsql/ /opt/rh/rh-postgresql95/root/usr/include/pgsql/ /opt/rh/rh-postgresql96/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/postgresql92/root/usr/share/pgsql/ /opt/rh/rh-postgresql95/root/usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/postgresql92/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/test/regress/ (in the -test package) For changes between PostgreSQL 8.4 and PostgreSQL 9.2 , refer to the Red Hat Software Collections 1.2 Release Notes . Notable changes between PostgreSQL 9.2 and PostgreSQL 9.4 are described in Red Hat Software Collections 2.0 Release Notes . For differences between PostgreSQL 9.4 and PostgreSQL 9.5 , refer to Red Hat Software Collections 2.2 Release Notes . 5.7.2. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 9.6 Software Collection Red Hat Enterprise Linux 6 includes PostgreSQL 8.4 , and Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql96 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method.
The following procedures are applicable for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 system versions of PostgreSQL . Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 9.6, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.5. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service postgresql stop To verify that the server is not running, type: service postgresql status Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.6. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service postgresql start Dump all data in the PostgreSQL database into a script file. 
As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : service postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96 -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.7.3. Migrating from the PostgreSQL 9.5 Software Collection to the PostgreSQL 9.6 Software Collection To migrate your data from the rh-postgresql95 Software Collection to the rh-postgresql96 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it into the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from PostgreSQL 9.5 to PostgreSQL 9.6 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql95/lib/pgsql/data/ directory. Procedure 5.7. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service rh-postgresql95-postgresql stop To verify that the server is not running, type: service rh-postgresql95-postgresql status Verify that the old directory /var/opt/rh/rh-postgresql95/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql95/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade.
Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.8. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service rh-postgresql95-postgresql start Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'scl enable rh-postgresql95 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : service rh-postgresql95-postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96 -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. If you need to migrate from the postgresql92 Software Collection, refer to Red Hat Software Collections 2.0 Release Notes ; the procedure is the same, you just need to adjust the version of the new Collection. The same applies to migration from the rh-postgresql94 Software Collection, which is described in Red Hat Software Collections 2.2 Release Notes . 5.8. Migrating to nginx 1.16 The root directory for the rh-nginx116 Software Collection is located in /opt/rh/rh-nginx116/root/ . The error log is stored in /var/opt/rh/rh-nginx116/log/nginx by default. Configuration files are stored in the /etc/opt/rh/rh-nginx116/nginx/ directory. Configuration files in nginx 1.16 have the same syntax and largely the same format as in earlier nginx Software Collections. Configuration files (with a .conf extension) in the /etc/opt/rh/rh-nginx116/nginx/default.d/ directory are included in the default server block configuration for port 80 . Important Before upgrading from nginx 1.14 to nginx 1.16 , back up all your data, including web pages located in the /opt/rh/nginx114/root/ tree and configuration files located in the /etc/opt/rh/nginx114/nginx/ tree.
If you have made any specific changes, such as changing configuration files or setting up web applications, in the /opt/rh/nginx114/root/ tree, replicate those changes in the new /opt/rh/rh-nginx116/root/ and /etc/opt/rh/rh-nginx116/nginx/ directories, too. You can use this procedure to upgrade directly from nginx 1.8 , nginx 1.10 , nginx 1.12 , or nginx 1.14 to nginx 1.16 . Use the appropriate paths in this case. For the official nginx documentation, refer to http://nginx.org/en/docs/ . 5.9. Migrating to Redis 5 Redis 3.2 , provided by the rh-redis32 Software Collection, is mostly a strict subset of Redis 4.0 , which is mostly a strict subset of Redis 5.0 . Therefore, no major issues should occur when upgrading from version 3.2 to version 5.0. To upgrade a Redis Cluster to version 5.0, a mass restart of all the instances is needed. Compatibility Notes The format of RDB files has been changed. Redis 5 is able to read formats of all the earlier versions, but earlier versions are incapable of reading the Redis 5 format. Since version 4.0, the Redis Cluster bus protocol is no longer compatible with Redis 3.2 . For minor non-backward compatible changes, see the upstream release notes for version 4.0 and version 5.0 . | [
"[mysqld] default_authentication_plugin=caching_sha2_password"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.5_release_notes/chap-Migration |
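As a small, non-authoritative companion to the Redis 5 migration notes above: one way to confirm which server version an instance is running before and after the mass restart is to query it with redis-cli. The rh-redis5 collection name, host, and port below are illustrative assumptions, not values taken from this document.
# Report the running server version (adjust collection name, host, and port to your setup)
scl enable rh-redis5 -- redis-cli -h 127.0.0.1 -p 6379 INFO server | grep redis_version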
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/getting_the_most_from_your_support_experience/proc_providing-feedback-on-red-hat-documentation_getting-the-most-from-your-support-experience
function::stack_size | function::stack_size Name function::stack_size - Return the size of the kernel stack. Synopsis Arguments None General Syntax stack_size: long Description This function returns the size of the kernel stack. | [
"function stack_size:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-stack-size |
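A minimal usage sketch for this tapset function, assuming SystemTap and the matching kernel debuginfo packages are installed; it simply prints the value once and exits.
# Print the kernel stack size (in bytes) reported by stack_size(), then exit
stap -e 'probe begin { printf("kernel stack size: %d\n", stack_size()); exit() }'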
3.3. Red Hat OpenStack Platform 17.1 for RHEL 8 x86_64 (RPMs) | 3.3. Red Hat OpenStack Platform 17.1 for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the openstack-17.1-for-rhel-8-x86_64-rpms repository. Table 3.3. Red Hat OpenStack Platform 17.1 for RHEL 8 x86_64 (RPMs) Packages Name Version Advisory ansible-collection-ansible-netcommon 2.2.0-1.2.el8ost.1 RHEA-2023:4580 ansible-collection-ansible-posix 1.2.0-1.el8ost.1 RHEA-2023:4580 ansible-collection-ansible-utils 2.3.0-2.el8ost RHEA-2023:4580 ansible-collection-community-general 4.0.0-1.1.el8ost.1 RHEA-2023:4580 ansible-collection-containers-podman 1.9.3-1.el8ost RHEA-2023:4580 ansible-collections-openstack 1.9.1-1.20230420063954.0e9a6f2.el8ost RHEA-2023:4580 ansible-config_template 2.0.1-17.1.20230517174107.7951228.el8ost RHEA-2023:4580 ansible-freeipa 1.9.2-1.1.el8ost RHEA-2023:4580 ansible-pacemaker 1.0.4-17.1.20221026103752.7c10fdb.el8ost RHEA-2023:4580 ansible-role-atos-hsm 1.0.1-1.20220727095609.ccd3896.el8ost RHEA-2023:4580 ansible-role-chrony 1.3.1-1.20230510014016.0111661.el8ost RHEA-2023:4580 ansible-role-collectd-config 0.0.2-1.20220727095634.1992666.el8ost RHEA-2023:4580 ansible-role-container-registry 1.4.1-17.1.20221116233745.a091b9c.el8ost RHEA-2023:4580 ansible-role-lunasa-hsm 1.1.1-17.1.20220727100600.6ebc8f4.el8ost RHEA-2023:4580 ansible-role-metalsmith-deployment 1.4.4-1.20230515184003.5e7461e.el8ost RHEA-2023:4580 ansible-role-openstack-operations 0.0.1-17.1.20220727212820.2ab288f.el8ost RHEA-2023:4580 ansible-role-qdr-config 0.0.1-1.20220727212923.b456651.el8ost RHEA-2023:4580 ansible-role-redhat-subscription 1.2.1-17.1.20220818145419.eefe501.el8ost RHEA-2023:4580 ansible-role-thales-hsm 3.0.1-1.20220727213721.e0f4569.el8ost RHEA-2023:4580 ansible-role-tripleo-modify-image 1.5.1-17.1.20230211115447.b6eedb6.el8ost RHEA-2023:4580 ansible-tripleo-ipa 0.3.1-1.20230519134015.el8ost RHEA-2023:4580 ansible-tripleo-ipsec 11.0.1-1.20220727100612.b5559c8.el8ost RHEA-2023:4580 collectd 5.12.0-10.el8ost RHEA-2023:4580 collectd-amqp1 5.12.0-10.el8ost RHEA-2023:4580 collectd-apache 5.12.0-10.el8ost RHEA-2023:4580 collectd-bind 5.12.0-10.el8ost RHEA-2023:4580 collectd-ceph 5.12.0-10.el8ost RHEA-2023:4580 collectd-chrony 5.12.0-10.el8ost RHEA-2023:4580 collectd-connectivity 5.12.0-10.el8ost RHEA-2023:4580 collectd-curl 5.12.0-10.el8ost RHEA-2023:4580 collectd-curl_json 5.12.0-10.el8ost RHEA-2023:4580 collectd-curl_xml 5.12.0-10.el8ost RHEA-2023:4580 collectd-disk 5.12.0-10.el8ost RHEA-2023:4580 collectd-dns 5.12.0-10.el8ost RHEA-2023:4580 collectd-generic-jmx 5.12.0-10.el8ost RHEA-2023:4580 collectd-hugepages 5.12.0-10.el8ost RHEA-2023:4580 collectd-ipmi 5.12.0-10.el8ost RHEA-2023:4580 collectd-iptables 5.12.0-10.el8ost RHEA-2023:4580 collectd-libpod-stats 1.0.5-5.el8ost RHEA-2023:4580 collectd-log_logstash 5.12.0-10.el8ost RHEA-2023:4580 collectd-mcelog 5.12.0-10.el8ost RHEA-2023:4580 collectd-mysql 5.12.0-10.el8ost RHEA-2023:4580 collectd-netlink 5.12.0-10.el8ost RHEA-2023:4580 collectd-openldap 5.12.0-10.el8ost RHEA-2023:4580 collectd-ovs-events 5.12.0-10.el8ost RHEA-2023:4580 collectd-ovs-stats 5.12.0-10.el8ost RHEA-2023:4580 collectd-pcie-errors 5.12.0-10.el8ost RHEA-2023:4580 collectd-ping 5.12.0-10.el8ost RHEA-2023:4580 collectd-pmu 5.12.0-10.el8ost RHEA-2023:4580 collectd-procevent 5.12.0-10.el8ost RHEA-2023:4580 collectd-python 5.12.0-10.el8ost RHEA-2023:4580 collectd-rdt 5.12.0-10.el8ost RHEA-2023:4580 collectd-sensors 5.12.0-10.el8ost RHEA-2023:4580 collectd-sensubility 0.2.0-2.el8ost 
RHEA-2023:4580 collectd-smart 5.12.0-10.el8ost RHEA-2023:4580 collectd-snmp 5.12.0-10.el8ost RHEA-2023:4580 collectd-snmp-agent 5.12.0-10.el8ost RHEA-2023:4580 collectd-sysevent 5.12.0-10.el8ost RHEA-2023:4580 collectd-turbostat 5.12.0-10.el8ost RHEA-2023:4580 collectd-utils 5.12.0-10.el8ost RHEA-2023:4580 collectd-virt 5.12.0-10.el8ost RHEA-2023:4580 collectd-write_http 5.12.0-10.el8ost RHEA-2023:4580 collectd-write_kafka 5.12.0-10.el8ost RHEA-2023:4580 collectd-write_prometheus 5.12.0-10.el8ost RHEA-2023:4580 cpp-hocon 0.3.0-4.2.el8ost.2 RHEA-2023:4580 crudini 0.9.3-1.el8ost.1 RHEA-2023:4580 dib-utils 0.0.11-17.1.20220727214513.51661c3.el8ost RHEA-2023:4580 diskimage-builder 3.29.1-1.20230424094026.ed9bdf8.el8ost RHEA-2023:4580 double-conversion 3.1.5-4.el8ost.2 RHEA-2023:4580 dumb-init 1.2.5-2.el8ost RHEA-2023:4580 facter 3.14.19-1.el8ost RHEA-2023:4580 golang-github-Sirupsen-logrus-devel 1.1.1-7.el8ost RHEA-2023:4580 golang-github-davecgh-go-spew-devel 0-0.12.git6d21280.1.el8ost.1 RHEA-2023:4580 golang-github-go-ini-ini-devel 1.39.3-0.2.gitf55231c.1.el8ost RHEA-2023:4580 golang-github-infrawatch-apputils 0.5-2.git4ffa970.el8ost RHEA-2023:4580 golang-github-pmezard-difflib-devel 1.0.0-6.el8ost RHEA-2023:4580 golang-github-streadway-amqp-devel 0-0.4.20190404git75d898a.el8ost RHEA-2023:4580 golang-github-stretchr-objx-devel 0-0.14.git1a9d0bb.el8ost RHEA-2023:4580 golang-github-stretchr-testify-devel 1.2.2-5.el8ost RHEA-2023:4580 golang-github-urfave-cli-devel 1.20.0-7.el8ost RHEA-2023:4580 golang-github-vbatts-tar-split 0.11.1-9.el8ost RHEA-2023:4580 golang-qpid-apache 0.33.0-4.el8ost RHEA-2023:4580 golang-uber-atomic 1.5.1-1.el8ost.1 RHEA-2023:4580 golang-uber-multierr 1.5.0-1.el8ost.1 RHEA-2023:4580 golang-x-sys-devel 0-0.39.20210715git9b0068b.el8ost RHEA-2023:4580 heat-cfntools 1.4.2-11.el8ost.2 RHEA-2023:4580 hiera 3.6.0-1.el8ost.1 RHEA-2023:4580 leapp-repository-openstack 0.0.12-1.el8ost RHEA-2023:4580 leatherman 1.12.0-5.el8ost.1 RHEA-2023:4580 libcollectdclient 5.12.0-10.el8ost RHEA-2023:4580 liboping 1.10.0-11.el8ost RHEA-2023:4580 libqhull 7.2.1-2.el8ost.1 RHEA-2023:4580 libsodium 1.0.18-2.el8ost.1 RHEA-2023:4580 libzstd 1.4.5-6.el8ost.1 RHEA-2023:4580 openstack-heat-agents 2.2.1-1.20220728000604.ed16cc7.el8ost RHEA-2023:4580 openstack-heat-api 16.1.1-1.20230416004033.2d5a87d.el8ost RHEA-2023:4580 openstack-heat-common 16.1.1-1.20230416004033.2d5a87d.el8ost RHEA-2023:4580 openstack-heat-engine 16.1.1-1.20230416004033.2d5a87d.el8ost RHEA-2023:4580 openstack-heat-monolith 16.1.1-1.20230416004033.2d5a87d.el8ost RHEA-2023:4580 openstack-ironic-python-agent-builder 2.8.0-1.20220727181453.e0b51e0.el8ost RHEA-2023:4580 openstack-nova-common 23.2.3-1.20230520123958.el8ost RHEA-2023:4580 openstack-nova-compute 23.2.3-1.20230520123958.el8ost RHEA-2023:4580 openstack-nova-migration 23.2.3-1.20230520123958.el8ost RHEA-2023:4580 openstack-selinux 0.8.37-17.1.20221215084349.34c2ecc.el8ost RHEA-2023:4580 openstack-tempest 33.0.0-1.20230406153813.1580f6f.el8ost RHEA-2023:4580 openstack-tripleo-common 15.4.1-1.20230518205256.el8ost RHEA-2023:4580 openstack-tripleo-common-container-base 15.4.1-1.20230518205256.el8ost RHEA-2023:4580 openstack-tripleo-common-containers 15.4.1-1.20230518205256.el8ost RHEA-2023:4580 openstack-tripleo-heat-templates 14.3.1-1.20230519143971.el8ost RHEA-2023:4580 openstack-tripleo-image-elements 13.1.3-1.20230322223831.a641940.el8ost RHEA-2023:4580 openstack-tripleo-puppet-elements 14.1.3-1.20230221224239.23b0079.el8ost RHEA-2023:4580 openstack-tripleo-validations 
14.3.2-1.20230421033956.c768acb.el8ost RHEA-2023:4580 os-apply-config 13.1.0-1.20220727102349.474068e.el8ost RHEA-2023:4580 os-collect-config 13.1.0-1.20220727232946.3a7d05a.el8ost RHEA-2023:4580 os-net-config 14.2.1-1.20230412014753.el8ost RHEA-2023:4580 os-refresh-config 13.1.0-1.20220727215600.5bb536c.el8ost RHEA-2023:4580 plotnetcfg 0.4.1-14.el8ost.1 RHEA-2023:4580 pmu-data 109-3.1.el8ost RHEA-2023:4580 puppet 7.10.0-1.el8ost RHEA-2023:4580 puppet-aodh 18.4.2-1.20230131230427.3e47b5a.el8ost RHEA-2023:4580 puppet-apache 6.5.2-0.20220712154420.e4a1532.el8ost RHEA-2023:4580 puppet-archive 4.6.1-0.20220712084434.bc7e4ff.el8ost RHEA-2023:4580 puppet-auditd 2.2.1-17.1.20220727220254.189b22b.el8ost RHEA-2023:4580 puppet-barbican 18.4.2-1.20221108094553.af6c77b.el8ost RHEA-2023:4580 puppet-ceilometer 18.4.3-1.20230505004947.5050368.el8ost RHEA-2023:4580 puppet-certmonger 2.7.1-0.20220712090641.3e2e660.el8ost RHEA-2023:4580 puppet-cinder 18.5.2-1.20230127003831.6aa60e7.el8ost RHEA-2023:4580 puppet-collectd 13.0.0-1.20230413134008.ad138a7.el8ost RHEA-2023:4580 puppet-concat 6.2.1-0.20220712083648.dfeabb9.el8ost RHEA-2023:4580 puppet-corosync 8.0.1-0.20220712093115.6a9da9a.el8ost RHEA-2023:4580 puppet-designate 18.6.1-1.20230131230703.f4c0b89.el8ost RHEA-2023:4580 puppet-dns 8.2.1-1.20220728005427.70f5b28.el8ost RHEA-2023:4580 puppet-etcd 1.12.3-17.1.20220712095117.e143c2d.el8ost RHEA-2023:4580 puppet-fdio 18.2-17.1.20220727103224.6fd1c8e.el8ost RHEA-2023:4580 puppet-firewall 3.4.1-17.1.20220714134752.94f707c.el8ost RHEA-2023:4580 puppet-git 0.5.0-17.1.20220712100238.4e4498e.el8ost RHEA-2023:4580 puppet-glance 18.6.1-1.20230128003900.81b081d.el8ost RHEA-2023:4580 puppet-gnocchi 18.4.3-1.20230131225420.7584b94.el8ost RHEA-2023:4580 puppet-haproxy 4.2.2-0.20220712100808.a797b8c.el8ost RHEA-2023:4580 puppet-headless 7.10.0-1.el8ost RHEA-2023:4580 puppet-heat 18.4.1-1.20230323234835.3b41bb0.el8ost RHEA-2023:4580 puppet-horizon 18.6.1-1.20230423024131.8074e69.el8ost RHEA-2023:4580 puppet-inifile 4.2.1-0.20220712083557.df46d2a.el8ost RHEA-2023:4580 puppet-ipaclient 2.5.2-17.1.20220712101058.b086731.el8ost RHEA-2023:4580 puppet-ironic 18.7.1-1.20230516013950.edf93f9.el8ost RHEA-2023:4580 puppet-keepalived 0.0.2-17.1.20220712101637.bbca37a.el8ost RHEA-2023:4580 puppet-keystone 18.6.1-1.20230218004339.67ff287.el8ost RHEA-2023:4580 puppet-kmod 2.5.0-0.20220712092606.52e31e3.el8ost RHEA-2023:4580 puppet-manila 18.5.2-1.20230127004755.a72a7d5.el8ost RHEA-2023:4580 puppet-memcached 6.0.0-17.1.20220712090433.4c70dbd.el8ost RHEA-2023:4580 puppet-module-data 0.5.1-17.1.20220712102135.28dafce.el8ost RHEA-2023:4580 puppet-mysql 10.6.1-1.20220714141739.937d044.el8ost RHEA-2023:4580 puppet-neutron 18.6.1-1.20221226014026.el8ost RHEA-2023:4580 puppet-nova 18.6.1-1.20221226010718.65b15ad.el8ost RHEA-2023:4580 puppet-nssdb 1.0.2-17.1.20220712091545.2ed2a2d.el8ost RHEA-2023:4580 puppet-octavia 18.5.1-1.20230201003831.842492c.el8ost RHEA-2023:4580 puppet-openstack_extras 18.5.1-1.20221108102751.504e1a0.el8ost RHEA-2023:4580 puppet-openstacklib 18.5.2-1.20221219234730.64d8ac6.el8ost RHEA-2023:4580 puppet-oslo 18.5.1-1.20221219234629.fe2a147.el8ost RHEA-2023:4580 puppet-ovn 18.6.1-1.20230413013953.7805f7e.el8ost RHEA-2023:4580 puppet-pacemaker 1.5.1-17.1.20221226015058.7add073.el8ost RHEA-2023:4580 puppet-placement 5.4.3-1.20230131232551.e7557a5.el8ost RHEA-2023:4580 puppet-qdr 7.4.1-1.20220728123856.8a575de.el8ost RHEA-2023:4580 puppet-rabbitmq 11.0.1-1.20230428153957.63fee2c.el8ost RHEA-2023:4580 puppet-redis 
6.1.1-0.20220712093938.547105e.el8ost RHEA-2023:4580 puppet-remote 10.0.0-17.1.20220712094352.7420908.el8ost RHEA-2023:4580 puppet-rsync 1.1.4-17.1.20220712155729.ea6397e.el8ost RHEA-2023:4580 puppet-rsyslog 4.0.1-1.20220727220448.2548a0d.el8ost RHEA-2023:4580 puppet-snmp 3.9.1-17.1.20220712103531.5d73485.el8ost RHEA-2023:4580 puppet-ssh 6.2.1-0.20220712092028.6e0f430.el8ost RHEA-2023:4580 puppet-stdlib 6.3.1-1.20220728003938.7c1ae25.el8ost RHEA-2023:4580 puppet-swift 18.6.1-1.20221219233951.f105ffc.el8ost RHEA-2023:4580 puppet-sysctl 0.0.13-17.1.20220712085616.847ec1c.el8ost RHEA-2023:4580 puppet-systemd 2.12.1-0.20220712093532.8f68b0d.el8ost RHEA-2023:4580 puppet-tripleo 14.2.3-1.20230521013956.el8ost RHEA-2023:4580 puppet-vcsrepo 3.1.1-0.20220712085822.a36ee18.el8ost RHEA-2023:4580 puppet-vswitch 14.4.2-1.20221109014312.51e82ca.el8ost RHEA-2023:4580 puppet-xinetd 3.3.1-17.1.20220712091204.8d460c4.el8ost RHEA-2023:4580 python-openstackclient-lang 5.5.2-1.20230404113830.42d9b6e.el8ost RHEA-2023:4580 python-oslo-cache-lang 2.7.1-1.20220727175242.d0252f6.el8ost RHEA-2023:4580 python-oslo-concurrency-lang 4.4.0-1.20220727153420.7dcf9e9.el8ost RHEA-2023:4580 python-oslo-db-lang 8.5.2-1.20221011033702.26fd6fb.el8ost RHEA-2023:4580 python-oslo-i18n-lang 5.0.1-1.20220727212022.73187bd.el8ost RHEA-2023:4580 python-oslo-log-lang 4.4.0-1.20230124123758.9b29c90.el8ost RHEA-2023:4580 python-oslo-middleware-lang 4.2.1-1.20220727174253.b40ca5f.el8ost RHEA-2023:4580 python-oslo-policy-lang 3.7.1-1.20220727124740.639b471.el8ost RHEA-2023:4580 python-oslo-privsep-lang 2.5.1-1.20230516124011.1634c00.el8ost RHEA-2023:4580 python-oslo-utils-lang 4.8.2-1.20230201104135.a38b56a.el8ost RHEA-2023:4580 python-oslo-versionedobjects-lang 2.4.1-1.20220727131633.89ff171.el8ost RHEA-2023:4580 python-oslo-vmware-lang 3.8.2-1.20220810165344.dc1a466.el8ost RHEA-2023:4580 python-pycadf-common 3.1.1-1.20220727223847.4179996.el8ost RHEA-2023:4580 python3-GitPython 3.1.14-1.el8ost.1 RHEA-2023:4580 python3-alembic 1.4.3-1.el8ost.1 RHEA-2023:4580 python3-amqp 5.0.6-2.el8ost RHEA-2023:4580 python3-ansible-runner 2.0.0a1-3.el8ost.1 RHEA-2023:4580 python3-anyjson 0.3.3-24.el8ost.1 RHEA-2023:4580 python3-aodhclient 2.2.0-1.20221118093728.b747ae3.el8ost RHEA-2023:4580 python3-appdirs 1.4.4-2.el8ost.1 RHEA-2023:4580 python3-automaton 2.3.1-1.20220727133327.4a3e539.el8ost RHEA-2023:4580 python3-barbicanclient 5.3.0-1.20230509083956.ad49c40.el8ost RHEA-2023:4580 python3-bcrypt 3.1.7-3.el8ost.1 RHEA-2023:4580 python3-beautifulsoup4 4.9.3-1.el8ost.1 RHEA-2023:4580 python3-boto 2.49.0-4.el8ost.1 RHEA-2023:4580 python3-cachetools 4.2.2-1.el8ost.1 RHEA-2023:4580 python3-castellan 3.7.2-1.20220727113716.3775cf4.el8ost RHEA-2023:4580 python3-cffi 1.13.2-1.el8ost.1 RHEA-2023:4580 python3-cinderclient 7.4.1-1.20220727224832.4f72e6f.el8ost RHEA-2023:4580 python3-cliff 3.7.0-1.20220727213806.117a100.el8ost RHEA-2023:4580 python3-cmd2 1.4.0-1.1.el8ost.1 RHEA-2023:4580 python3-collectd-rabbitmq-monitoring 0.0.6-4.el8ost RHEA-2023:4580 python3-colorama 0.4.4-2.el8ost.1 RHEA-2023:4580 python3-contextlib2 0.6.0.post1-1.el8ost.1 RHEA-2023:4580 python3-croniter 0.3.35-1.el8ost.1 RHEA-2023:4580 python3-cursive 0.2.2-17.1.20220727115339.d7cea1f.el8ost RHEA-2023:4580 python3-cycler 0.10.0-13.el8ost.1 RHEA-2023:4580 python3-daemon 2.3.0-1.el8ost.1 RHEA-2023:4580 python3-dataclasses 0.8-1.el8ost.1 RHEA-2023:4580 python3-dateutil 2.8.1-1.el8ost.1 RHEA-2023:4580 python3-ddt 1.4.2-1.el8ost.1 RHEA-2023:4580 python3-debtcollector 
2.2.0-1.20220727093815.649189d.el8ost RHEA-2023:4580 python3-decorator 4.4.0-5.el8ost.1 RHEA-2023:4580 python3-defusedxml 0.7.1-1.el8ost.1 RHEA-2023:4580 python3-designateclient 4.2.1-1.20220727191108.7a8d156.el8ost RHEA-2023:4580 python3-dogpile-cache 1.1.5-3.el8ost RHEA-2023:4580 python3-editor 1.0.4-4.el8ost.1 RHEA-2023:4580 python3-entrypoints 0.3-4.el8ost.1 RHEA-2023:4580 python3-etcd3gw 0.2.6-2.el8ost.1 RHEA-2023:4580 python3-eventlet 0.30.2-1.el8ost.1 RHEA-2023:4580 python3-extras 1.0.0-12.el8ost RHEA-2023:4580 python3-fasteners 0.14.1-21.el8ost.1 RHEA-2023:4580 python3-fixtures 3.0.0-16.el8ost.1 RHEA-2023:4580 python3-flake8 3.7.7-6.el8ost.1 RHEA-2023:4580 python3-future 0.18.2-3.el8ost.1 RHEA-2023:4580 python3-futurist 2.3.0-1.20220727101600.1a1c6f8.el8ost RHEA-2023:4580 python3-gitdb 4.0.5-2.el8ost RHEA-2023:4580 python3-glanceclient 3.3.0-1.20221115174113.f802c71.el8ost RHEA-2023:4580 python3-gnocchiclient 7.0.7-1.20220718110604.cad2c27.el8ost RHEA-2023:4580 python3-greenlet 1.0.0-1.el8ost RHEA-2023:4580 python3-heat-agent 2.2.1-1.20220728000604.ed16cc7.el8ost RHEA-2023:4580 python3-heat-agent-ansible 2.2.1-1.20220728000604.ed16cc7.el8ost RHEA-2023:4580 python3-heat-agent-apply-config 2.2.1-1.20220728000604.ed16cc7.el8ost RHEA-2023:4580 python3-heat-agent-docker-cmd 2.2.1-1.20220728000604.ed16cc7.el8ost RHEA-2023:4580 python3-heat-agent-hiera 2.2.1-1.20220728000604.ed16cc7.el8ost RHEA-2023:4580 python3-heat-agent-json-file 2.2.1-1.20220728000604.ed16cc7.el8ost RHEA-2023:4580 python3-heat-agent-puppet 2.2.1-1.20220728000604.ed16cc7.el8ost RHEA-2023:4580 python3-heatclient 2.3.1-1.20220727134220.d16c245.el8ost RHEA-2023:4580 python3-importlib-metadata 1.7.0-1.el8ost.1 RHEA-2023:4580 python3-ironic-inspector-client 4.5.0-1.20220727111646.3c03e21.el8ost RHEA-2023:4580 python3-ironicclient 4.6.4-1.20221027064127.09b78fa.el8ost RHEA-2023:4580 python3-iso8601 0.1.12-8.el8ost.1 RHEA-2023:4580 python3-jeepney 0.6.0-2.el8ost.2 RHEA-2023:4580 python3-jsonschema 3.2.0-5.el8ost.1 RHEA-2023:4580 python3-junitxml 0.7-26.el8ost.1 RHEA-2023:4580 python3-kazoo 2.8.0-1.el8ost.1 RHEA-2023:4580 python3-keyring 21.8.0-2.el8ost.1 RHEA-2023:4580 python3-keystoneauth1 4.4.0-1.20220727222004.112bcae.el8ost RHEA-2023:4580 python3-keystoneclient 4.3.0-1.20220727200548.d5cb761.el8ost RHEA-2023:4580 python3-keystonemiddleware 9.2.0-1.20220727112636.3659bda.el8ost RHEA-2023:4580 python3-kiwisolver 1.1.0-4.el8ost.1 RHEA-2023:4580 python3-kombu 5.0.2-1.el8ost.1 RHEA-2023:4580 python3-linecache2 1.0.0-25.el8ost.1 RHEA-2023:4580 python3-lockfile 0.12.2-2.el8ost.1 RHEA-2023:4580 python3-logutils 0.3.5-12.el8ost RHEA-2023:4580 python3-magnumclient 3.4.1-1.20230105113801.280acd2.el8ost RHEA-2023:4580 python3-manilaclient 2.6.4-1.20220727154316.7f7d7d3.el8ost RHEA-2023:4580 python3-markupsafe 1.1.0-7.el8ost.1 RHEA-2023:4580 python3-matplotlib 3.1.1-2.el8ost.1 RHEA-2023:4580 python3-matplotlib-data 3.1.1-2.el8ost.1 RHEA-2023:4580 python3-matplotlib-data-fonts 3.1.1-2.el8ost.1 RHEA-2023:4580 python3-matplotlib-tk 3.1.1-2.el8ost.1 RHEA-2023:4580 python3-mccabe 0.6.1-14.el8ost.1 RHEA-2023:4580 python3-memcached 1.59-2.el8ost RHEA-2023:4580 python3-metalsmith 1.4.4-1.20230515184003.5e7461e.el8ost RHEA-2023:4580 python3-microversion-parse 1.0.1-1.20220727102336.2c36df6.el8ost RHEA-2023:4580 python3-migrate 0.13.0-6.el8ost.1 RHEA-2023:4580 python3-mistralclient 4.2.0-1.20220727231108.20a10f0.el8ost RHEA-2023:4580 python3-monotonic 1.5-9.el8ost RHEA-2023:4580 python3-msgpack 1.0.2-1.el8ost.1 RHEA-2023:4580 python3-munch 
2.5.0-3.el8ost.1 RHEA-2023:4580 python3-natsort 7.1.1-2.el8ost.1 RHEA-2023:4580 python3-netifaces 0.10.9-10.el8ost RHEA-2023:4580 python3-networkx 2.5-2.el8ost.1 RHEA-2023:4580 python3-neutron-lib 2.10.2-1.20230510084012.el8ost RHEA-2023:4580 python3-neutron-tests-tempest 2.1.0-1.20230508093958.el8ost RHEA-2023:4580 python3-neutronclient 7.3.1-1.20221110063715.29a9f5e.el8ost RHEA-2023:4580 python3-nova 23.2.3-1.20230520123958.el8ost RHEA-2023:4580 python3-novaclient 17.4.1-1.20220917063708.5ee4427.el8ost RHEA-2023:4580 python3-numpy 1.17.0-11.el8ost RHEA-2023:4580 python3-numpy-f2py 1.17.0-11.el8ost RHEA-2023:4580 python3-octaviaclient 2.3.1-1.20220727225443.51347bc.el8ost RHEA-2023:4580 python3-openstackclient 5.5.2-1.20230404113830.42d9b6e.el8ost RHEA-2023:4580 python3-openstacksdk 0.55.1-1.20220727181957.f09ed4a.el8ost RHEA-2023:4580 python3-os-brick 4.3.4-1.20230519140256.cf69f92.el8ost RHEA-2023:4580 python3-os-client-config 2.1.0-1.20220727222853.bc96c23.el8ost RHEA-2023:4580 python3-os-ken 1.4.1-1.20220727125626.018d755.el8ost RHEA-2023:4580 python3-os-resource-classes 1.0.0-1.20220727233914.3dd3506.el8ost RHEA-2023:4580 python3-os-service-types 1.7.0-17.1.20220727104045.0b2f473.el8ost RHEA-2023:4580 python3-os-testr 2.0.0-1.20220727214705.248dc81.el8ost RHEA-2023:4580 python3-os-traits 2.5.0-1.20220727231953.ac1b39e.el8ost RHEA-2023:4580 python3-os-vif 2.4.1-1.20221116073809.el8ost RHEA-2023:4580 python3-os-win 5.4.0-1.20220727114255.cce95b4.el8ost RHEA-2023:4580 python3-osc-lib 2.3.1-1.20220727105040.2b7a679.el8ost RHEA-2023:4580 python3-osc-placement 2.2.0-1.20220727235707.f7640c9.el8ost RHEA-2023:4580 python3-oslo-cache 2.7.1-1.20220727175242.d0252f6.el8ost RHEA-2023:4580 python3-oslo-concurrency 4.4.0-1.20220727153420.7dcf9e9.el8ost RHEA-2023:4580 python3-oslo-config 8.5.1-1.20230124123758.de1dbee.el8ost RHEA-2023:4580 python3-oslo-context 3.2.1-1.20220727110908.b124eb7.el8ost RHEA-2023:4580 python3-oslo-db 8.5.2-1.20221011033702.26fd6fb.el8ost RHEA-2023:4580 python3-oslo-i18n 5.0.1-1.20220727212022.73187bd.el8ost RHEA-2023:4580 python3-oslo-log 4.4.0-1.20230124123758.9b29c90.el8ost RHEA-2023:4580 python3-oslo-messaging 12.7.3-1.20221212164246.5d6fd1a.el8ost RHEA-2023:4580 python3-oslo-middleware 4.2.1-1.20220727174253.b40ca5f.el8ost RHEA-2023:4580 python3-oslo-policy 3.7.1-1.20220727124740.639b471.el8ost RHEA-2023:4580 python3-oslo-privsep 2.5.1-1.20230516124011.1634c00.el8ost RHEA-2023:4580 python3-oslo-reports 2.2.0-1.20220727223653.bc631ae.el8ost RHEA-2023:4580 python3-oslo-rootwrap 6.3.1-1.20220727132540.1b1b960.el8ost RHEA-2023:4580 python3-oslo-serialization 4.1.1-1.20220727132453.bbe5d5a.el8ost RHEA-2023:4580 python3-oslo-service 2.5.1-1.20230124124620.c1e3398.el8ost RHEA-2023:4580 python3-oslo-upgradecheck 1.3.1-1.20220727131612.9561ecb.el8ost RHEA-2023:4580 python3-oslo-utils 4.8.2-1.20230201104135.a38b56a.el8ost RHEA-2023:4580 python3-oslo-versionedobjects 2.4.1-1.20220727131633.89ff171.el8ost RHEA-2023:4580 python3-oslo-vmware 3.8.2-1.20220810165344.dc1a466.el8ost RHEA-2023:4580 python3-osprofiler 3.4.0-1.20220727114659.5d82a02.el8ost RHEA-2023:4580 python3-ovsdbapp 1.9.4-1.20221108163737.65d02f0.el8ost RHEA-2023:4580 python3-packaging 20.4-1.el8ost.1 RHEA-2023:4580 python3-paramiko 2.7.2-2.el8ost.1 RHEA-2023:4580 python3-passlib 1.7.4-1.el8ost.1 RHEA-2023:4580 python3-paste 3.5.0-1.el8ost.1 RHEA-2023:4580 python3-paste-deploy 2.1.1-1.el8ost.1 RHEA-2023:4580 python3-pbr 5.5.1-1.el8ost.1 RHEA-2023:4580 python3-pecan 1.4.0-2.el8ost.2 RHEA-2023:4580 python3-pexpect 
4.7.0-4.el8ost.1 RHEA-2023:4580 python3-psutil 5.7.3-1.el8ost.2 RHEA-2023:4580 python3-pyasn1 0.4.6-3.el8ost.2 RHEA-2023:4580 python3-pyasn1-modules 0.4.6-3.el8ost.2 RHEA-2023:4580 python3-pycadf 3.1.1-1.20220727223847.4179996.el8ost RHEA-2023:4580 python3-pycodestyle 2.5.0-6.el8ost.1 RHEA-2023:4580 python3-pydot 1.4.1-2.el8ost.1 RHEA-2023:4580 python3-pyflakes 2.1.1-5.el8ost.1 RHEA-2023:4580 python3-pygraphviz 1.5-9.el8ost.1 RHEA-2023:4580 python3-pymemcache 3.5.0-2.el8ost RHEA-2023:4580 python3-pynacl 1.4.0-1.el8ost.1 RHEA-2023:4580 python3-pyparsing 2.4.6-1.el8ost.1 RHEA-2023:4580 python3-pyperclip 1.8.0-2.el8ost.1 RHEA-2023:4580 python3-pyrabbit2 1.0.6-3.el8ost RHEA-2023:4580 python3-pyroute2 0.6.6-1.el8ost RHEA-2023:4580 python3-pyrsistent 0.17.3-1.el8ost.1 RHEA-2023:4580 python3-pystache 0.5.4-13.el8ost.1 RHEA-2023:4580 python3-pyyaml 5.4.1-2.el8ost.1 RHEA-2023:4580 python3-redis 3.5.3-1.el8ost.1 RHEA-2023:4580 python3-repoze-lru 0.7-6.el8ost.1 RHEA-2023:4580 python3-requests 2.25.1-1.el8ost.1 RHEA-2023:4580 python3-requestsexceptions 1.4.0-17.1.20220727093815.d7ac0ff.el8ost RHEA-2023:4580 python3-retrying 1.3.3-1.el8ost.1 RHEA-2023:4580 python3-rfc3986 1.4.0-3.el8ost.1 RHEA-2023:4580 python3-rhosp-openvswitch 3.1-1.el8ost RHEA-2023:4580 python3-routes 2.4.1-17.el8ost.1 RHEA-2023:4580 python3-rsa 4.6-3.el8ost.1 RHEA-2023:4580 python3-saharaclient 3.3.0-1.20220727120214.401e663.el8ost RHEA-2023:4580 python3-secretstorage 3.3.1-1.el8ost.1 RHEA-2023:4580 python3-setproctitle 1.2.2-1.el8ost.1 RHEA-2023:4580 python3-shade 1.33.0-1.20220727235444.e7c7f29.el8ost RHEA-2023:4580 python3-simplegeneric 0.8.1-17.el8ost.1 RHEA-2023:4580 python3-simplejson 3.17.5-1.el8ost.1 RHEA-2023:4580 python3-six 1.15.0-2.el8ost.1 RHEA-2023:4580 python3-smmap 3.0.1-4.el8ost RHEA-2023:4580 python3-soupsieve 2.2-1.el8ost.1 RHEA-2023:4580 python3-sqlalchemy-collectd 0.0.6-2.el8ost RHEA-2023:4580 python3-sqlparse 0.4.1-1.el8ost.1 RHEA-2023:4580 python3-statsd 3.2.1-16.el8ost.1 RHEA-2023:4580 python3-stestr 2.6.0-4.el8ost.1 RHEA-2023:4580 python3-stevedore 3.3.3-1.20221025091001.7b48fff.el8ost RHEA-2023:4580 python3-subunit 1.4.0-6.el8ost.1 RHEA-2023:4580 python3-swiftclient 3.11.1-1.20220727110024.06b36ae.el8ost RHEA-2023:4580 python3-taskflow 4.6.0-1.20220727230513.0f7c6e9.el8ost RHEA-2023:4580 python3-tempest 33.0.0-1.20230406153813.1580f6f.el8ost RHEA-2023:4580 python3-tempestconf 3.3.1-1.20221216123732.29cc500.el8ost RHEA-2023:4580 python3-tempita 0.5.1-25.el8ost.1 RHEA-2023:4580 python3-tenacity 6.3.1-1.el8ost.1 RHEA-2023:4580 python3-testscenarios 0.5.0-17.el8ost.1 RHEA-2023:4580 python3-testtools 2.5.0-2.el8ost RHEA-2023:4580 python3-tinyrpc 1.0.3-1.el8ost.1 RHEA-2023:4580 python3-tooz 2.8.3-1.20220810165244.73dbe0e.el8ost RHEA-2023:4580 python3-traceback2 1.4.0-25.el8ost.1 RHEA-2023:4580 python3-tripleo-common 15.4.1-1.20230518205256.el8ost RHEA-2023:4580 python3-tripleoclient 16.5.1-1.20230505004752.el8ost RHEA-2023:4580 python3-troveclient 7.0.0-1.20220727231400.c7319d8.el8ost RHEA-2023:4580 python3-ujson 2.0.3-3.el8ost RHEA-2023:4580 python3-unittest2 1.1.0-24.el8ost.1 RHEA-2023:4580 python3-urllib-gssapi 1.0.1-11.el8ost RHEA-2023:4580 python3-urllib3 1.25.10-4.el8ost.1 RHEA-2023:4580 python3-validations-libs 1.9.1-1.20230421024008.7213419.el8ost RHEA-2023:4580 python3-vine 5.0.0-2.1.el8ost RHEA-2023:4580 python3-voluptuous 0.12.1-1.el8ost.1 RHEA-2023:4580 python3-waitress 2.0.0-3.el8ost RHEA-2023:4580 python3-warlock 1.3.3-1.el8ost.1 RHEA-2023:4580 python3-wcwidth 0.2.5-2.el8ost.1 RHEA-2023:4580 
python3-webob 1.8.7-1.el8ost.1 RHEA-2023:4580 python3-webtest 2.0.35-3.el8ost.1 RHEA-2023:4580 python3-werkzeug 1.0.1-3.el8ost.1 RHEA-2023:4580 python3-wrapt 1.12.1-3.el8ost.1 RHEA-2023:4580 python3-yappi 1.3.1-2.el8ost.1 RHEA-2023:4580 python3-yaql 1.1.3-10.el8ost.1 RHEA-2023:4580 python3-zake 0.2.2-19.el8ost.1 RHEA-2023:4580 python3-zaqarclient 2.3.0-1.20220727225720.e388947.el8ost RHEA-2023:4580 python3-zipp 3.4.0-1.el8ost.1 RHEA-2023:4580 qpid-proton-c 0.32.0-2.el8 RHEA-2023:4580 qpid-proton-c-devel 0.32.0-2.el8 RHEA-2023:4580 rhosp-network-scripts-openvswitch 3.1-1.el8ost RHEA-2023:4580 rhosp-openvswitch 3.1-1.el8ost RHEA-2023:4580 rhosp-release 17.1.0-1.el8ost RHEA-2023:4580 ruby-augeas 0.5.0-23.el8ost.1 RHEA-2023:4580 ruby-facter 3.14.19-1.el8ost RHEA-2023:4580 ruby-shadow 2.5.0-8.el8ost RHEA-2023:4580 rubygem-concurrent-ruby 1.1.5-2.el8ost.1 RHEA-2023:4580 rubygem-deep_merge 1.2.1-4.el8ost.1 RHEA-2023:4580 rubygem-fast_gettext 1.2.0-9.el8ost.1 RHEA-2023:4580 rubygem-hocon 1.3.1-2.el8ost RHEA-2023:4580 rubygem-multi_json 1.15.0-2.el8ost RHEA-2023:4580 rubygem-puppet-resource_api 1.8.13-1.el8ost RHEA-2023:4580 rubygem-semantic_puppet 1.0.4-2.el8ost RHEA-2023:4580 subunit-filters 1.4.0-6.el8ost.1 RHEA-2023:4580 tripleo-ansible 3.3.1-1.20230521003956.el8ost RHEA-2023:4580 validations-common 1.9.1-1.20230406033859.e96c169.el8ost RHEA-2023:4580 yaml-cpp 0.6.3-4.el8ost.1 RHEA-2023:4580 zstd 1.4.5-6.el8ost.1 RHEA-2023:4580 | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/package_manifest/openstack-17.1-for-rhel-8-x86_64-rpms_2023-08-16 |
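For reference, a registered Red Hat Enterprise Linux 8 host with the appropriate subscription can inspect this repository with standard tooling; the following sketch is illustrative rather than part of the manifest itself.
# Enable the repository listed above, then query the packages it provides
subscription-manager repos --enable=openstack-17.1-for-rhel-8-x86_64-rpms
dnf repository-packages openstack-17.1-for-rhel-8-x86_64-rpms list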
Chapter 7. Discarding unused blocks | Chapter 7. Discarding unused blocks You can perform or schedule discard operations on block devices that support them. The block discard operation communicates to the underlying storage which file system blocks are no longer in use by the mounted file system. Block discard operations allow SSDs to optimize garbage collection routines, and they can inform thinly-provisioned storage to repurpose unused physical blocks. Requirements The block device underlying the file system must support physical discard operations. Physical discard operations are supported if the value in the /sys/block/ <device> /queue/discard_max_bytes file is not zero. 7.1. Types of block discard operations You can run discard operations using different methods: Batch discard Is triggered explicitly by the user and discards all unused blocks in the selected file systems. Online discard Is specified at mount time and triggers in real time without user intervention. Online discard operations discard only blocks that are transitioning from the used to the free state. Periodic discard Are batch operations that are run regularly by a systemd service. All types are supported by the XFS and ext4 file systems. Recommendations Red Hat recommends that you use batch or periodic discard. Use online discard only if: the system's workload is such that batch discard is not feasible, or online discard operations are necessary to maintain performance. 7.2. Performing batch block discard You can perform a batch block discard operation to discard unused blocks on a mounted file system. Prerequisites The file system is mounted. The block device underlying the file system supports physical discard operations. Procedure Use the fstrim utility: To perform discard only on a selected file system, use: To perform discard on all mounted file systems, use: If you execute the fstrim command on: a device that does not support discard operations, or a logical device (LVM or MD) composed of multiple devices, where any one of the device does not support discard operations, the following message displays: Additional resources fstrim(8) man page on your system 7.3. Enabling online block discard You can perform online block discard operations to automatically discard unused blocks on all supported file systems. Procedure Enable online discard at mount time: When mounting a file system manually, add the -o discard mount option: When mounting a file system persistently, add the discard option to the mount entry in the /etc/fstab file. Additional resources mount(8) and fstab(5) man pages on your system 7.4. Enabling online block discard by using the storage RHEL system role You can mount an XFS file system with the online block discard option to automatically discard unused blocks. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Enable online block discard ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_options: discard For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that online block discard option is enabled: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 7.5. Enabling periodic block discard You can enable a systemd timer to regularly discard unused blocks on all supported file systems. Procedure Enable and start the systemd timer: Verification Verify the status of the timer: | [
"fstrim mount-point",
"fstrim --all",
"fstrim /mnt/non_discard fstrim: /mnt/non_discard : the discard operation is not supported",
"mount -o discard device mount-point",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Enable online block discard ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_options: discard",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'findmnt /mnt/data'",
"systemctl enable --now fstrim.timer Created symlink /etc/systemd/system/timers.target.wants/fstrim.timer /usr/lib/systemd/system/fstrim.timer.",
"systemctl status fstrim.timer fstrim.timer - Discard unused blocks once a week Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: disabled) Active: active (waiting) since Wed 2023-05-17 13:24:41 CEST; 3min 15s ago Trigger: Mon 2023-05-22 01:20:46 CEST; 4 days left Docs: man:fstrim May 17 13:24:41 localhost.localdomain systemd[1]: Started Discard unused blocks once a week."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/discarding-unused-blocks_managing-storage-devices |
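Two small illustrative additions to the commands above: checking whether the underlying device supports physical discard, and a persistent mount entry with the discard option. The device name sdb, the UUID placeholder, and the /mnt/data mount point are examples only.
# A non-zero value means the device supports physical discard operations
cat /sys/block/sdb/queue/discard_max_bytes
# Hypothetical /etc/fstab entry mounting an XFS file system with online discard enabled
# UUID=<filesystem-uuid>  /mnt/data  xfs  defaults,discard  0 0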
Chapter 7. MachineConfig [machineconfiguration.openshift.io/v1] | Chapter 7. MachineConfig [machineconfiguration.openshift.io/v1] Description MachineConfig defines the configuration for a machine Type object 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineConfigSpec is the spec for MachineConfig 7.1.1. .spec Description MachineConfigSpec is the spec for MachineConfig Type object Property Type Description baseOSExtensionsContainerImage string baseOSExtensionsContainerImage specifies the remote location that will be used to fetch the extensions container matching a new-format OS image config `` Config is a Ignition Config object. extensions `` List of additional features that can be enabled on host fips boolean FIPS controls FIPS mode kernelArguments `` KernelArguments contains a list of kernel arguments to be added kernelType string Contains which kernel we want to be running like default (traditional), realtime osImageURL string OSImageURL specifies the remote location that will be used to fetch the OS 7.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/machineconfigs DELETE : delete collection of MachineConfig GET : list objects of kind MachineConfig POST : create a MachineConfig /apis/machineconfiguration.openshift.io/v1/machineconfigs/{name} DELETE : delete a MachineConfig GET : read the specified MachineConfig PATCH : partially update the specified MachineConfig PUT : replace the specified MachineConfig 7.2.1. /apis/machineconfiguration.openshift.io/v1/machineconfigs Table 7.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of MachineConfig Table 7.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfig Table 7.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.5. HTTP responses HTTP code Reponse body 200 - OK MachineConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfig Table 7.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.7. Body parameters Parameter Type Description body MachineConfig schema Table 7.8. HTTP responses HTTP code Reponse body 200 - OK MachineConfig schema 201 - Created MachineConfig schema 202 - Accepted MachineConfig schema 401 - Unauthorized Empty 7.2.2. /apis/machineconfiguration.openshift.io/v1/machineconfigs/{name} Table 7.9. Global path parameters Parameter Type Description name string name of the MachineConfig Table 7.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a MachineConfig Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.12. Body parameters Parameter Type Description body DeleteOptions schema Table 7.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfig Table 7.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.15. HTTP responses HTTP code Reponse body 200 - OK MachineConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfig Table 7.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.17. Body parameters Parameter Type Description body Patch schema Table 7.18. HTTP responses HTTP code Reponse body 200 - OK MachineConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfig Table 7.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body MachineConfig schema Table 7.21. HTTP responses HTTP code Response body 200 - OK MachineConfig schema 201 - Created MachineConfig schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_apis/machineconfig-machineconfiguration-openshift-io-v1
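In practice, these REST endpoints are usually exercised through the oc client rather than called directly, because oc handles authentication, content types, and query parameters such as dryRun for you. The following commands are a minimal sketch of the list, read, and patch operations described in the tables above; the MachineConfig name 99-worker-example is a hypothetical placeholder.

$ oc get machineconfigs                                  # GET on the collection endpoint
$ oc get machineconfig 99-worker-example -o yaml         # GET on .../machineconfigs/{name}
$ oc patch machineconfig 99-worker-example --type merge \
    -p '{"metadata":{"labels":{"example":"true"}}}' \
    --dry-run=server                                     # PATCH with a server-side dry run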
2.8. The Storage Pool Manager | 2.8. The Storage Pool Manager Red Hat Virtualization uses metadata to describe the internal structure of storage domains. Structural metadata is written to a segment of each storage domain. Hosts work with the storage domain metadata based on a single writer, and multiple readers configuration. Storage domain structural metadata tracks image and snapshot creation and deletion, and volume and domain extension. The host that can make changes to the structure of the data domain is known as the Storage Pool Manager (SPM). The SPM coordinates all metadata changes in the data center, such as creating and deleting disk images, creating and merging snapshots, copying images between storage domains, creating templates and storage allocation for block devices. There is one SPM for every data center. All other hosts can only read storage domain structural metadata. A host can be manually selected as the SPM, or it can be assigned by the Red Hat Virtualization Manager. The Manager assigns the SPM role by causing a potential SPM host to attempt to assume a storage-centric lease. The lease allows the SPM host to write storage metadata. It is storage-centric because it is written to the storage domain rather than being tracked by the Manager or hosts. Storage-centric leases are written to a special logical volume in the master storage domain called leases . Metadata about the structure of the data domain is written to a special logical volume called metadata . The leases logical volume protects the metadata logical volume from changes. The Manager uses VDSM to issue the spmStart command to a host, causing VDSM on that host to attempt to assume the storage-centric lease. If the host is successful it becomes the SPM and retains the storage-centric lease until the Red Hat Virtualization Manager requests that a new host assume the role of SPM. The Manager moves the SPM role to another host if: The SPM host can not access all storage domains, but can access the master storage domain The SPM host is unable to renew the lease because of a loss of storage connectivity or the lease volume is full and no write operation can be performed The SPM host crashes Figure 2.1. The Storage Pool Manager Exclusively Writes Structural Metadata. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/role_the_storage_pool_manager |
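To see which host currently holds the SPM role, you can query the Manager REST API; each host representation includes Storage Pool Manager status information. The call below is an illustrative sketch only: the Manager FQDN and credentials are placeholders, and the exact shape of the spm element can vary between versions, so verify it against your own deployment.

$ curl -s -k -u admin@internal:password \
    -H 'Accept: application/xml' \
    https://rhvm.example.com/ovirt-engine/api/hosts | grep -i -A 2 '<spm>'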
Chapter 4. Deprecated features | Chapter 4. Deprecated features The features deprecated in this release, and that were supported in releases of AMQ Streams, are outlined below. 4.1. Kafka Connect with Source-to-Image (S2I) AMQ Streams 1.7 introduces build configuration to the KafkaConnect resource, as described in Chapter 1, Features . With the introduction of build configuration to the KafkaConnect resource, AMQ Streams can now automatically build a container image with the connector plugins you require for your data connections. As a result, support for Kafka Connect with Source-to-Image (S2I) is deprecated. To prepare for this change, you can migrate Kafka Connect S2I instances to Kafka Connect instances. See Migrating from Kafka Connect with S2I to Kafka Connect 4.2. Metrics configuration Metrics configuration is now specified as a ConfigMap for Kafka components. Previously, the spec.metrics property was used. To update the configuration, and enable Prometheus metrics export, a new ConfigMap must be created that matches the configuration for the .spec.metrics property. The .spec.metricsConfig property is used to specify the ConfigMap, as described in Chapter 1, Features . See Upgrading AMQ Streams 4.3. API versions The introduction of v1beta2 updates the schemas of the custom resources. Older API versions are deprecated. The v1alpha1 API version is deprecated for the following AMQ Streams custom resources: Kafka KafkaConnect KafkaConnectS2I KafkaConnector KafkaMirrorMaker KafkaMirrorMaker2 KafkaTopic KafkaUser KafkaBridge KafkaRebalance The v1beta1 API version is deprecated for the following AMQ Streams custom resources: Kafka KafkaConnect KafkaConnectS2I KafkaMirrorMaker KafkaTopic KafkaUser Important The v1alpha1 and v1beta1 versions will be removed in the minor release. See AMQ Streams custom resource upgrades . 4.4. Annotations The following annotations are deprecated, and will be removed in AMQ Streams 1.8.0: Table 4.1. Deprecated annotations and their replacements Deprecated annotation Replacement annotation cluster.operator.strimzi.io/delete-claim strimzi.io/delete-claim (Internal) operator.strimzi.io/generation strimzi.io/generation (Internal) operator.strimzi.io/delete-pod-and-pvc strimzi.io/delete-pod-and-pvc operator.strimzi.io/manual-rolling-update strimzi.io/manual-rolling-update | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/release_notes_for_amq_streams_1.7_on_openshift/deprecated-features-str |
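The following snippet is a hedged sketch of the new metrics configuration model: a ConfigMap holds the Prometheus JMX Exporter rules, and the Kafka custom resource points to it through the .spec.kafka.metricsConfig property. The ConfigMap name, key, and the single rule shown here are illustrative placeholders; when migrating, reuse the rules from your existing .spec.metrics configuration. The Kafka resource is truncated to the relevant fields.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-metrics
data:
  kafka-metrics-config.yml: |
    lowercaseOutputName: true
    rules:
    - pattern: "kafka.server<type=(.+), name=(.+)><>Value"
      name: "kafka_server_$1_$2"
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ... listeners, storage, and other kafka settings omitted
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics
          key: kafka-metrics-config.yml
  # ... zookeeper and entityOperator sections omitted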
Chapter 16. Deleting applications | Chapter 16. Deleting applications You can delete applications created in your project. 16.1. Deleting applications using the Developer perspective You can delete an application and all of its associated components using the Topology view in the Developer perspective: Click the application you want to delete to see the side panel with the resource details of the application. Click the Actions drop-down menu displayed on the upper right of the panel, and select Delete Application to see a confirmation dialog box. Enter the name of the application and click Delete to delete it. You can also right-click the application you want to delete and click Delete Application to delete it. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/building_applications/odc-deleting-applications |
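If you prefer the CLI, the same cleanup can usually be performed by deleting every resource that carries the application's part-of label, which the Developer perspective applies to the components it groups into an application. This is a hedged sketch rather than an exact equivalent: list the resources first and confirm the label is present, because components created outside the console may not carry it, and some resource types are not covered by the all alias.

$ oc get all -l app.kubernetes.io/part-of=<application-name> -n <project>
$ oc delete all -l app.kubernetes.io/part-of=<application-name> -n <project>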
C.5. Debugging and Testing Services and Resource Ordering | C.5. Debugging and Testing Services and Resource Ordering You can debug and test services and resource ordering with the rg_test utility. rg_test is a command-line utility provided by the rgmanager package that is run from a shell or a terminal (it is not available in Conga ). Table C.2, " rg_test Utility Summary" summarizes the actions and syntax for the rg_test utility. Table C.2. rg_test Utility Summary Action Syntax Display the resource rules that rg_test understands. rg_test rules Test a configuration (and /usr/share/cluster) for errors or redundant resource agents. rg_test test /etc/cluster/cluster.conf Display the start and stop ordering of a service. Display start order: rg_test noop /etc/cluster/cluster.conf start service servicename Display stop order: rg_test noop /etc/cluster/cluster.conf stop service servicename Explicitly start or stop a service. Important Only do this on one node, and always disable the service in rgmanager first. Start a service: rg_test test /etc/cluster/cluster.conf start service servicename Stop a service: rg_test test /etc/cluster/cluster.conf stop service servicename Calculate and display the resource tree delta between two cluster.conf files. rg_test delta cluster.conf file 1 cluster.conf file 2 For example: rg_test delta /etc/cluster/cluster.conf.bak /etc/cluster/cluster.conf | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-clust-rsc-testing-config-CA |
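For example, to check a configuration for errors and then preview the start order of a service named webservice (a hypothetical service name), you might run:

$ rg_test test /etc/cluster/cluster.conf
$ rg_test noop /etc/cluster/cluster.conf start service webservice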
Appendix B. ASN.1 and Distinguished Names | Appendix B. ASN.1 and Distinguished Names Abstract The OSI Abstract Syntax Notation One (ASN.1) and X.500 Distinguished Names play an important role in the security standards that define X.509 certificates and LDAP directories. B.1. ASN.1 Overview The Abstract Syntax Notation One (ASN.1) was defined by the OSI standards body in the early 1980s to provide a way of defining data types and structures that are independent of any particular machine hardware or programming language. In many ways, ASN.1 can be considered a forerunner of modern interface definition languages, such as the OMG's IDL and WSDL, which are concerned with defining platform-independent data types. ASN.1 is important, because it is widely used in the definition of standards (for example, SNMP, X.509, and LDAP). In particular, ASN.1 is ubiquitous in the field of security standards. The formal definitions of X.509 certificates and distinguished names are described using ASN.1 syntax. You do not require detailed knowledge of ASN.1 syntax to use these security standards, but you need to be aware that ASN.1 is used for the basic definitions of most security-related data types. BER The OSI's Basic Encoding Rules (BER) define how to translate an ASN.1 data type into a sequence of octets (binary representation). The role played by BER with respect to ASN.1 is, therefore, similar to the role played by GIOP with respect to the OMG IDL. DER The OSI's Distinguished Encoding Rules (DER) are a specialization of the BER. The DER consists of the BER plus some additional rules to ensure that the encoding is unique (BER encodings are not). References You can read more about ASN.1 in the following standards documents: ASN.1 is defined in X.208. BER is defined in X.209. B.2. Distinguished Names Overview Historically, distinguished names (DN) are defined as the primary keys in an X.500 directory structure. However, DNs have come to be used in many other contexts as general purpose identifiers. In Apache CXF, DNs occur in the following contexts: X.509 certificates-for example, one of the DNs in a certificate identifies the owner of the certificate (the security principal). LDAP-DNs are used to locate objects in an LDAP directory tree. String representation of DN Although a DN is formally defined in ASN.1, there is also an LDAP standard that defines a UTF-8 string representation of a DN (see RFC 2253 ). The string representation provides a convenient basis for describing the structure of a DN. Note The string representation of a DN does not provide a unique representation of DER-encoded DN. Hence, a DN that is converted from string format back to DER format does not always recover the original DER encoding. DN string example The following string is a typical example of a DN: Structure of a DN string A DN string is built up from the following basic elements: OID . Attribute Types . AVA . RDN . OID An OBJECT IDENTIFIER (OID) is a sequence of bytes that uniquely identifies a grammatical construct in ASN.1. Attribute types The variety of attribute types that can appear in a DN is theoretically open-ended, but in practice only a small subset of attribute types are used. Table B.1, "Commonly Used Attribute Types" shows a selection of the attribute types that you are most likely to encounter: Table B.1. Commonly Used Attribute Types String Representation X.500 Attribute Type Size of Data Equivalent OID C countryName 2 2.5.4.6 O organizationName 1... 64 2.5.4.10 OU organizationalUnitName 1... 
64 2.5.4.11 CN commonName 1... 64 2.5.4.3 ST stateOrProvinceName 1... 64 2.5.4.8 L localityName 1... 64 2.5.4.7 STREET streetAddress DC domainComponent UID userid AVA An attribute value assertion (AVA) assigns an attribute value to an attribute type. In the string representation, it has the following syntax: For example: Alternatively, you can use the equivalent OID to identify the attribute type in the string representation (see Table B.1, "Commonly Used Attribute Types" ). For example: RDN A relative distinguished name (RDN) represents a single node of a DN (the bit that appears between the commas in the string representation). Technically, an RDN might contain more than one AVA (it is formally defined as a set of AVAs). However, this almost never occurs in practice. In the string representation, an RDN has the following syntax: Here is an example of a (very unlikely) multiple-value RDN: Here is an example of a single-value RDN: | [
"C=US,O=IONA Technologies,OU=Engineering,CN=A. N. Other",
"<attr-type> = <attr-value>",
"CN=A. N. Other",
"2.5.4.3=A. N. Other",
"<attr-type> = <attr-value>[ + <attr-type> =<attr-value> ...]",
"OU=Eng1+OU=Eng2+OU=Eng3",
"OU=Engineering"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_security_guide/dn |
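As a practical illustration of where these DN strings appear, you can print the subject DN of an X.509 certificate with OpenSSL; the -nameopt RFC2253 option requests the string representation described above. The certificate file name is a placeholder, and the output shown is indicative only; note that RFC 2253 output lists the most specific RDN first.

$ openssl x509 -in cert.pem -noout -subject -nameopt RFC2253
subject=CN=A. N. Other,OU=Engineering,O=IONA Technologies,C=US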
Chapter 5. View OpenShift Data Foundation Topology | Chapter 5. View OpenShift Data Foundation Topology The topology shows a mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the Storage cluster. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health, or indications for alerts. Choose a node to view its details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of problems and offers a level of granularity that aids in troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_ibm_power/viewing-odf-topology_mcg-verify
13.5. Setting Search Attributes for Users and User Groups | 13.5. Setting Search Attributes for Users and User Groups When searching entries for a specified keyword using the ipa user-find keyword and ipa group-find keyword commands, IdM only searches certain attributes. Most notably: In user searches: first name, last name, user name (login ID), job title, organization unit, phone number, UID, email address. In group searches: group name, description. The following procedure shows how to configure IdM to search other attributes as well. Note that IdM always searches the default attributes. For example, even if you remove the job title attribute from the list of user search attributes, IdM will still search user titles. Prerequisites Before adding a new attribute, make sure that a corresponding index exists within the LDAP directory for this attribute. Most standard LDAP attributes have indexes in LDAP, but if you want to add a custom attribute, you must create an index manually. See Creating Standard Indexes in the Red Hat Directory Server 10 Administration Guide . Web UI: Setting Search Attributes Select IPA Server Configuration . In the User Options area, set the user search attributes in User search fields . In the Group Options area, set the group search attributes in Group search fields . Click Save at the top of the page. Command Line: Setting Search Attributes Use the ipa config-mod command with these options: --usersearch defines a new list of search attributes for users --groupsearch defines a new list of search attributes for groups For example: | [
"ipa config-mod --usersearch=\"uid,givenname,sn,telephonenumber,ou,title\" ipa config-mod --groupsearch=\"cn,description\""
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/users-search-entries-attributes |
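To verify the result, you can display the current IdM server configuration, where the configured search fields are listed. The grep filter shown is only a convenience, and the exact field labels may differ slightly between versions.

$ ipa config-show | grep -i 'search fields'
  User search fields: uid,givenname,sn,telephonenumber,ou,title
  Group search fields: cn,description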
18.2. Cache Store Configuration Details (Library Mode) | 18.2. Cache Store Configuration Details (Library Mode) The following lists contain details about the configuration elements and parameters for cache store elements in JBoss Data Grid's Library mode: The namedCache Element Add the name value to the name parameter to set the name of the cache store. The persistence Element The passivation parameter affects the way in which Red Hat JBoss Data Grid interacts with stores. When an object is evicted from in-memory cache, passivation writes it to a secondary data store, such as a system or a database. Valid values for this parameter are true and false but passivation is set to false by default. The singleFile Element The shared parameter indicates that the cache store is shared by different cache instances. For example, where all instances in a cluster use the same JDBC settings to talk to the same remote, shared database. shared is false by default. When set to true , it prevents duplicate data being written to the cache store by different cache instances. For the LevelDB cache stores, this parameter must be excluded from the configuration, or set to false because sharing this cache store is not supported. The preload parameter is set to false by default. When set to true the data stored in the cache store is preloaded into the memory when the cache starts. This allows data in the cache store to be available immediately after startup and avoids cache operations delays as a result of loading data lazily. Preloaded data is only stored locally on the node, and there is no replication or distribution of the preloaded data. Red Hat JBoss Data Grid will only preload up to the maximum configured number of entries in eviction. The fetchPersistentState parameter determines whether or not to fetch the persistent state of a cache and apply it to the local cache store when joining the cluster. If the cache store is shared the fetch persistent state is ignored, as caches access the same cache store. A configuration exception will be thrown when starting the cache service if more than one cache store has this property set to true . The fetchPersistentState property is false by default. The ignoreModifications parameter determines whether modification methods are applied to the specific cache store. This allows write operations to be applied to the local file cache store, but not the shared cache store. In some cases, transient application data should only reside in a file-based cache store on the same server as the in-memory cache. For example, this would apply with a further JDBC based cache store used by all servers in the network. ignoreModifications is false by default. The maxEntries parameter provides maximum number of entries allowed. The default value is -1 for unlimited entries. The maxKeysInMemory parameter is used to speed up data lookup. The single file store keeps an index of keys and their positions in the file, restricting the size of the index using the maxKeysInMemory parameter. The default value for this parameter is -1. The purgeOnStartup parameter controls whether cache store is purged when it starts up. The location configuration element sets a location on disk where the store can write. The async Element The async element contains parameters that configure various aspects of the cache store. The enabled parameter determines whether the file store is asynchronous. The threadPoolSize parameter specifies the number of threads that concurrently apply modifications to the store. 
The default value for this parameter is 1 . The flushLockTimeout parameter specifies the time to acquire the lock which guards the state to be flushed to the cache store periodically. The default value for this parameter is 1 . The modificationQueueSize parameter specifies the size of the modification queue for the asynchronous store. If updates are made at a rate that is faster than the underlying cache store can process this queue, then the asynchronous store behaves like a synchronous store for that period, blocking until the queue can accept more elements. The default value for this parameter is 1024 elements. The shutdownTimeout parameter specifies maximum amount of time that can be taken to stop the cache store. Default value for this parameter is 25 seconds. The singleton Element The singleton element enables modifications to be stored by only one node in the cluster. This node is called the coordinator. The coordinator pushes the caches in-memory states to disk. The shared element cannot be defined with singleton enabled at the same time. The enabled attribute determines whether this feature is enabled. Valid values for this parameter are true and false . The enabled attribute is set to false by default. The pushStateWhenCoordinator parameter is set to true by default. If true , this property causes a node that has become the coordinator to transfer in-memory state to the underlying cache store. This parameter is useful where the coordinator has crashed and a new coordinator is elected. When pushStateWhenCoordinator is set to true , the pushStateTimeout parameter sets the maximum number of milliseconds that the process pushing the in-memory state to the underlying cache loader can take. The default time for this parameter is 10 seconds. The remoteStore Element The remoteCacheName attribute specifies the name of the remote cache to which it intends to connect in the remote Infinispan cluster. The default cache will be used if the remote cache name is unspecified. The fetchPersistentState attribute, when set to true , ensures that the persistent state is fetched when the remote cache joins the cluster. If multiple cache stores are chained, only one cache store can have this property set to true . The default for this value is false . The shared attribute is set to true when multiple cache instances share a cache store, which prevents multiple cache instances writing the same modification individually. The default for this attribute is false . The preload attribute ensures that the cache store data is pre-loaded into memory and is immediately accessible after starting up. The disadvantage of setting this to true is that the start up time increases. The default value for this attribute is false . The ignoreModifications attribute prevents cache modification operations such as put, remove, clear, store, etc. from affecting the cache store. As a result, the cache store can become out of sync with the cache. The default value for this attribute is false . The purgeOnStartup attribute ensures that the cache store is purged during the start up process. The default value for this attribute is false . The tcpNoDelay attribute triggers the TCP NODELAY stack. The default value for this attribute is true . The pingOnStartup attribute sends a ping request to a back end server to fetch the cluster topology. The default value for this attribute is true . The keySizeEstimate attribute provides an estimation of the key size. The default value for this attribute is 64 . 
The valueSizeEstimate attribute specifies the size of the byte buffers when serializing and deserializing values. The default value for this attribute is 512 . The forceReturnValues attribute sets whether FORCE_RETURN_VALUE is enabled for all calls. The default value for this attribute is false . The servers and server Elements Create a servers element within the remoteStore element to set up the server information for multiple servers. Add a server element within the general servers element to add the information for a single server. The host attribute configures the host address. The port attribute configures the port used by the Remote Cache Store. The connectionPool Element The maxActive attribute indicates the maximum number of active connections for each server at a time. The default value for this attribute is -1 which indicates an infinite number of active connections. The maxIdle attribute indicates the maximum number of idle connections for each server at a time. The default value for this attribute is -1 which indicates an infinite number of idle connections. The maxTotal attribute indicates the maximum number of persistent connections within the combined set of servers. The default setting for this attribute is -1 which indicates an infinite number of connections. The connectionUrl parameter specifies the JDBC driver-specific connection URL. The username parameter contains the username used to connect via the connectionUrl . The driverClass parameter specifies the class name of the driver used to connect to the database. The leveldbStore Element The location parameter specifies the location to store the primary cache store. The directory is automatically created if it does not exist. The expiredLocation parameter specifies the location for expired data using. The directory stores expired data before it is purged. The directory is automatically created if it does not exist. The shared parameter specifies whether the cache store is shared. The only supported value for this parameter in the LevelDB cache store is false . The preload parameter specifies whether the cache store will be pre-loaded. Valid values are true and false . The jpaStore Element The persistenceUnitName attribute specifies the name of the JPA cache store. The entityClassName attribute specifies the fully qualified class name of the JPA entity used to store the cache entry value. The batchSize (optional) attribute specifies the batch size for cache store streaming. The default value for this attribute is 100 . The storeMetadata (optional) attribute specifies whether the cache store keeps the metadata (for example expiration and versioning information) with the entries. The default value for this attribute is true . The binaryKeyedJdbcStore, stringKeyedJdbcStore, and mixedKeyedJdbcStore Elements The fetchPersistentState parameter determines whether the persistent state is fetched when joining a cluster. Set this to true if using a replication and invalidation in a clustered environment. Additionally, if multiple cache stores are chained, only one cache store can have this property enabled. If a shared cache store is used, the cache does not allow a persistent state transfer despite this property being set to true . The fetchPersistentState parameter is false by default. The ignoreModifications parameter determines whether operations that modify the cache (e.g. put, remove, clear, store, etc.) do not affect the cache store. As a result, the cache store can become out of sync with the cache. 
The purgeOnStartup parameter specifies whether the cache store is purged when initially started. The key2StringMapper parameter specifies the class name of the Key2StringMapper used to map keys to strings for the database tables. The binaryKeyedTable and stringKeyedTable Elements The dropOnExit parameter specifies whether the database tables are dropped upon shutdown. The createOnStart parameter specifies whether the database tables are created by the store on startup. The prefix parameter defines the string prepended to name of the target cache when composing the name of the cache bucket table. The idColumn, dataColumn, and timestampColumn Elements The name parameter specifies the name of the column used. The type parameter specifies the type of the column used. The store Element The class parameter specifies the class name of the cache store implementation. The preload parameter specifies whether to load entries into the cache during start up. Valid values for this parameter are true and false . The shared parameter specifies whether the cache store is shared. This is used when multiple cache instances share a cache store. Valid values for this parameter are true and false . The property Element The name parameter specifies the name of the property. The value parameter specifies the value of the property. Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/Cache_Store_Configuration_Details_Library_Mode |
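To tie these parameters together, the following library-mode XML fragment is a rough sketch of a single file cache store with asynchronous write-behind enabled. Treat it as illustrative only; element and attribute names can differ between JBoss Data Grid releases, so validate it against the configuration schema shipped with your version.

<namedCache name="exampleCache">
   <persistence passivation="false">
      <singleFile location="/tmp/exampleStore"
                  maxEntries="5000"
                  shared="false"
                  preload="true"
                  fetchPersistentState="false"
                  purgeOnStartup="false">
         <async enabled="true"
                threadPoolSize="4"
                modificationQueueSize="1024"/>
      </singleFile>
   </persistence>
</namedCache>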
Chapter 3. Node Feature Discovery Operator | Chapter 3. Node Feature Discovery Operator Learn about the Node Feature Discovery (NFD) Operator and how you can use it to expose node-level information by orchestrating Node Feature Discovery, a Kubernetes add-on for detecting hardware features and system configuration. The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in an OpenShift Container Platform cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on. The NFD Operator can be found on the Operator Hub by searching for "Node Feature Discovery". 3.1. Installing the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the NFD daemon set. As a cluster administrator, you can install the NFD Operator by using the OpenShift Container Platform CLI or the web console. 3.1.1. Installing the NFD Operator using the CLI As a cluster administrator, you can install the NFD Operator using the CLI. Prerequisites An OpenShift Container Platform cluster Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NFD Operator. Create the following Namespace custom resource (CR) that defines the openshift-nfd namespace, and then save the YAML in the nfd-namespace.yaml file. Set cluster-monitoring to "true" . apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: "true" Create the namespace by running the following command: USD oc create -f nfd-namespace.yaml Install the NFD Operator in the namespace you created in the step by creating the following objects: Create the following OperatorGroup CR and save the YAML in the nfd-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd Create the OperatorGroup CR by running the following command: USD oc create -f nfd-operatorgroup.yaml Create the following Subscription CR and save the YAML in the nfd-sub.yaml file: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: "stable" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription object by running the following command: USD oc create -f nfd-sub.yaml Change to the openshift-nfd project: USD oc project openshift-nfd Verification To verify that the Operator deployment is successful, run: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m A successful deployment shows a Running status. 3.1.2. Installing the NFD Operator using the web console As a cluster administrator, you can install the NFD Operator using the web console. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Node Feature Discovery from the list of available Operators, and then click Install . On the Install Operator page, select A specific namespace on the cluster , and then click Install . You do not need to create a namespace because it is created for you. Verification To verify that the NFD Operator installed successfully: Navigate to the Operators Installed Operators page. 
Ensure that Node Feature Discovery is listed in the openshift-nfd project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. Troubleshooting If the Operator does not appear as installed, troubleshoot further: Navigate to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-nfd project. 3.2. Using the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the Node-Feature-Discovery daemon set by watching for a NodeFeatureDiscovery custom resource (CR). Based on the NodeFeatureDiscovery CR, the Operator creates the operand (NFD) components in the selected namespace. You can edit the CR to use another namespace, image, image pull policy, and nfd-worker-conf config map, among other options. As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift CLI ( oc ) or the web console. 3.2.1. Creating a NodeFeatureDiscovery CR by using the CLI As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI ( oc ). Note The spec.operand.image setting requires a -rhel9 image to be defined for use with OpenShift Container Platform releases 4.13 and later. The following example shows the use of -rhel9 to acquire the correct image. Prerequisites You have access to an OpenShift Container Platform cluster You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You installed the NFD Operator. 
Procedure Create a NodeFeatureDiscovery CR: Example NodeFeatureDiscovery CR apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: "" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.13 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] Create the NodeFeatureDiscovery CR by running the following command: USD oc apply -f <filename> Verification Check that the NodeFeatureDiscovery CR was created by running the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s A successful deployment shows a Running status. 3.2.2. Creating a NodeFeatureDiscovery CR by using the CLI in a disconnected environment As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You installed the NFD Operator. You have access to a mirror registry with the required images. You installed the skopeo CLI tool. Procedure Determine the digest of the registry image: Run the following command: USD skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version> Example command USD skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12 Inspect the output to identify the image digest: Example output { ... "Digest": "sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef", ... 
} Use the skopeo CLI tool to copy the image from registry.redhat.io to your mirror registry, by running the following command: skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> Example command skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef Create a NodeFeatureDiscovery CR: Example NodeFeatureDiscovery CR apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] Create the NodeFeatureDiscovery CR by running the following command: USD oc apply -f <filename> Verification Check the status of the NodeFeatureDiscovery CR by running the following command: USD oc get nodefeaturediscovery nfd-instance -o yaml Check that the pods are running without ImagePullBackOff errors by running the following command: USD oc get pods -n <nfd_namespace> 3.2.3. Creating a NodeFeatureDiscovery CR by using the web console As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster You logged in as a user with cluster-admin privileges. You installed the NFD Operator. Procedure Navigate to the Operators Installed Operators page. In the Node Feature Discovery section, under Provided APIs , click Create instance . Edit the values of the NodeFeatureDiscovery CR. Click Create . 3.3. Configuring the Node Feature Discovery Operator 3.3.1. core The core section contains common configuration settings that are not specific to any particular feature source. core.sleepInterval core.sleepInterval specifies the interval between consecutive passes of feature detection or re-detection, and thus also the interval between node re-labeling. A non-positive value implies infinite sleep interval; no re-detection or re-labeling is done. This value is overridden by the deprecated --sleep-interval command line flag, if specified. Example usage core: sleepInterval: 60s 1 The default value is 60s . core.sources core.sources specifies the list of enabled feature sources. 
A special value all enables all feature sources. This value is overridden by the deprecated --sources command line flag, if specified. Default: [all] Example usage core: sources: - system - custom core.labelWhiteList core.labelWhiteList specifies a regular expression for filtering feature labels based on the label name. Non-matching labels are not published. The regular expression is only matched against the basename part of the label, the part of the name after '/'. The label prefix, or namespace, is omitted. This value is overridden by the deprecated --label-whitelist command line flag, if specified. Default: null Example usage core: labelWhiteList: '^cpu-cpuid' core.noPublish Setting core.noPublish to true disables all communication with the nfd-master . It is effectively a dry run flag; nfd-worker runs feature detection normally, but no labeling requests are sent to nfd-master . This value is overridden by the --no-publish command line flag, if specified. Example: Example usage core: noPublish: true 1 The default value is false . core.klog The following options specify the logger configuration, most of which can be dynamically adjusted at run-time. The logger options can also be specified using command line flags, which take precedence over any corresponding config file options. core.klog.addDirHeader If set to true , core.klog.addDirHeader adds the file directory to the header of the log messages. Default: false Run-time configurable: yes core.klog.alsologtostderr Log to standard error as well as files. Default: false Run-time configurable: yes core.klog.logBacktraceAt When logging hits line file:N, emit a stack trace. Default: empty Run-time configurable: yes core.klog.logDir If non-empty, write log files in this directory. Default: empty Run-time configurable: no core.klog.logFile If not empty, use this log file. Default: empty Run-time configurable: no core.klog.logFileMaxSize core.klog.logFileMaxSize defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0 , the maximum file size is unlimited. Default: 1800 Run-time configurable: no core.klog.logtostderr Log to standard error instead of files Default: true Run-time configurable: yes core.klog.skipHeaders If core.klog.skipHeaders is set to true , avoid header prefixes in the log messages. Default: false Run-time configurable: yes core.klog.skipLogHeaders If core.klog.skipLogHeaders is set to true , avoid headers when opening log files. Default: false Run-time configurable: no core.klog.stderrthreshold Logs at or above this threshold go to stderr. Default: 2 Run-time configurable: yes core.klog.v core.klog.v is the number for the log level verbosity. Default: 0 Run-time configurable: yes core.klog.vmodule core.klog.vmodule is a comma-separated list of pattern=N settings for file-filtered logging. Default: empty Run-time configurable: yes 3.3.2. sources The sources section contains feature source specific configuration parameters. sources.cpu.cpuid.attributeBlacklist Prevent publishing cpuid features listed in this option. This value is overridden by sources.cpu.cpuid.attributeWhitelist , if specified. Default: [BMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT, RDRAND, RDSEED, RDTSCP, SGX, SGXLC, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSSE3] Example usage sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT] sources.cpu.cpuid.attributeWhitelist Only publish the cpuid features listed in this option. 
sources.cpu.cpuid.attributeWhitelist takes precedence over sources.cpu.cpuid.attributeBlacklist . Default: empty Example usage sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL] sources.kernel.kconfigFile sources.kernel.kconfigFile is the path of the kernel config file. If empty, NFD runs a search in the well-known standard locations. Default: empty Example usage sources: kernel: kconfigFile: "/path/to/kconfig" sources.kernel.configOpts sources.kernel.configOpts represents kernel configuration options to publish as feature labels. Default: [NO_HZ, NO_HZ_IDLE, NO_HZ_FULL, PREEMPT] Example usage sources: kernel: configOpts: [NO_HZ, X86, DMI] sources.pci.deviceClassWhitelist sources.pci.deviceClassWhitelist is a list of PCI device class IDs for which to publish a label. It can be specified as a main class only (for example, 03 ) or full class-subclass combination (for example 0300 ). The former implies that all subclasses are accepted. The format of the labels can be further configured with deviceLabelFields . Default: ["03", "0b40", "12"] Example usage sources: pci: deviceClassWhitelist: ["0200", "03"] sources.pci.deviceLabelFields sources.pci.deviceLabelFields is the set of PCI ID fields to use when constructing the name of the feature label. Valid fields are class , vendor , device , subsystem_vendor and subsystem_device . Default: [class, vendor] Example usage sources: pci: deviceLabelFields: [class, vendor, device] With the example config above, NFD would publish labels such as feature.node.kubernetes.io/pci-<class-id>_<vendor-id>_<device-id>.present=true sources.usb.deviceClassWhitelist sources.usb.deviceClassWhitelist is a list of USB device class IDs for which to publish a feature label. The format of the labels can be further configured with deviceLabelFields . Default: ["0e", "ef", "fe", "ff"] Example usage sources: usb: deviceClassWhitelist: ["ef", "ff"] sources.usb.deviceLabelFields sources.usb.deviceLabelFields is the set of USB ID fields from which to compose the name of the feature label. Valid fields are class , vendor , and device . Default: [class, vendor, device] Example usage sources: pci: deviceLabelFields: [class, vendor] With the example config above, NFD would publish labels like: feature.node.kubernetes.io/usb-<class-id>_<vendor-id>.present=true . sources.custom sources.custom is the list of rules to process in the custom feature source to create user-specific labels. Default: empty Example usage source: custom: - name: "my.custom.feature" matchOn: - loadedKMod: ["e1000e"] - pciId: class: ["0200"] vendor: ["8086"] 3.4. About the NodeFeatureRule custom resource NodeFeatureRule objects are a NodeFeatureDiscovery custom resource designed for rule-based custom labeling of nodes. Some use cases include application-specific labeling or distribution by hardware vendors to create specific labels for their devices. NodeFeatureRule objects provide a method to create vendor- or application-specific labels and taints. It uses a flexible rule-based mechanism for creating labels and optionally taints based on node features. 3.5. Using the NodeFeatureRule custom resource Create a NodeFeatureRule object to label nodes if a set of rules match the conditions. 
Procedure Create a custom resource file named nodefeaturerule.yaml that contains the following text: apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: "example rule" labels: "example-custom-feature": "true" # Label is created if all of the rules below match matchFeatures: # Match if "veth" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: ["8086"]} This custom resource specifies that labelling occurs when the veth module is loaded and any PCI device with vendor code 8086 exists in the cluster. Apply the nodefeaturerule.yaml file to your cluster by running the following command: USD oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml The example applies the feature label on nodes with the veth module loaded and any PCI device with vendor code 8086 exists. Note A relabeling delay of up to 1 minute might occur. 3.6. Using the NFD Topology Updater The Node Feature Discovery (NFD) Topology Updater is a daemon responsible for examining allocated resources on a worker node. It accounts for resources that are available to be allocated to new pod on a per-zone basis, where a zone can be a Non-Uniform Memory Access (NUMA) node. The NFD Topology Updater communicates the information to nfd-master, which creates a NodeResourceTopology custom resource (CR) corresponding to all of the worker nodes in the cluster. One instance of the NFD Topology Updater runs on each node of the cluster. To enable the Topology Updater workers in NFD, set the topologyupdater variable to true in the NodeFeatureDiscovery CR, as described in the section Using the Node Feature Discovery Operator . 3.6.1. NodeResourceTopology CR When run with NFD Topology Updater, NFD creates custom resource instances corresponding to the node resource hardware topology, such as: apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: ["SingleNUMANodeContainerLevel"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 3.6.2. NFD Topology Updater command line flags To view available command line flags, run the nfd-topology-updater -help command. For example, in a podman container, run the following command: USD podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help -ca-file The -ca-file flag is one of the three flags, together with the -cert-file and `-key-file`flags, that controls the mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS root certificate that is used for verifying the authenticity of nfd-master. Default: empty Important The -ca-file flag must be specified together with the -cert-file and -key-file flags. 
Example USD nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -cert-file The -cert-file flag is one of the three flags, together with the -ca-file and -key-file flags , that controls mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS certificate presented for authenticating outgoing requests. Default: empty Important The -cert-file flag must be specified together with the -ca-file and -key-file flags. Example USD nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt -h, -help Print usage and exit. -key-file The -key-file flag is one of the three flags, together with the -ca-file and -cert-file flags, that controls the mutual TLS authentication on the NFD Topology Updater. This flag specifies the private key corresponding the given certificate file, or -cert-file , that is used for authenticating outgoing requests. Default: empty Important The -key-file flag must be specified together with the -ca-file and -cert-file flags. Example USD nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt -kubelet-config-file The -kubelet-config-file specifies the path to the Kubelet's configuration file. Default: /host-var/lib/kubelet/config.yaml Example USD nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml -no-publish The -no-publish flag disables all communication with the nfd-master, making it a dry run flag for nfd-topology-updater. NFD Topology Updater runs resource hardware topology detection normally, but no CR requests are sent to nfd-master. Default: false Example USD nfd-topology-updater -no-publish 3.6.2.1. -oneshot The -oneshot flag causes the NFD Topology Updater to exit after one pass of resource hardware topology detection. Default: false Example USD nfd-topology-updater -oneshot -no-publish -podresources-socket The -podresources-socket flag specifies the path to the Unix socket where kubelet exports a gRPC service to enable discovery of in-use CPUs and devices, and to provide metadata for them. Default: /host-var/liblib/kubelet/pod-resources/kubelet.sock Example USD nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock -server The -server flag specifies the address of the nfd-master endpoint to connect to. Default: localhost:8080 Example USD nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443 -server-name-override The -server-name-override flag specifies the common name (CN) which to expect from the nfd-master TLS certificate. This flag is mostly intended for development and debugging purposes. Default: empty Example USD nfd-topology-updater -server-name-override=localhost -sleep-interval The -sleep-interval flag specifies the interval between resource hardware topology re-examination and custom resource updates. A non-positive value implies infinite sleep interval and no re-detection is done. Default: 60s Example USD nfd-topology-updater -sleep-interval=1h -version Print version and exit. -watch-namespace The -watch-namespace flag specifies the namespace to ensure that resource hardware topology examination only happens for the pods running in the specified namespace. Pods that are not running in the specified namespace are not considered during resource accounting. This is particularly useful for testing and debugging purposes. 
A * value means that all of the pods across all namespaces are considered during the accounting process. Default: * Example USD nfd-topology-updater -watch-namespace=rte | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: \"true\"",
"oc create -f nfd-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd",
"oc create -f nfd-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nfd-sub.yaml",
"oc project openshift-nfd",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.13 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12",
"{ \"Digest\": \"sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\", }",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get nodefeaturediscovery nfd-instance -o yaml",
"oc get pods -n <nfd_namespace>",
"core: sleepInterval: 60s 1",
"core: sources: - system - custom",
"core: labelWhiteList: '^cpu-cpuid'",
"core: noPublish: true 1",
"sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]",
"sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]",
"sources: kernel: kconfigFile: \"/path/to/kconfig\"",
"sources: kernel: configOpts: [NO_HZ, X86, DMI]",
"sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]",
"sources: pci: deviceLabelFields: [class, vendor, device]",
"sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]",
"sources: pci: deviceLabelFields: [class, vendor]",
"source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: \"example rule\" labels: \"example-custom-feature\": \"true\" # Label is created if all of the rules below match matchFeatures: # Match if \"veth\" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: [\"8086\"]}",
"oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml",
"apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3",
"podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help",
"nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key",
"nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml",
"nfd-topology-updater -no-publish",
"nfd-topology-updater -oneshot -no-publish",
"nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock",
"nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443",
"nfd-topology-updater -server-name-override=localhost",
"nfd-topology-updater -sleep-interval=1h",
"nfd-topology-updater -watch-namespace=rte"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator |
Chapter 9. Virtual machines | Chapter 9. Virtual machines 9.1. Creating virtual machines Use one of these procedures to create a virtual machine: Quick Start guided tour Quick create from the Catalog Pasting a pre-configured YAML file with the virtual machine wizard Using the CLI Warning Do not create virtual machines in openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix. When you create virtual machines from the web console, select a virtual machine template that is configured with a boot source. Virtual machine templates with a boot source are labeled as Available boot source or they display a customized label text. Using templates with an available boot source expedites the process of creating virtual machines. Templates without a boot source are labeled as Boot source required . You can use these templates if you complete the steps for adding a boot source to the virtual machine . Important Due to differences in storage behavior, some virtual machine templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for any templates or virtual machines that use data volumes or storage profiles. 9.1.1. Using a Quick Start to create a virtual machine The web console provides Quick Starts with instructional guided tours for creating virtual machines. You can access the Quick Starts catalog by selecting the Help menu in the Administrator perspective to view the Quick Starts catalog. When you click on a Quick Start tile and begin the tour, the system guides you through the process. Tasks in a Quick Start begin with selecting a Red Hat template. Then, you can add a boot source and import the operating system image. Finally, you can save the custom template and use it to create a virtual machine. Prerequisites Access to the website where you can download the URL link for the operating system image. Procedure In the web console, select Quick Starts from the Help menu. Click on a tile in the Quick Starts catalog. For example: Creating a Red Hat Linux Enterprise Linux virtual machine . Follow the instructions in the guided tour and complete the tasks for importing an operating system image and creating a virtual machine. The Virtualization VirtualMachines page displays the virtual machine. 9.1.2. Quick creating a virtual machine You can quickly create a virtual machine (VM) by using a template with an available boot source. Procedure Click Virtualization Catalog in the side menu. Click Boot source available to filter templates with boot sources. Note By default, the template list will show only Default Templates . Click All Items when filtering to see all available templates for your chosen filters. Click a template to view its details. Click Quick Create VirtualMachine to create a VM from the template. The virtual machine Details page is displayed with the provisioning status. Verification Click Events to view a stream of events as the VM is provisioned. Click Console to verify that the VM booted successfully. 9.1.3. Creating a virtual machine from a customized template Some templates require additional parameters, for example, a PVC with a boot source. You can customize select parameters of a template to create a virtual machine (VM). Procedure In the web console, select a template: Click Virtualization Catalog in the side menu. Optional: Filter the templates by project, keyword, operating system, or workload profile. Click the template that you want to customize. 
Click Customize VirtualMachine . Specify parameters for your VM, including its Name and Disk source . You can optionally specify a data source to clone. Verification Click Events to view a stream of events as the VM is provisioned. Click Console to verify that the VM booted successfully. Refer to the virtual machine fields section when creating a VM from the web console. 9.1.3.1. Virtual machine fields The following table lists the virtual machine fields that you can edit in the OpenShift Container Platform web console: Table 9.1. Virtual machine fields Tab Fields or functionality Overview Description CPU/Memory Boot mode GPU devices Host devices YAML View, edit, or download the custom resource. Scheduling Node selector Tolerations Affinity rules Dedicated resources Eviction strategy Descheduler setting Environment Add, edit, or delete a config map, secret, or service account. Network Interfaces Add, edit, or delete a network interface. Disks Add, edit, or delete a disk. Scripts cloud-init settings Authorized SSH key Sysprep answer files Metadata Labels Annotations 9.1.3.1.1. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. Select the binding method suitable for the network interface: Default pod network: masquerade Linux bridge network: bridge SR-IOV network: SR-IOV MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 9.1.3.2. Storage fields Name Selection Description Source Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile . Note Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization. To manually specify Volume Mode and Access Mode , you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default. 
Name Mode description Parameter Parameter description Volume Mode Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem . Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode Access mode of the persistent volume. ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This is required for some features, such as live migration of virtual machines between nodes. ReadOnlyMany (ROX) Volume can be mounted as read only by many nodes. 9.1.3.3. Cloud-init fields Name Description Hostname Sets a specific hostname for the virtual machine. Authorized SSH Keys The user's public key that is copied to ~/.ssh/authorized_keys on the virtual machine. Custom script Replaces other options with a field in which you paste a custom cloud-init script. To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile . 9.1.3.4. Pasting in a pre-configured YAML file to create a virtual machine Create a virtual machine by writing or pasting a YAML configuration file. A valid example virtual machine configuration is provided by default whenever you open the YAML edit screen. If your YAML configuration is invalid when you click Create , an error message indicates the parameter in which the error occurs. Only one error is shown at a time. Note Navigating away from the YAML screen while editing cancels any changes to the configuration you have made. Procedure Click Virtualization VirtualMachines from the side menu. Click Create and select With YAML . Write or paste your virtual machine configuration in the editable window. Alternatively, use the example virtual machine provided by default in the YAML screen. Optional: Click Download to download the YAML configuration file in its present state. Click Create to create the virtual machine. The virtual machine is listed on the VirtualMachines page. 9.1.4. Using the CLI to create a virtual machine You can create a virtual machine from a virtualMachine manifest. Procedure Edit the VirtualMachine manifest for your VM. For example, the following manifest configures a Red Hat Enterprise Linux (RHEL) VM: Example 9.1. Example manifest for a RHEL VM 1 Specify the name of the virtual machine. 2 Specify the password for cloud-user. Create a virtual machine by using the manifest file: USD oc create -f <vm_manifest_file>.yaml Optional: Start the virtual machine: USD virtctl start <vm_name> 9.1.5. Virtual machine storage volume types Storage volume type Description ephemeral A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim . The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way. persistentVolumeClaim Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions. Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. 
There are some requirements for the disk to be used within a PVC. dataVolume Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready. Specify type: dataVolume or type: "" . If you specify any other value for type , such as persistentVolumeClaim , a warning is displayed, and the virtual machine does not start. cloudInitNoCloud Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk. containerDisk References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched. A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage. Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. Note A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines. emptyDisk Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk. The disk capacity size must also be provided. 9.1.6. About RunStrategies for virtual machines A RunStrategy for virtual machines determines a virtual machine instance's (VMI) behavior, depending on a series of conditions. The spec.runStrategy setting exists in the virtual machine configuration process as an alternative to the spec.running setting. The spec.runStrategy setting allows greater flexibility for how VMIs are created and managed, in contrast to the spec.running setting with only true or false responses. However, the two settings are mutually exclusive. Only either spec.running or spec.runStrategy can be used. An error occurs if both are used. There are four defined RunStrategies. Always A VMI is always present when a virtual machine is created. A new VMI is created if the original stops for any reason, which is the same behavior as spec.running: true . RerunOnFailure A VMI is re-created if the instance fails due to an error. The instance is not re-created if the virtual machine stops successfully, such as when it shuts down. Manual The start , stop , and restart virtctl client commands can be used to control the VMI's state and existence. Halted No VMI is present when a virtual machine is created, which is the same behavior as spec.running: false . Different combinations of the start , stop and restart virtctl commands affect which RunStrategy is used. The following table follows a VM's transition from different states. The first column shows the VM's initial RunStrategy . Each additional column shows a virtctl command and the new RunStrategy after that command is run. 
Initial RunStrategy start stop restart Always - Halted Always RerunOnFailure - Halted RerunOnFailure Manual Manual Manual Manual Halted Always - - Note In OpenShift Virtualization clusters installed using installer-provisioned infrastructure, when a node fails the MachineHealthCheck and becomes unavailable to the cluster, VMs with a RunStrategy of Always or RerunOnFailure are rescheduled on a new node. apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: RunStrategy: Always 1 template: ... 1 The VMI's current RunStrategy setting. 9.1.7. Additional resources The VirtualMachineSpec definition in the KubeVirt v0.53.2 API Reference provides broader context for the parameters and hierarchy of the virtual machine specification. Note The KubeVirt API Reference is the upstream project reference and might contain parameters that are not supported in OpenShift Virtualization. Enable the CPU Manager to use the high-performance workload profile. See Prepare a container disk before adding it to a virtual machine as a containerDisk volume. See Deploying machine health checks for further details on deploying and enabling machine health checks. See Installer-provisioned infrastructure overview for further details on installer-provisioned infrastructure. Customizing the storage profile 9.2. Editing virtual machines You can update a virtual machine configuration using either the YAML editor in the web console or the OpenShift CLI on the command line. You can also update a subset of the parameters in the Virtual Machine Details screen. 9.2.1. Editing a virtual machine in the web console Edit select values of a virtual machine in the web console by clicking the pencil icon to the relevant field. Other values can be edited using the CLI. You can edit labels and annotations for any templates, including those provided by Red Hat. Other fields are editable for user-customized templates only. Procedure Click Virtualization VirtualMachines from the side menu. Optional: Use the Filter drop-down menu to sort the list of virtual machines by attributes such as status, template, node, or operating system (OS). Select a virtual machine to open the VirtualMachine details page. Click any field that has the pencil icon, which indicates that the field is editable. For example, click the current Boot mode setting, such as BIOS or UEFI, to open the Boot mode window and select an option from the list. Make the relevant changes and click Save . Note If the virtual machine is running, changes to Boot Order or Flavor will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the relevant field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 9.2.1.1. Virtual machine fields The following table lists the virtual machine fields that you can edit in the OpenShift Container Platform web console: Table 9.2. Virtual machine fields Tab Fields or functionality Details Labels Annotations Description CPU/Memory Boot mode Boot order GPU devices Host devices SSH access YAML View, edit, or download the custom resource. Scheduling Node selector Tolerations Affinity rules Dedicated resources Eviction strategy Descheduler setting Network Interfaces Add, edit, or delete a network interface. Disks Add, edit, or delete a disk. Scripts cloud-init settings Snapshots Add, restore, or delete a virtual machine snapshot. 9.2.2. 
Editing a virtual machine YAML configuration using the web console You can edit the YAML configuration of a virtual machine in the web console. Some parameters cannot be modified. If you click Save with an invalid configuration, an error message indicates the parameter that cannot be changed. Note Navigating away from the YAML screen while editing cancels any changes to the configuration you have made. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine. Click the YAML tab to display the editable configuration. Optional: You can click Download to download the YAML file locally in its current state. Edit the file and click Save . A confirmation message shows that the modification has been successful and includes the updated version number for the object. 9.2.3. Editing a virtual machine YAML configuration using the CLI Use this procedure to edit a virtual machine YAML configuration using the CLI. Prerequisites You configured a virtual machine with a YAML object configuration file. You installed the oc CLI. Procedure Run the following command to update the virtual machine configuration: USD oc edit <object_type> <object_ID> Open the object configuration. Edit the YAML. If you edit a running virtual machine, you need to do one of the following: Restart the virtual machine. Run the following command for the new configuration to take effect: USD oc apply <object_type> <object_ID> 9.2.4. Adding a virtual disk to a virtual machine Use this procedure to add a virtual disk to a virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details screen. Click the Disks tab and then click Add disk . In the Add disk window, specify the Source , Name , Size , Type , Interface , and Storage Class . Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Click Add . Note If the virtual machine is running, the new disk is in the pending restart state and will not be attached until you restart the virtual machine. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile . 9.2.4.1. Editing CD-ROMs for VirtualMachines Use the following procedure to edit CD-ROMs for virtual machines. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details screen. Click the Disks tab. Click the Options menu for the CD-ROM that you want to edit and select Edit . In the Edit CD-ROM window, edit the fields: Source , Persistent Volume Claim , Name , Type , and Interface . Click Save . 9.2.4.2. Storage fields Name Selection Description Source Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. 
Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile . Note Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization. To manually specify Volume Mode and Access Mode , you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default. Name Mode description Parameter Parameter description Volume Mode Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem . Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode Access mode of the persistent volume. ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This is required for some features, such as live migration of virtual machines between nodes. ReadOnlyMany (ROX) Volume can be mounted as read only by many nodes. 9.2.5. Adding a network interface to a virtual machine Use this procedure to add a network interface to a virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details screen. Click the Network Interfaces tab. Click Add Network Interface . In the Add Network Interface window, specify the Name , Model , Network , Type , and MAC Address of the network interface. Click Add . Note If the virtual machine is running, the new network interface is in the pending restart state and changes will not take effect until you restart the virtual machine. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 9.2.5.1. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. Select the binding method suitable for the network interface: Default pod network: masquerade Linux bridge network: bridge SR-IOV network: SR-IOV MAC Address MAC address for the network interface controller. 
If a MAC address is not specified, one is assigned automatically. 9.2.6. Additional resources Customizing the storage profile 9.3. Editing boot order You can update the values for a boot order list by using the web console or the CLI. With Boot Order in the Virtual Machine Overview page, you can: Select a disk or network interface controller (NIC) and add it to the boot order list. Edit the order of the disks or NICs in the boot order list. Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources. 9.3.1. Adding items to a boot order list in the web console Add items to a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine. Add any additional disks or NICs to the boot order list. Click Save . Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 9.3.2. Editing a boot order list in the web console Edit the boot order list in the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Choose the appropriate method to move the item in the boot order list: If you do not use a screen reader, hover over the arrow icon to the item that you want to move, drag the item up or down, and drop it in a location of your choice. If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice. Click Save . Note If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 9.3.3. Editing a boot order list in the YAML configuration file Edit the boot order list in a YAML configuration file by using the CLI. Procedure Open the YAML configuration file for the virtual machine by running the following command: USD oc edit vm example Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example: disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default 1 The boot order value specified for the disk. 2 The boot order value specified for the network interface controller. 
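For reference, the same example is sketched below with conventional YAML indentation; this fragment sits under spec.template.spec in the VirtualMachine manifest, the field name is bootOrder (camelCase) for both disks and interfaces, and the device names and MAC address are illustrative values only:
domain:
  devices:
    disks:
    - bootOrder: 1
      disk:
        bus: virtio
      name: containerdisk
    - disk:
        bus: virtio
      name: cloudinitdisk
    - cdrom:
        bus: virtio
      name: cd-drive-1
    interfaces:
    - bootOrder: 2
      macAddress: '02:96:c4:00:00:01'
      masquerade: {}
      name: default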
Save the YAML file. Click reload the content to apply the updated boot order values from the YAML file to the boot order list in the web console. 9.3.4. Removing items from a boot order list in the web console Remove items from a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Click the Remove icon to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 9.4. Deleting virtual machines You can delete a virtual machine from the web console or by using the oc command line interface. 9.4.1. Deleting a virtual machine using the web console Deleting a virtual machine permanently removes it from the cluster. Note When you delete a virtual machine, the data volume it uses is automatically deleted. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click the Options menu of the virtual machine that you want to delete and select Delete . Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions Delete . In the confirmation pop-up window, click Delete to permanently delete the virtual machine. 9.4.2. Deleting a virtual machine by using the CLI You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines. Note When you delete a virtual machine, the data volume it uses is automatically deleted. Prerequisites Identify the name of the virtual machine that you want to delete. Procedure Delete the virtual machine by running the following command: USD oc delete vm <vm_name> Note This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace. 9.5. Managing virtual machine instances If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI). The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port. 9.5.1. About virtual machine instances A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI). A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. 
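For example, a minimal standalone VMI can be defined directly as a VirtualMachineInstance object. The following manifest is only a sketch; the name, container disk image, and memory request are illustrative:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-example
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1Gi
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/containerdisks/fedora:latest
Creating this object with USD oc create -f vmi-example.yaml starts a VMI that is not owned by any VirtualMachine, which is what makes it standalone.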
In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs: List standalone VMIs and their details. Edit labels and annotations for a standalone VMI. Delete a standalone VMI. When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects. Note Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs. 9.5.2. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). Procedure List all VMIs by running the following command: USD oc get vmis -A 9.5.3. Listing standalone virtual machine instances using the web console Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs). Note VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI. Procedure Click Virtualization VirtualMachines from the side menu. You can identify a standalone VMI by a dark colored badge to its name. 9.5.4. Editing a standalone virtual machine instance using the web console You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a standalone VMI to open the VirtualMachineInstance details page. On the Details tab, click the pencil icon beside Annotations or Labels . Make the relevant changes and click Save . 9.5.5. Deleting a standalone virtual machine instance using the CLI You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI). Prerequisites Identify the name of the VMI that you want to delete. Procedure Delete the VMI by running the following command: USD oc delete vmi <vmi_name> 9.5.6. Deleting a standalone virtual machine instance using the web console Delete a standalone virtual machine instance (VMI) from the web console. Procedure In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu. Click Actions Delete VirtualMachineInstance . In the confirmation pop-up window, click Delete to permanently delete the standalone VMI. 9.6. Controlling virtual machine states You can stop, start, restart, and unpause virtual machines from the web console. You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port. 9.6.1. Starting a virtual machine You can start a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to start. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row. 
To view comprehensive information about the selected virtual machine before you start it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions . Select Start . In the confirmation window, click Start to start the virtual machine. Note When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes. 9.6.2. Restarting a virtual machine You can restart a running virtual machine from the web console. Important To avoid errors, do not restart a virtual machine while it has a status of Importing . Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to restart. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row. To view comprehensive information about the selected virtual machine before you restart it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Restart . In the confirmation window, click Restart to restart the virtual machine. 9.6.3. Stopping a virtual machine You can stop a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to stop. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row. To view comprehensive information about the selected virtual machine before you stop it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Stop . In the confirmation window, click Stop to stop the virtual machine. 9.6.4. Unpausing a virtual machine You can unpause a paused virtual machine from the web console. Prerequisites At least one of your virtual machines must have a status of Paused . Note You can pause virtual machines by using the virtctl client. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to unpause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: In the Status column, click Paused . To view comprehensive information about the selected virtual machine before you unpause it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click the pencil icon that is located on the right side of Status . In the confirmation window, click Unpause to unpause the virtual machine. 9.7. Accessing virtual machine consoles OpenShift Virtualization provides different virtual machine consoles that you can use to accomplish different product tasks. You can access these consoles through the OpenShift Container Platform web console and by using CLI commands. Note Running concurrent VNC connections to a single virtual machine is not currently supported. 9.7.1. Accessing virtual machine consoles in the OpenShift Container Platform web console You can connect to virtual machines by using the serial console or the VNC console in the OpenShift Container Platform web console.
You can connect to Windows virtual machines by using the desktop viewer console, which uses RDP (remote desktop protocol), in the OpenShift Container Platform web console. 9.7.1.1. Connecting to the serial console Connect to the serial console of a running virtual machine from the Console tab on the VirtualMachine details page of the web console. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Console tab. The VNC console opens by default. Click Disconnect to ensure that only one console session is open at a time. Otherwise, the VNC console session remains active in the background. Click the VNC Console drop-down list and select Serial Console . Click Disconnect to end the console session. Optional: Open the serial console in a separate window by clicking Open Console in New Window . 9.7.1.2. Connecting to the VNC console Connect to the VNC console of a running virtual machine from the Console tab on the VirtualMachine details page of the web console. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Console tab. The VNC console opens by default. Optional: Open the VNC console in a separate window by clicking Open Console in New Window . Optional: Send key combinations to the virtual machine by clicking Send Key . Click outside the console window and then click Disconnect to end the session. 9.7.1.3. Connecting to a Windows virtual machine with RDP The Desktop viewer console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines. To connect to a Windows virtual machine with RDP, download the console.rdp file for the virtual machine from the Console tab on the VirtualMachine details page of the web console and supply it to your preferred RDP client. Prerequisites A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers. An RDP client installed on a machine on the same network as the Windows virtual machine. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click a Windows virtual machine to open the VirtualMachine details page. Click the Console tab. From the list of consoles, select Desktop viewer . Click Launch Remote Desktop to download the console.rdp file. Reference the console.rdp file in your preferred RDP client to connect to the Windows virtual machine. 9.7.1.4. Switching between virtual machine displays If your Windows virtual machine (VM) has a vGPU attached, you can switch between the default display and the vGPU display by using the web console. Prerequisites The mediated device is configured in the HyperConverged custom resource and assigned to the VM. The VM is running. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines Select a Windows virtual machine to open the Overview screen. Click the Console tab. From the list of consoles, select VNC console . Choose the appropriate key combination from the Send Key list: To access the default VM display, select Ctl + Alt+ 1 . To access the vGPU display, select Ctl + Alt + 2 . Additional resources Configuring mediated devices 9.7.2. Accessing virtual machine consoles by using CLI commands 9.7.2.1. 
Accessing a virtual machine via SSH by using virtctl You can use the virtctl ssh command to forward SSH traffic to a virtual machine (VM). Note Heavy SSH traffic on the control plane can slow down the API server. If you regularly need a large number of connections, use a dedicated Kubernetes Service object to access the virtual machine. Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed the virtctl client. The virtual machine you want to access is running. You are in the same project as the VM. Procedure Use the ssh-keygen command to generate an SSH public key pair: USD ssh-keygen -f <key_file> 1 1 Specify the file in which to store the keys. Create an SSH authentication secret which contains the SSH public key to access the VM: USD oc create secret generic my-pub-key --from-file=key1=<key_file>.pub Add a reference to the secret in the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: testvm spec: running: true template: spec: accessCredentials: - sshPublicKey: source: secret: secretName: my-pub-key 1 propagationMethod: configDrive: {} 2 # ... 1 Reference to the SSH authentication Secret object. 2 The SSH public key is injected into the VM as cloud-init metadata using the configDrive provider. Restart the VM to apply your changes. Run the following command to access the VM via SSH: USD virtctl ssh -i <key_file> <vm_username>@<vm_name> Optional: To securely transfer files to or from the VM, use the following commands: Copy a file from your machine to the VM USD virtctl scp -i <key_file> <filename> <vm_username>@<vm_name>: Copy a file from the VM to your machine USD virtctl scp -i <key_file> <vm_username@<vm_name>:<filename> . Additional resources Creating a service to expose a virtual machine Understanding secrets 9.7.2.2. Accessing the serial console of a virtual machine instance The virtctl console command opens a serial console to the specified virtual machine instance. Prerequisites The virt-viewer package must be installed. The virtual machine instance you want to access must be running. Procedure Connect to the serial console with virtctl : USD virtctl console <VMI> 9.7.2.3. Accessing the graphical console of a virtual machine instances with VNC The virtctl client utility can use the remote-viewer function to open a graphical console to a running virtual machine instance. This capability is included in the virt-viewer package. Prerequisites The virt-viewer package must be installed. The virtual machine instance you want to access must be running. Note If you use virtctl via SSH on a remote machine, you must forward the X session to your machine. Procedure Connect to the graphical interface with the virtctl utility: USD virtctl vnc <VMI> If the command failed, try using the -v flag to collect troubleshooting information: USD virtctl vnc <VMI> -v 4 9.7.2.4. Connecting to a Windows virtual machine with an RDP console Create a Kubernetes Service object to connect to a Windows virtual machine (VM) by using your local Remote Desktop Protocol (RDP) client. Prerequisites A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent object is included in the VirtIO drivers. An RDP client installed on your local machine. 
Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 # ... 1 Add the label special: key in the spec.template.metadata.labels section. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: rdpservice 1 namespace: example-namespace 2 spec: ports: - targetPort: 3389 3 protocol: TCP selector: special: key 4 type: NodePort 5 # ... 1 The name of the Service object. 2 The namespace where the Service object resides. This must match the metadata.namespace field of the VirtualMachine manifest. 3 The VM port to be exposed by the service. It must reference an open port if a port list is defined in the VM manifest. 4 The reference to the label that you added in the spec.template.metadata.labels stanza of the VirtualMachine manifest. 5 The type of service. Save the Service manifest file. Create the service by running the following command: USD oc create -f <service_name>.yaml Start the VM. If the VM is already running, restart it. Query the Service object to verify that it is available: USD oc get service -n example-namespace Example output for NodePort service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rdpservice NodePort 172.30.232.73 <none> 3389:30000/TCP 5m Run the following command to obtain the IP address for the node: USD oc get node <node_name> -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP node01 Ready worker 6d22h v1.24.0 192.168.55.101 <none> Specify the node IP address and the assigned port in your preferred RDP client. Enter the user name and password to connect to the Windows virtual machine. 9.8. Automating Windows installation with sysprep You can use Microsoft DVD images and sysprep to automate the installation, setup, and software provisioning of Windows virtual machines. 9.8.1. Using a Windows DVD to create a VM disk image Microsoft does not provide disk images for download, but you can create a disk image using a Windows DVD. This disk image can then be used to create virtual machines. Procedure In the OpenShift Virtualization web console, click Storage PersistentVolumeClaims Create PersistentVolumeClaim With Data upload form . Select the intended project. Set the Persistent Volume Claim Name . Upload the VM disk image from the Windows DVD. The image is now available as a boot source to create a new Windows VM. 9.8.2. Using a disk image to install Windows You can use a disk image to install Windows on your virtual machine. Prerequisites You must create a disk image using a Windows DVD. You must create an autounattend.xml answer file. See the Microsoft documentation for details. Procedure In the OpenShift Container Platform console, click Virtualization Catalog from the side menu. Select a Windows template and click Customize VirtualMachine . Select Upload (Upload a new file to a PVC) from the Disk source list and browse to the DVD image. Click Review and create VirtualMachine . Clear Clone available operating system source to this Virtual Machine . Clear Start this VirtualMachine after creation . On the Sysprep section of the Scripts tab, click Edit . 
Browse to the autounattend.xml answer file and click Save . Click Create VirtualMachine . On the YAML tab, replace running:false with runStrategy: RerunOnFailure and click Save . The VM will start with the sysprep disk containing the autounattend.xml answer file. 9.8.3. Generalizing a Windows VM using sysprep Generalizing an image allows that image to remove all system-specific configuration data when the image is deployed on a virtual machine (VM). Before generalizing the VM, you must ensure the sysprep tool cannot detect an answer file after the unattended Windows installation. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines . Select a Windows VM to open the VirtualMachine details page. Click the Disks tab. Click the Options menu for the sysprep disk and select Detach . Click Detach . Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool. Start the sysprep program by running the following command: %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs. You can now specialize the VM. 9.8.4. Specializing a Windows virtual machine Specializing a virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM. Prerequisites You must have a generalized Windows disk image. You must create an unattend.xml answer file. See the Microsoft documentation for details. Procedure In the OpenShift Container Platform console, click Virtualization Catalog . Select a Windows template and click Customize VirtualMachine . Select PVC (clone PVC) from the Disk source list. Specify the Persistent Volume Claim project and Persistent Volume Claim name of the generalized Windows image. Click Review and create VirtualMachine . Click the Scripts tab. In the Sysprep section, click Edit , browse to the unattend.xml answer file, and click Save . Click Create VirtualMachine . During the initial boot, Windows uses the unattend.xml answer file to specialize the VM. The VM is now ready to use. 9.8.5. Additional resources Creating virtual machines Microsoft, Sysprep (Generalize) a Windows installation Microsoft, generalize Microsoft, specialize 9.9. Triggering virtual machine failover by resolving a failed node If a node fails and machine health checks are not deployed on your cluster, virtual machines (VMs) with RunStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object. Note If you installed your cluster by using installer-provisioned infrastructure and you properly configured machine health checks: Failed nodes are automatically recycled. Virtual machines with RunStrategy set to Always or RerunOnFailure are automatically scheduled on healthy nodes. 9.9.1. Prerequisites A node where a virtual machine was running has the NotReady condition . The virtual machine that was running on the failed node has RunStrategy set to Always . You have installed the OpenShift CLI ( oc ). 9.9.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. 
You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 9.9.3. Verifying virtual machine failover After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI. 9.9.3.1. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). Procedure List all VMIs by running the following command: USD oc get vmis -A 9.10. Installing the QEMU guest agent on virtual machines The QEMU guest agent is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks. 9.10.1. Installing QEMU guest agent on a Linux virtual machine The qemu-guest-agent is widely available and available by default in Red Hat virtual machines. Install the agent and start the service. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure Access the virtual machine command line through one of the consoles or by SSH. Install the QEMU guest agent on the virtual machine: USD yum install -y qemu-guest-agent Ensure the service is persistent and start it: USD systemctl enable --now qemu-guest-agent 9.10.2. Installing QEMU guest agent on a Windows virtual machine For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existing or a new Windows installation. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. 
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 9.10.2.1. Installing VirtIO drivers on an existing Windows virtual machine Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine. Note This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps. Procedure Start the virtual machine and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the virtual machine to complete the driver installation. 9.10.2.2. Installing VirtIO drivers during Windows installation Install the VirtIO drivers from the attached SATA CD driver during Windows installation. Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Procedure Start the virtual machine and connect to a graphical console. Begin the Windows installation process. Select the Advanced installation. The storage destination will not be recognized until the driver is loaded. Click Load driver . The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Repeat the two steps for all required drivers. Complete the Windows installation. 9.11. Viewing the QEMU guest agent information for virtual machines When the QEMU guest agent runs on the virtual machine, you can use the web console to view information about the virtual machine, users, file systems, and secondary networks. 9.11.1. Prerequisites Install the QEMU guest agent on the virtual machine. 9.11.2. About the QEMU guest agent information in the web console When the QEMU guest agent is installed, the Overview and Details tabs on the VirtualMachine details page displays information about the hostname, operating system, time zone, and logged in users. The VirtualMachine details page shows information about the guest operating system installed on the virtual machine. The Details tab displays a table with information for logged in users. 
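The same guest agent data is also exposed in the VirtualMachineInstance status, if you prefer the command line. A sketch, assuming a VMI named my-vm (hypothetical name):

$ oc get vmi my-vm -o jsonpath='{.status.guestOSInfo}'

The guestOSInfo block is populated only while the QEMU guest agent is connected to the VM.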
The Disks tab displays a table with information for file systems. Note If the QEMU guest agent is not installed, the Overview and the Details tabs display information about the operating system that was specified when the virtual machine was created. 9.11.3. Viewing the QEMU guest agent information in the web console You can use the web console to view information for virtual machines that is passed by the QEMU guest agent to the host. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine name to open the VirtualMachine details page. Click the Details tab to view active users. Click the Disks tab to view information about the file systems. 9.12. Managing config maps, secrets, and service accounts in virtual machines You can use secrets, config maps, and service accounts to pass configuration data to virtual machines. For example, you can: Give a virtual machine access to a service that requires credentials by adding a secret to the virtual machine. Store non-confidential configuration data in a config map so that a pod or another object can consume the data. Allow a component to access the API server by associating a service account with that component. Note OpenShift Virtualization exposes secrets, config maps, and service accounts as virtual machine disks so that you can use them across platforms without additional overhead. 9.12.1. Adding a secret, config map, or service account to a virtual machine You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console. These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk. If the virtual machine is running, changes will not take effect until you restart the virtual machine. The newly added resources are marked as pending changes for both the Environment and Disks tab in the Pending Changes banner at the top of the page. Prerequisites The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. In the Environment tab, click Add Config Map, Secret or Service Account . Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource. Optional: Click Reload to revert the environment to its last saved state. Click Save . Verification On the VirtualMachine details page, click the Disks tab and verify that the secret, config map, or service account is included in the list of disks. Restart the virtual machine by clicking Actions Restart . You can now mount the secret, config map, or service account as you would mount any other disk. 9.12.2. Removing a secret, config map, or service account from a virtual machine Remove a secret, config map, or service account from a virtual machine by using the OpenShift Container Platform web console. Prerequisites You must have at least one secret, config map, or service account that is attached to a virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Environment tab. Find the item that you want to delete in the list, and click Remove on the right side of the item. Click Save . 
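Behind the web console form, each added resource becomes a disk and a matching volume in the VirtualMachine manifest. A minimal sketch of a config map exposed to a VM, assuming a config map named app-config and a disk named app-config-disk (both hypothetical names):

spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: app-config-disk    # presented to the guest as a disk
            disk:
              bus: virtio
      volumes:
      - name: app-config-disk        # must match the disk name above
        configMap:
          name: app-config           # config map in the same namespace as the VM

Secrets and service accounts follow the same pattern, using the secret and serviceAccount volume types instead of configMap.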
Note You can reset the form to the last saved state by clicking Reload . Verification On the VirtualMachine details page, click the Disks tab. Check to ensure that the secret, config map, or service account that you removed is no longer included in the list of disks. 9.12.3. Additional resources Providing sensitive data to pods Understanding and creating service accounts Understanding config maps 9.13. Installing VirtIO driver on an existing Windows virtual machine 9.13.1. About VirtIO drivers VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog . The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install the VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation. After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine. See also: Installing VirtIO drivers on a new Windows virtual machine . 9.13.2. Supported VirtIO drivers for Microsoft Windows virtual machines Table 9.3. Supported drivers Driver name Hardware ID Description viostor VEN_1AF4&DEV_1001 VEN_1AF4&DEV_1042 The block driver. Sometimes displays as an SCSI Controller in the Other devices group. viorng VEN_1AF4&DEV_1005 VEN_1AF4&DEV_1044 The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. NetKVM VEN_1AF4&DEV_1000 VEN_1AF4&DEV_1041 The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. 9.13.3. Adding VirtIO drivers container disk to a virtual machine OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog . To install these drivers on a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file. Prerequisites Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog . This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time. Procedure Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster. spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk 1 OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration.
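As a rough illustration of that requirement, the following sketch shows a configuration in which every disk carries a bootOrder value, so the VM boots from the root disk first and only then considers the SATA CD (disk and volume names are hypothetical):

spec:
  domain:
    devices:
      disks:
      - name: rootdisk
        bootOrder: 1        # primary boot device
        disk:
          bus: virtio
      - name: virtiocontainerdisk
        bootOrder: 2        # driver CD, tried after the root disk
        cdrom:
          bus: sata
  volumes:
  - name: rootdisk
    dataVolume:
      name: windows-dv      # hypothetical data volume holding the Windows disk
  - name: virtiocontainerdisk
    containerDisk:
      image: container-native-virtualization/virtio-win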
The disk is available once the virtual machine has started: If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect. If the virtual machine is not running, use virtctl start <vm> . After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive. 9.13.4. Installing VirtIO drivers on an existing Windows virtual machine Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine. Note This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps. Procedure Start the virtual machine and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the virtual machine to complete the driver installation. 9.13.5. Removing the VirtIO container disk from a virtual machine After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file. Procedure Edit the configuration file and remove the disk and the volume . USD oc edit vm <vm-name> spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk Reboot the virtual machine for the changes to take effect. 9.14. Installing VirtIO driver on a new Windows virtual machine 9.14.1. Prerequisites Windows installation media accessible by the virtual machine, such as importing an ISO into a data volume and attaching it to the virtual machine. 9.14.2. About VirtIO drivers VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog . The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or added to an existing Windows installation. After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine. See also: Installing VirtIO driver on an existing Windows virtual machine . 9.14.3. Supported VirtIO drivers for Microsoft Windows virtual machines Table 9.4. 
Supported drivers Driver name Hardware ID Description viostor VEN_1AF4&DEV_1001 VEN_1AF4&DEV_1042 The block driver. Sometimes displays as an SCSI Controller in the Other devices group. viorng VEN_1AF4&DEV_1005 VEN_1AF4&DEV_1044 The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. NetKVM VEN_1AF4&DEV_1000 VEN_1AF4&DEV_1041 The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. 9.14.4. Adding VirtIO drivers container disk to a virtual machine OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog . To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file. Prerequisites Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog . This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it not already present in the cluster, but it can reduce installation time. Procedure Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster. spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk 1 OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration. The disk is available once the virtual machine has started: If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect. If the virtual machine is not running, use virtctl start <vm> . After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive. 9.14.5. Installing VirtIO drivers during Windows installation Install the VirtIO drivers from the attached SATA CD driver during Windows installation. Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Procedure Start the virtual machine and connect to a graphical console. Begin the Windows installation process. Select the Advanced installation. The storage destination will not be recognized until the driver is loaded. Click Load driver . The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Repeat the two steps for all required drivers. Complete the Windows installation. 9.14.6. 
Removing the VirtIO container disk from a virtual machine After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file. Procedure Edit the configuration file and remove the disk and the volume . USD oc edit vm <vm-name> spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk Reboot the virtual machine for the changes to take effect. 9.15. Using virtual Trusted Platform Module devices Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest. 9.15.1. About vTPM devices A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip. You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip. If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one. vTPM devices also protect virtual machines by temporarily storing secrets without physical hardware. However, using vTPM for persistent secret storage is not currently supported. vTPM discards stored secrets after a VM shuts down. 9.15.2. Adding a vTPM device to a virtual machine Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also temporarily stores secrets for that VM. Procedure Run the following command to update the VM configuration: USD oc edit vm <vm_name> Edit the VM spec so that it includes the tpm: {} line. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: {} 1 ... 1 Adds the TPM device to the VM. To apply your changes, save and exit the editor. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 9.16. Advanced virtual machine management 9.16.1. Working with resource quotas for virtual machines Create and manage resource quotas for virtual machines. 9.16.1.1. Setting resource quota limits for virtual machines Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests. Procedure Set limits for a VM by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: # ... resources: requests: memory: 128Mi limits: memory: 256Mi 1 1 This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value. Save the VirtualMachine manifest. 9.16.1.2. Additional resources Resource quotas per project Resource quotas across multiple projects 9.16.2. Specifying nodes for virtual machines You can place virtual machines (VMs) on specific nodes by using node placement rules. 9.16.2.1. 
About node placement for virtual machines To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if: You have several VMs. To ensure fault tolerance, you want them to run on different nodes. You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node. Your VMs require specific hardware features that are not present on all available nodes. You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities. Note Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes. You can use the following rule types in the spec field of a VirtualMachine manifest: nodeSelector Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object. Note Affinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met. tolerations Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint. 9.16.2.2. Node placement examples The following example YAML file snippets use nodePlacement , affinity , and tolerations fields to customize node placement for virtual machines. 9.16.2.2.1. Example: VM node placement with nodeSelector In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels. Warning If there are no nodes that fit this description, the virtual machine is not scheduled. Example VM manifest metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2 ... 9.16.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1 . If there is no such pod running on any node, the VM is not scheduled. If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2 . However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint. 
Example VM manifest metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 9.16.2.2.3. Example: VM node placement with node affinity In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2 . The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled. If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value . However, if all candidate nodes have this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 9.16.2.2.4. Example: VM node placement with tolerations In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations , it can schedule onto the tainted nodes. Note A virtual machine that tolerates a taint is not required to schedule onto a node with that taint. Example VM manifest metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" ... 9.16.2.3. Additional resources Specifying nodes for virtualization components Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules Controlling pod placement using node taints 9.16.3. Configuring certificate rotation Configure certificate rotation parameters to replace existing certificates. 9.16.3.1. Configuring certificate rotation You can do this during OpenShift Virtualization installation in the web console or after installation in the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Edit the spec.certConfig fields as shown in the following example. 
To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format . apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3 1 The value of ca.renewBefore must be less than or equal to the value of ca.duration . 2 The value of server.duration must be less than or equal to the value of ca.duration . 3 The value of server.renewBefore must be less than or equal to the value of server.duration . Apply the YAML file to your cluster. 9.16.3.2. Troubleshooting certificate rotation parameters Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions: The value of ca.renewBefore must be less than or equal to the value of ca.duration . The value of server.duration must be less than or equal to the value of ca.duration . The value of server.renewBefore must be less than or equal to the value of server.duration . If the default values conflict with these conditions, you will receive an error. If you remove the server.duration value in the following example, the default value of 24h0m0s is greater than the value of ca.duration , conflicting with the specified conditions. Example certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s This results in the following error message: error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration The error message only mentions the first conflict. Review all certConfig values before you proceed. 9.16.4. Using UEFI mode for virtual machines You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode. 9.16.4.1. About UEFI mode for virtual machines Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times. It stores all the information about initialization and startup in a file with a .efi extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer. 9.16.4.2. Booting virtual machines in UEFI mode You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest. Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode: Booting in UEFI mode with secure boot active apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2 ... 1 OpenShift Virtualization requires System Management Mode ( SMM ) to be enabled for Secure Boot in UEFI mode to occur. 2 OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. 
If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot. Apply the manifest to your cluster by running the following command: USD oc create -f <file_name>.yaml 9.16.5. Configuring PXE booting for virtual machines PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host. 9.16.5.1. Prerequisites A Linux bridge must be connected . The PXE server must be connected to the same VLAN as the bridge. 9.16.5.2. PXE booting with a specified MAC address As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server. Prerequisites A Linux bridge must be connected. The PXE server must be connected to the same VLAN as the bridge. Procedure Configure a PXE network on the cluster: Create the network attachment definition file for PXE network pxe-net-conf : apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ "cniVersion": "0.3.1", "name": "pxe-net-conf", "plugins": [ { "type": "cnv-bridge", "bridge": "br1", "vlan": 1 1 }, { "type": "cnv-tuning" 2 } ] }' 1 Optional: The VLAN tag. 2 The cnv-tuning plugin provides support for custom MAC addresses. Note The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN. Create the network attachment definition by using the file you created in the step: USD oc create -f pxe-net-conf.yaml Edit the virtual machine instance configuration file to include the details of the interface and network. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net> : interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1 Note Boot order is global for interfaces and disks. Assign a boot device number to the disk to ensure proper booting after operating system provisioning. Set the disk bootOrder value to 2 : devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2 Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf> : networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf Create the virtual machine instance: USD oc create -f vmi-pxe-boot.yaml Example output virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created Wait for the virtual machine instance to run: USD oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running View the virtual machine instance using VNC: USD virtctl vnc vmi-pxe-boot Watch the boot screen to verify that the PXE boot is successful. 
Log in to the virtual machine instance: USD virtctl console vmi-pxe-boot Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0 , got an IP address from OpenShift Container Platform. USD ip addr Example output ... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff 9.16.5.3. OpenShift Virtualization networking glossary OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus a "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition (NAD) a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. Node network configuration policy (NNCP) a description of the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. Preboot eXecution Environment (PXE) an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client. 9.16.6. Using huge pages with virtual machines You can use huge pages as backing memory for virtual machines in your cluster. 9.16.6.1. Prerequisites Nodes must have pre-allocated huge pages configured . 9.16.6.2. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. 
For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages. 9.16.6.3. Configuring huge pages for virtual machines You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration. The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi . Note The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance. If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect. Prerequisites Nodes must have pre-allocated huge pages configured. Procedure In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain . The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi : kind: VirtualMachine ... spec: domain: resources: requests: memory: "4Gi" 1 memory: hugepages: pageSize: "1Gi" 2 ... 1 The total amount of memory requested for the virtual machine. This value must be divisible by the page size. 2 The size of each huge page. Valid values for x86_64 architecture are 1Gi and 2Mi . The page size must be smaller than the requested memory. Apply the virtual machine configuration: USD oc apply -f <virtual_machine>.yaml 9.16.7. Enabling dedicated resources for virtual machines To improve performance, you can dedicate node resources, such as CPU, to a virtual machine. 9.16.7.1. About dedicated resources When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions. 9.16.7.2. Prerequisites The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads. The virtual machine must be powered off. 9.16.7.3. Enabling dedicated resources for a virtual machine You enable dedicated resources for a virtual machine in the Details tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. On the Scheduling tab, click the pencil icon beside Dedicated Resources . Select Schedule this workload with dedicated resources (guaranteed policy) . Click Save . 9.16.8. Scheduling virtual machines You can schedule a virtual machine (VM) on a node by ensuring that the VM's CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node. 9.16.8.1. Policy attributes You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node. 
Policy attribute Description force The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM's CPU. require Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM's CPU or the hypervisor must be able to emulate the supported CPU model. optional The VM is added to a node if that VM is supported by the host's physical machine CPU. disable The VM cannot be scheduled with CPU node discovery. forbid The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. 9.16.8.2. Setting a policy attribute and CPU feature You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor. Procedure Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM): apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2 1 Name of the CPU feature for the VM. 2 Policy attribute for the VM. 9.16.8.3. Scheduling virtual machines with the supported CPU model You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported. Procedure Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1 1 CPU model for the VM. 9.16.8.4. Scheduling virtual machines with the host model When the CPU model for a virtual machine (VM) is set to host-model , the VM inherits the CPU model of the node where it is scheduled. Procedure Edit the domain spec of your VM configuration file. The following example shows host-model being specified for the virtual machine: apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1 1 The VM that inherits the CPU model of the node where it is scheduled. 9.16.9. Configuring PCI passthrough The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine. When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system. Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc command-line interface (CLI). 9.16.9.1. About preparing a host device for PCI passthrough To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OpenShift Virtualization Operator. 
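You can inspect the current list at any time; a sketch, assuming the default HyperConverged name and namespace:

$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.permittedHostDevices}'

Empty output means that no host devices have been exposed to the cluster yet.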
To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR. 9.16.9.1.1. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites Administrative privilege to a working OpenShift Container Platform cluster. Intel or AMD CPU hardware. Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 ... 1 Applies the new kernel argument only to worker nodes. 2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 9.16.9.1.2. Binding PCI devices to the VFIO driver To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID and device-ID from each device and create a list with the values. Add this list to the MachineConfig object. The MachineConfig Operator generates the /etc/modprobe.d/vfio.conf on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver. Prerequisites You added kernel arguments to enable IOMMU for the CPU. Procedure Run the lspci command to obtain the vendor-ID and the device-ID for the PCI device. USD lspci -nnv | grep -i nvidia Example output 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) Create a Butane config file, 100-worker-vfiopci.bu , binding the PCI device to the VFIO driver. Note See "Creating machine configs with Butane" for information about Butane. Example variant: openshift version: 4.11.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci 1 Applies the new kernel argument only to worker nodes. 2 Specify the previously determined vendor-ID value ( 10de ) and the device-ID value ( 1eb8 ) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information. 3 The file that loads the vfio-pci kernel module on the worker nodes. Use Butane to generate a MachineConfig object file, 100-worker-vfiopci.yaml , containing the configuration to be delivered to the worker nodes: USD butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml Apply the MachineConfig object to the worker nodes: USD oc apply -f 100-worker-vfiopci.yaml Verify that the MachineConfig object was added. 
USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s Verification Verify that the VFIO driver is loaded. USD lspci -nnk -d 10de: The output confirms that the VFIO driver is being used. Example output 9.16.9.1.3. Exposing PCI host devices in the cluster using the CLI To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the PCI device information to the spec.permittedHostDevices.pciHostDevices array. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: "10DE:1DB6" 3 resourceName: "nvidia.com/GV100GL_Tesla_V100" 4 - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" - pciDeviceSelector: "8086:6F54" resourceName: "intel.com/qat" externalResourceProvider: true 5 ... 1 The host devices that are permitted to be used in the cluster. 2 The list of PCI devices available on the node. 3 The vendor-ID and the device-ID required to identify the PCI device. 4 The name of a PCI host device. 5 Optional: Setting this field to true indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin. Note The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OpenShift Virtualization. Save your changes and exit the editor. Verification Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the nvidia.com/GV100GL_Tesla_V100 , nvidia.com/TU104GL_Tesla_T4 , and intel.com/qat resource names. USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 9.16.9.1.4. 
Removing PCI host devices from the cluster using the CLI To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the PCI device information from the spec.permittedHostDevices.pciHostDevices array by deleting the pciDeviceSelector , resourceName and externalResourceProvider (if applicable) fields for the appropriate device. In this example, the intel.com/qat resource has been deleted. Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: "10DE:1DB6" resourceName: "nvidia.com/GV100GL_Tesla_V100" - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" ... Save your changes and exit the editor. Verification Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the intel.com/qat resource name. USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 9.16.9.2. Configuring virtual machines for PCI passthrough After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines. 9.16.9.2.1. Assigning a PCI device to a virtual machine When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough. Procedure Assign the PCI device to a virtual machine as a host device. Example apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1 1 The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device. Verification Use the following command to verify that the host device is available from the virtual machine. USD lspci -nnk | grep NVIDIA Example output USD 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) 9.16.9.3. Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS Managing file permissions Post-installation machine configuration tasks 9.16.10. Configuring vGPU passthrough Your virtual machines can access a virtual GPU (vGPU) hardware. Assigning a vGPU to your virtual machine allows you do the following: Access a fraction of the underlying hardware's GPU to achieve high performance benefits in your virtual machine. Streamline resource-intensive I/O operations. Important vGPU passthrough can only be assigned to devices that are connected to clusters running in a bare metal environment. 9.16.10.1. 
Assigning vGPU passthrough devices to a virtual machine Use the OpenShift Container Platform web console to assign vGPU passthrough devices to your virtual machine. Prerequisites The virtual machine must be stopped. Procedure In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu. Select the virtual machine to which you want to assign the device. On the Details tab, click GPU devices . If you add a vGPU device as a host device, you cannot access the device with the VNC console. Click Add GPU device , enter the Name and select the device from the Device name list. Click Save . Click the YAML tab to verify that the new devices have been added to your cluster configuration in the hostDevices section. Note You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems, such as Windows 10 or RHEL 7. To display resources that are connected to your cluster, click Compute Hardware Devices from the side menu. 9.16.10.2. Additional resources Creating virtual machines Creating virtual machine templates 9.16.11. Configuring mediated devices OpenShift Virtualization automatically creates mediated devices, such as virtual GPUs (vGPUs), if you provide a list of devices in the HyperConverged custom resource (CR). Important Declarative configuration of mediated devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.16.11.1. About using the NVIDIA GPU Operator The NVIDIA GPU Operator manages NVIDIA GPU resources in an OpenShift Container Platform cluster and automates tasks related to bootstrapping GPU nodes. Since the GPU is a special resource in the cluster, you must install some components before deploying application workloads onto the GPU. These components include the NVIDIA drivers which enables compute unified device architecture (CUDA), Kubernetes device plugin, container runtime and others such as automatic node labelling, monitoring and more. Note The NVIDIA GPU Operator is supported only by NVIDIA. For more information about obtaining support from NVIDIA, see Obtaining Support from NVIDIA . There are two ways to enable GPUs with OpenShift Container Platform OpenShift Virtualization: the OpenShift Container Platform-native way described here and by using the NVIDIA GPU Operator. The NVIDIA GPU Operator is a Kubernetes Operator that enables OpenShift Container Platform OpenShift Virtualization to expose GPUs to virtualized workloads running on OpenShift Container Platform. It allows users to easily provision and manage GPU-enabled virtual machines, providing them with the ability to run complex artificial intelligence/machine learning (AI/ML) workloads on the same platform as their other workloads. It also provides an easy way to scale the GPU capacity of their infrastructure, allowing for rapid growth of GPU-based workloads. 
For more information about using the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated VMs, see NVIDIA GPU Operator with OpenShift Virtualization . 9.16.11.2. About using virtual GPUs with OpenShift Virtualization Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR). This automation is especially useful for large clusters. Note Refer to your hardware vendor's documentation for functionality and support details. Mediated device A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests. 9.16.11.2.1. Prerequisites If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices. If you use NVIDIA cards, you installed the NVIDIA GRID driver . 9.16.11.2.2. Configuration overview When configuring mediated devices, an administrator must complete the following tasks: Create the mediated devices. Expose the mediated devices to the cluster. The HyperConverged CR includes APIs that accomplish both tasks. Creating mediated devices ... spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDevicesTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value> ... 1 Required: Configures global settings for the cluster. 2 Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDevicesTypes configuration. 3 Required if you use nodeMediatedDeviceTypes . Overrides the global mediatedDevicesTypes configuration for the specified nodes. 4 Required if you use nodeMediatedDeviceTypes . Must include a key:value pair. Exposing mediated devices to the cluster ... permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2 ... 1 Exposes the mediated devices that map to this value on the host. Note You can see the mediated device types that your device supports by viewing the contents of /sys/bus/pci/devices/<slot>:<bus>:<domain>.<function>/mdev_supported_types/<type>/name , substituting the correct values for your system. For example, the name file for the nvidia-231 type contains the selector string GRID T4-2Q . Using GRID T4-2Q as the mdevNameSelector value allows nodes to use the nvidia-231 type. 2 The resourceName should match that allocated on the node. Find the resourceName by using the following command: USD oc get USDNODE -o json \ | jq '.status.allocatable \ | with_entries(select(.key | startswith("nvidia.com/"))) \ | with_entries(select(.value != "0"))' 9.16.11.2.3. How vGPUs are assigned to nodes For each physical device, OpenShift Virtualization configures the following values: A single mdev type. The maximum number of instances of the selected mdev type. The cluster architecture affects how devices are created and assigned to nodes. Large cluster with multiple cards per node On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example: ... 
mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108 ... In this scenario, each node has two cards, both of which support the following vGPU types: nvidia-105 ... nvidia-108 nvidia-217 nvidia-299 ... On each node, OpenShift Virtualization creates the following vGPUs: 16 vGPUs of type nvidia-105 on the first card. 2 vGPUs of type nvidia-108 on the second card. One node has a single card that supports more than one requested vGPU type OpenShift Virtualization uses the supported type that comes first on the mediatedDevicesTypes list. For example, the card on a node card supports nvidia-223 and nvidia-224 . The following mediatedDevicesTypes list is configured: ... mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-22 - nvidia-223 - nvidia-224 ... In this example, OpenShift Virtualization uses the nvidia-223 type. 9.16.11.2.4. About changing and removing mediated devices The cluster's mediated device configuration can be updated with OpenShift Virtualization by: Editing the HyperConverged CR and change the contents of the mediatedDevicesTypes stanza. Changing the node labels that match the nodeMediatedDeviceTypes node selector. Removing the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Note If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas. Depending on the specific changes, these actions cause OpenShift Virtualization to reconfigure mediated devices or remove them from the cluster nodes. 9.16.11.2.5. Preparing hosts for mediated devices You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices. 9.16.11.2.5.1. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites Administrative privilege to a working OpenShift Container Platform cluster. Intel or AMD CPU hardware. Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 ... 1 Applies the new kernel argument only to worker nodes. 2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 9.16.11.2.6. Adding and removing mediated devices You can add or remove mediated devices. 9.16.11.2.6.1. Creating and exposing mediated devices You can expose and create mediated devices such as virtual GPUs (vGPUs) by editing the HyperConverged custom resource (CR). 
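Before you edit the HyperConverged CR, you can optionally confirm that the IOMMU kernel argument from the prerequisite is active on a node. The following check is a minimal sketch rather than part of the official procedure; <node_name> is a placeholder, and you look for intel_iommu=on or amd_iommu=on in the output:
$ oc debug node/<node_name> -- chroot /host cat /proc/cmdline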
Prerequisites You enabled the IOMMU (Input-Output Memory Management Unit) driver. Procedure Edit the HyperConverged CR in your default editor by running the following command: $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the mediated device information to the HyperConverged CR spec, ensuring that you include the mediatedDevicesConfiguration and permittedHostDevices stanzas. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: 1 mediatedDevicesTypes: 2 - nvidia-231 nodeMediatedDeviceTypes: 3 - mediatedDevicesTypes: 4 - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: 5 mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q ... 1 Creates mediated devices. 2 Required: Global mediatedDevicesTypes configuration. 3 Optional: Overrides the global configuration for specific nodes. 4 Required if you use nodeMediatedDeviceTypes . 5 Exposes mediated devices to the cluster. Save your changes and exit the editor. Verification You can verify that a device was added to a specific node by running the following command: $ oc describe node <node_name> 9.16.11.2.6.2. Removing mediated devices from the cluster using the CLI To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q 1 To remove the nvidia-231 device type, delete it from the mediatedDevicesTypes array. 2 To remove the GRID T4-2Q device, delete the mdevNameSelector field and its corresponding resourceName field. Save your changes and exit the editor. 9.16.11.3. Using mediated devices A vGPU is a type of mediated device; the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines. 9.16.11.3.1. Assigning a mediated device to a virtual machine Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines. Prerequisites The mediated device is configured in the HyperConverged custom resource. Procedure Assign the mediated device to a virtual machine (VM) by editing the spec.domain.devices.gpus stanza of the VirtualMachine manifest: Example virtual machine manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-1Q name: gpu2 1 The resource name associated with the mediated device. 2 A name to identify the device on the VM.
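In addition to the guest-level check in the verification step that follows, you can confirm from the cluster side that the GPU devices were added to the running virtual machine instance. This is a minimal sketch; <vmi_name> is a placeholder for the name of your virtual machine instance:
$ oc get vmi <vmi_name> -o jsonpath='{.spec.domain.devices.gpus}'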
Verification To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest: USD lspci -nnk | grep <device_name> 9.16.11.4. Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS 9.16.12. Configuring a watchdog Expose a watchdog by configuring the virtual machine (VM) for a watchdog device, installing the watchdog, and starting the watchdog service. 9.16.12.1. Prerequisites The virtual machine must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb . 9.16.12.2. Defining a watchdog device Define how the watchdog proceeds when the operating system (OS) no longer responds. Table 9.5. Available actions poweroff The virtual machine (VM) powers down immediately. If spec.running is set to true , or spec.runStrategy is not set to manual , then the VM reboots. reset The VM reboots in place and the guest OS cannot react. Because the length of time required for the guest OS to reboot can cause liveness probes to timeout, use of this option is discouraged. This timeout can extend the time it takes the VM to reboot if cluster-level protections notice the liveness probe failed and forcibly reschedule it. shutdown The VM gracefully powers down by stopping all services. Procedure Create a YAML file with the following contents: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: "poweroff" 1 ... 1 Specify the watchdog action ( poweroff , reset , or shutdown ). The example above configures the i6300esb watchdog device on a RHEL8 VM with the poweroff action and exposes the device as /dev/watchdog . This device can now be used by the watchdog binary. Apply the YAML file to your cluster by running the following command: USD oc apply -f <file_name>.yaml Important This procedure is provided for testing watchdog functionality only and must not be run on production machines. Run the following command to verify that the VM is connected to the watchdog device: USD lspci | grep watchdog -i Run one of the following commands to confirm the watchdog is active: Trigger a kernel panic: # echo c > /proc/sysrq-trigger Terminate the watchdog service: # pkill -9 watchdog 9.16.12.3. Installing a watchdog device Install the watchdog package on your virtual machine and start the watchdog service. Procedure As a root user, install the watchdog package and dependencies: # yum install watchdog Uncomment the following line in the /etc/watchdog.conf file, and save the changes: #watchdog-device = /dev/watchdog Enable the watchdog service to start on boot: # systemctl enable --now watchdog.service 9.16.12.4. Additional resources Monitoring application health by using health checks 9.16.13. Automatic importing and updating of pre-defined boot sources You can use boot sources that are system-defined and included with OpenShift Virtualization or user-defined , which you create. System-defined boot source imports and updates are controlled by the product feature gate. You can enable, disable, or re-enable updates using the feature gate. User-defined boot sources are not controlled by the product feature gate and must be individually managed to opt in or opt out of automatic imports and updates. 
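To see how system-defined boot source updates are currently configured in your cluster, you can query the feature gate in the HyperConverged CR. This is a minimal sketch that assumes the default resource name and namespace used elsewhere in this document; if the field has never been set explicitly, the command returns no output:
$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.featureGates.enableCommonBootImageImport}'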
Important As of version 4.10, OpenShift Virtualization automatically imports and updates boot sources, unless you manually opt out or do not set a default storage class. If you upgrade to version 4.10, you must manually enable automatic imports and updates for boot sources from version 4.9 or earlier. 9.16.13.1. Enabling automatic boot source updates If you have boot sources from OpenShift Virtualization 4.9 or earlier, you must manually turn on automatic updates for these boot sources. All boot sources in OpenShift Virtualization 4.10 and later are automatically updated by default. To enable automatic boot source imports and updates, set the cdi.kubevirt.io/dataImportCron field to true for each boot source you want to update automatically. Procedure To turn on automatic updates for a boot source, use the following command to apply the dataImportCron label to the data source: USD oc label --overwrite DataSource rhel8 -n openshift-virtualization-os-images cdi.kubevirt.io/dataImportCron=true 1 1 Specifying true turns on automatic updates for the rhel8 boot source. 9.16.13.2. Disabling automatic boot source updates Disabling automatic boot source imports and updates can be helpful to reduce the number of logs in disconnected environments or to reduce resource usage. To disable automatic boot source imports and updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged custom resource (CR) to false . Note User-defined boot sources are not affected by this setting. Procedure Use the following command to disable automatic boot source updates: USD oc patch hco kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", \ "value": false}]' 9.16.13.3. Re-enabling automatic boot source updates If you have previously disabled automatic boot source updates, you must manually re-enable the feature. Set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged custom resource (CR) to true . Procedure Use the following command to re-enable automatic updates: USD oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": true}]' 9.16.13.4. Configuring a storage class for user-defined boot source updates You can configure a storage class that allows automatic importing and updating for user-defined boot sources. Procedure Define a new storageClassName by editing the HyperConverged custom resource (CR). apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel8-image-cron spec: template: spec: storageClassName: <appropriate_class_name> ... Set the new default storage class by running the following commands: USD oc patch storageclass <current_default_storage_class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' USD oc patch storageclass <appropriate_storage_class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 9.16.13.5. Enabling automatic updates for user-defined boot sources OpenShift Virtualization automatically updates system-defined boot sources by default, but does not automatically update user-defined boot sources. You must manually enable automatic imports and updates on a user-defined boot sources by editing the HyperConverged custom resource (CR). 
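Before adding a user-defined entry, you can optionally list the boot source cron definitions that already exist in the cluster. The following command is a minimal sketch and assumes the openshift-virtualization-os-images namespace used for boot sources earlier in this document:
$ oc get dataimportcron -n openshift-virtualization-os-images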
Procedure Use the following command to open the HyperConverged CR for editing: USD oc edit -n openshift-cnv HyperConverged Edit the HyperConverged CR, adding the appropriate template and boot source in the dataImportCronTemplates section. For example: Example in CentOS 7 apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos7-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" 1 spec: schedule: "0 */12 * * *" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 10Gi managedDataSource: centos7 4 retentionPolicy: "None" 5 1 This annotation is required for storage classes with volumeBindingMode set to WaitForFirstConsumer . 2 Schedule for the job specified in cron format. 3 Use to create a data volume from a registry source. Use the default pod pullMethod and not node pullMethod , which is based on the node docker cache. The node docker cache is useful when a registry image is available via Container.Image , but the CDI importer is not authorized to access it. 4 For the custom image to be detected as an available boot source, the name of the image's managedDataSource must match the name of the template's DataSource , which is found under spec.dataVolumeTemplates.spec.sourceRef.name in the VM template YAML file. 5 Use All to retain data volumes and data sources when the cron job is deleted. Use None to delete data volumes and data sources when the cron job is deleted. 9.16.13.6. Disabling an automatic update for a system-defined or user-defined boot source You can disable automatic imports and updates for a user-defined boot source and for a system-defined boot source. Because system-defined boot sources are not listed by default in the spec.dataImportCronTemplates of the HyperConverged custom resource (CR), you must add the boot source and disable auto imports and updates. Procedure To disable automatic imports and updates for a user-defined boot source, remove the boot source from the spec.dataImportCronTemplates field in the custom resource list. To disable automatic imports and updates for a system-defined boot source: Edit the HyperConverged CR and add the boot source to spec.dataImportCronTemplates . Disable automatic imports and updates by setting the dataimportcrontemplate.kubevirt.io/enable annotation to false . For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: false name: rhel8-image-cron ... 9.16.13.7. Verifying the status of a boot source You can verify whether a boot source is system-defined or user-defined. The status section of each boot source listed in the status.dataImportChronTemplates field of the HyperConverged CR indicates the type of boot source. For example, commonTemplate: true indicates a system-defined ( commonTemplate ) boot source and status: {} indicates a user-defined boot source. Procedure Use the oc get command to list the dataImportChronTemplates in the HyperConverged CR. Verify the status of the boot source. Example output ... apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged ... spec: ... status: 1 ... 
dataImportCronTemplates: 2 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" name: centos-7-image-cron spec: garbageCollect: Outdated managedDataSource: centos7 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 3 ... - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream8 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:8 storage: resources: requests: storage: 30Gi status: {} status: {} 4 ... 1 The status field for the HyperConverged CR. 2 The dataImportCronTemplates field, which lists all defined boot sources. 3 Indicates a system-defined boot source. 4 Indicates a user-defined boot source. 9.16.14. Enabling descheduler evictions on virtual machines You can use the descheduler to evict pods so that the pods can be rescheduled onto more appropriate nodes. If the pod is a virtual machine, the pod eviction causes the virtual machine to be live migrated to another node. Important Descheduler eviction for virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.16.14.1. Descheduler profiles Use the Technology Preview DevPreviewLongLifecycle profile to enable the descheduler on a virtual machine. This is the only descheduler profile currently available for OpenShift Virtualization. To ensure proper scheduling, create VMs with CPU and memory requests for the expected load. DevPreviewLongLifecycle This profile balances resource usage between nodes and enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count. LowNodeUtilization : evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). 9.16.14.2. Installing the descheduler The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles. By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions. 
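As noted in the descheduler profile description above, create VMs with CPU and memory requests for the expected load so that node utilization can be evaluated accurately. The following fragment is a minimal sketch showing where such requests appear in a VirtualMachine manifest; the values are illustrative placeholders:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      domain:
        resources:
          requests:
            cpu: "1"
            memory: 2Gi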
Important If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane , because it has the lowest priority value ( 100000000 ) of the hosted control plane priority classes. Prerequisites Cluster administrator privileges. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Kube Descheduler Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create . Install the Kube Descheduler Operator. Navigate to Operators OperatorHub . Type Kube Descheduler Operator into the filter box. Select the Kube Descheduler Operator and click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-kube-descheduler-operator from the drop-down menu. Adjust the values for the Update Channel and Approval Strategy to the desired values. Click Install . Create a descheduler instance. From the Operators Installed Operators page, click the Kube Descheduler Operator . Select the Kube Descheduler tab and click Create KubeDescheduler . Edit the settings as necessary. To evict pods instead of simulating the evictions, change the Mode field to Automatic . Expand the Profiles section and select DevPreviewLongLifecycle . The AffinityAndTaints profile is enabled by default. Important The only profile currently available for OpenShift Virtualization is DevPreviewLongLifecycle . You can also configure the profiles and settings for the descheduler later using the OpenShift CLI ( oc ). 9.16.14.3. Enabling descheduler evictions on a virtual machine (VM) After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine custom resource (CR). Prerequisites Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI ( oc ). Ensure that the VM is not running. Procedure Before starting the VM, add the descheduler.alpha.kubernetes.io/evict annotation to the VirtualMachine CR: apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: "true" If you did not already set the DevPreviewLongLifecycle profile in the web console during installation, specify the DevPreviewLongLifecycle in the spec.profile section of the KubeDescheduler object: apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1 1 By default, the descheduler does not evict pods. To evict pods, set mode to Automatic . The descheduler is now enabled on the VM. 9.16.14.4. Additional resources Evicting pods using the descheduler 9.17. Importing virtual machines 9.17.1. TLS certificates for data volume imports 9.17.1.1. Adding TLS certificates for authenticating data volume imports TLS certificates for registry or HTTPS endpoints must be added to a config map to import data from these sources. This config map must be present in the namespace of the destination data volume. 
Create the config map by referencing the relative file path for the TLS certificate. Procedure Ensure you are in the correct namespace. The config map can only be referenced by data volumes if it is in the same namespace. USD oc get ns Create the config map: USD oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem> 9.17.1.2. Example: Config map created from a TLS certificate The following example is of a config map created from ca.pem TLS certificate. apiVersion: v1 kind: ConfigMap metadata: name: tls-certs data: ca.pem: | -----BEGIN CERTIFICATE----- ... <base64 encoded cert> ... -----END CERTIFICATE----- 9.17.2. Importing virtual machine images with data volumes Use the Containerized Data Importer (CDI) to import a virtual machine image into a persistent volume claim (PVC) by using a data volume. You can attach a data volume to a virtual machine for persistent storage. The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or built into a container disk and stored in a container registry. Important When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded. The resizing procedure varies based on the operating system installed on the virtual machine. See the operating system documentation for details. 9.17.2.1. Prerequisites If the endpoint requires a TLS certificate, the certificate must be included in a config map in the same namespace as the data volume and referenced in the data volume configuration. To import a container disk: You might need to prepare a container disk from a virtual machine image and store it in your container registry before importing it. If the container registry does not have TLS, you must add the registry to the insecureRegistries field of the HyperConverged custom resource before you can import a container disk from it. You might need to define a storage class or prepare CDI scratch space for this operation to complete successfully. 9.17.2.2. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required Note CDI now uses the OpenShift Container Platform cluster-wide proxy configuration . 9.17.2.3. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.17.2.4. Importing a virtual machine image into storage by using a data volume You can import a virtual machine image into storage by using a data volume. 
The virtual machine image can be hosted at an HTTP or HTTPS endpoint or the image can be built into a container disk and stored in a container registry. You specify the data source for the image in a VirtualMachine configuration file. When the virtual machine is created, the data volume with the virtual machine image is imported into storage. Prerequisites To import a virtual machine image you must have the following: A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz . An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source. To import a container disk, you must have a virtual machine image built into a container disk and stored in a container registry, along with any authentication credentials needed to access the data source. If the virtual machine must communicate with servers that use self-signed certificates or certificates not signed by the system CA bundle, you must create a config map in the same namespace as the data volume. Procedure If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml : apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 2 secretKey: "" 3 1 Specify the name of the Secret . 2 Specify the Base64-encoded key ID or user name. 3 Specify the Base64-encoded secret key or password. Apply the Secret manifest: USD oc apply -f endpoint-secret.yaml Edit the VirtualMachine manifest, specifying the data source for the virtual machine image you want to import, and save it as vm-fedora-datavolume.yaml : 1 Specify the name of the virtual machine. 2 Specify the name of the data volume. 3 Specify http for an HTTP or HTTPS endpoint. Specify registry for a container disk image imported from a registry. 4 Specify the URL or registry endpoint of the virtual machine image you want to import. This example references a virtual machine image at an HTTPS endpoint. An example of a container registry endpoint is url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" . 5 Specify the Secret name if you created a Secret for the data source. 6 Optional: Specify a CA certificate config map. Create the virtual machine: USD oc create -f vm-fedora-datavolume.yaml Note The oc create command creates the data volume and the virtual machine. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the virtual machine. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the virtual machine image or container disk from the specified URL and stores it on the provisioned PV. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv fedora-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. Verify that provisioning is complete and that the virtual machine has started by accessing its serial console: USD virtctl console vm-fedora-datavolume 9.17.2.5. Additional resources Configure preallocation mode to improve write performance for data volume operations. 9.17.3. 
Importing virtual machine images into block storage with data volumes You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses data volumes to automate the import of data and the creation of an underlying persistent volume claim (PVC). Important When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded. The resizing procedure varies based on the operating system that is installed on the virtual machine. See the operating system documentation for details. 9.17.3.1. Prerequisites If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 9.17.3.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.17.3.3. About block persistent volumes A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification. 9.17.3.4. Creating a local block persistent volume Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image. Procedure Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2Gb (20 100Mb blocks): USD dd if=/dev/zero of=<loop10> bs=100M count=20 Mount the loop10 file as a loop device. USD losetup </dev/loop10>d3 <loop10> 1 2 1 File path where the loop device is mounted. 2 The file created in the step to be mounted as the loop device. Create a PersistentVolume manifest that references the mounted loop device. kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4 1 The path of the loop device on the node. 2 Specifies it is a block PV. 3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used. 4 The node on which the block device was mounted. Create the block PV. # oc create -f <local-block-pv10.yaml> 1 1 The file name of the persistent volume created in the step. 9.17.3.5. Importing a virtual machine image into block storage by using a data volume You can import a virtual machine image into block storage by using a data volume. 
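After you create the block PV in the preceding procedure, you can optionally confirm that it is registered before referencing it from a data volume. This is a minimal sketch using the example PV name; the PV should report an Available status until a claim binds to it:
$ oc get pv <local-block-pv10>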
You reference the data volume in a VirtualMachine manifest before you create a virtual machine. Prerequisites A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz . An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source. Procedure If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml : apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 2 secretKey: "" 3 1 Specify the name of the Secret . 2 Specify the Base64-encoded key ID or user name. 3 Specify the Base64-encoded secret key or password. Apply the Secret manifest: USD oc apply -f endpoint-secret.yaml Create a DataVolume manifest, specifying the data source for the virtual machine image and Block for storage.volumeMode . apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: import-pv-datavolume 1 spec: storageClassName: local 2 source: http: url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 3 secretRef: endpoint-secret 4 storage: volumeMode: Block 5 resources: requests: storage: 10Gi 1 Specify the name of the data volume. 2 Optional: Set the storage class or omit it to accept the cluster default. 3 Specify the HTTP or HTTPS URL of the image to import. 4 Specify the Secret name if you created a Secret for the data source. 5 The volume mode and access mode are detected automatically for known storage provisioners. Otherwise, specify Block . Create the data volume to import the virtual machine image: USD oc create -f import-pv-datavolume.yaml You can reference this data volume in a VirtualMachine manifest before you create a virtual machine. 9.17.3.6. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required Note CDI now uses the OpenShift Container Platform cluster-wide proxy configuration . 9.17.3.7. Additional resources Configure preallocation mode to improve write performance for data volume operations. 9.18. Cloning virtual machines 9.18.1. Enabling user permissions to clone data volumes across namespaces The isolating nature of namespaces means that users cannot by default clone resources between namespaces. To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace. 9.18.1.1. Prerequisites Only a user with the cluster-admin role can create cluster roles. 9.18.1.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. 
Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.18.1.3. Creating RBAC resources for cloning data volumes Create a new cluster role that enables permissions for all actions for the datavolumes resource. Procedure Create a ClusterRole manifest: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: ["cdi.kubevirt.io"] resources: ["datavolumes/source"] verbs: ["*"] 1 Unique name for the cluster role. Create the cluster role in the cluster: USD oc create -f <datavolume-cloner.yaml> 1 1 The file name of the ClusterRole manifest created in the step. Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the step. apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io 1 Unique name for the role binding. 2 The namespace for the source data volume. 3 The namespace to which the data volume is cloned. 4 The name of the cluster role created in the step. Create the role binding in the cluster: USD oc create -f <datavolume-cloner.yaml> 1 1 The file name of the RoleBinding manifest created in the step. 9.18.2. Cloning a virtual machine disk into a new data volume You can clone the persistent volume claim (PVC) of a virtual machine disk into a new data volume by referencing the source PVC in your data volume configuration file. Warning Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem . However, you can only clone between different volume modes if they are of the contentType: kubevirt . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . 9.18.2.1. Prerequisites Users need additional permissions to clone the PVC of a virtual machine disk into another namespace. 9.18.2.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.18.2.3. Cloning the persistent volume claim of a virtual machine disk into a new data volume You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine. Note When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted. Prerequisites Determine the PVC of an existing virtual machine disk to use. 
You must power down the virtual machine that is associated with the PVC before you can clone it. Install the OpenShift CLI ( oc ). Procedure Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC. Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, and the size of the new data volume. For example: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: "<source-namespace>" 2 name: "<my-favorite-vm-disk>" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 1 The name of the new data volume. 2 The namespace where the source PVC exists. 3 The name of the source PVC. 4 The size of the new data volume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC. Start cloning the PVC by creating the data volume: USD oc create -f <cloner-datavolume>.yaml Note Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones. 9.18.2.4. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 9.18.3. Cloning a virtual machine by using a data volume template You can create a new virtual machine by cloning the persistent volume claim (PVC) of an existing VM. By including a dataVolumeTemplate in your virtual machine configuration file, you create a new data volume from the original PVC. Warning Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem . However, you can only clone between different volume modes if they are of the contentType: kubevirt . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . 9.18.3.1. Prerequisites Users need additional permissions to clone the PVC of a virtual machine disk into another namespace. 9.18.3.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.18.3.3. Creating a new virtual machine from a cloned persistent volume claim by using a data volume template You can create a virtual machine that clones the persistent volume claim (PVC) of an existing virtual machine into a data volume. 
Reference a dataVolumeTemplate in the virtual machine manifest and the source PVC is cloned to a data volume, which is then automatically used for the creation of the virtual machine. Note When a data volume is created as part of the data volume template of a virtual machine, the lifecycle of the data volume is then dependent on the virtual machine. If the virtual machine is deleted, the data volume and associated PVC are also deleted. Prerequisites Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it. Install the OpenShift CLI ( oc ). Procedure Examine the virtual machine you want to clone to identify the name and namespace of the associated PVC. Create a YAML file for a VirtualMachine object. The following virtual machine example clones my-favorite-vm-disk , which is located in the source-namespace namespace. The 2Gi data volume called favorite-clone is created from my-favorite-vm-disk . For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: "source-namespace" name: "my-favorite-vm-disk" 1 The virtual machine to create. Create the virtual machine with the PVC-cloned data volume: USD oc create -f <vm-clone-datavolumetemplate>.yaml 9.18.3.4. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 9.18.4. Cloning a virtual machine disk into a new block storage data volume You can clone the persistent volume claim (PVC) of a virtual machine disk into a new block data volume by referencing the source PVC in your data volume configuration file. Warning Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem . However, you can only clone between different volume modes if they are of the contentType: kubevirt . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . 9.18.4.1. Prerequisites Users need additional permissions to clone the PVC of a virtual machine disk into another namespace. 9.18.4.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). 
Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.18.4.3. About block persistent volumes A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification. 9.18.4.4. Creating a local block persistent volume Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image. Procedure Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2Gb (20 100Mb blocks): USD dd if=/dev/zero of=<loop10> bs=100M count=20 Mount the loop10 file as a loop device. USD losetup </dev/loop10>d3 <loop10> 1 2 1 File path where the loop device is mounted. 2 The file created in the step to be mounted as the loop device. Create a PersistentVolume manifest that references the mounted loop device. kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4 1 The path of the loop device on the node. 2 Specifies it is a block PV. 3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used. 4 The node on which the block device was mounted. Create the block PV. # oc create -f <local-block-pv10.yaml> 1 1 The file name of the persistent volume created in the step. 9.18.4.5. Cloning the persistent volume claim of a virtual machine disk into a new data volume You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine. Note When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted. Prerequisites Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it. Install the OpenShift CLI ( oc ). At least one available block persistent volume (PV) that is the same size as or larger than the source PVC. Procedure Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC. Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, volumeMode: Block so that an available block PV is used, and the size of the new data volume. 
For example: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: "<source-namespace>" 2 name: "<my-favorite-vm-disk>" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 volumeMode: Block 5 1 The name of the new data volume. 2 The namespace where the source PVC exists. 3 The name of the source PVC. 4 The size of the new data volume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC. 5 Specifies that the destination is a block PV Start cloning the PVC by creating the data volume: USD oc create -f <cloner-datavolume>.yaml Note Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones. 9.18.4.6. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 9.19. Virtual machine networking 9.19.1. Configuring the virtual machine for the default pod network You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode Note Traffic on the virtual Network Interface Cards (vNICs) that are attached to the default pod network is interrupted during live migration. 9.19.1.1. Configuring masquerade mode from the command line You can use masquerade mode to hide a virtual machine's outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge. Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file. Prerequisites The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP. Procedure Edit the interfaces spec of your virtual machine configuration file: kind: VirtualMachine spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {} 1 Connect using masquerade mode. 2 Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80 . Note Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped. Create the virtual machine: USD oc create -f <vm-name>.yaml 9.19.1.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6) You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init. 
The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120 . You can edit this value based on your network requirements. When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine. Prerequisites The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network provider configured for dual-stack. Procedure In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init. apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 ... interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4 1 Connect using masquerade mode. 2 Allows incoming traffic on port 80 to the virtual machine. 3 The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120 . 4 The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1 . Create the virtual machine in the namespace: USD oc create -f example-vm-ipv6.yaml Verification To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address: USD oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}" 9.19.2. Creating a service to expose a virtual machine You can expose a virtual machine within the cluster or outside the cluster by using a Service object. 9.19.2.1. About services A Kubernetes service is an abstract way to expose an application running on a set of pods as a network service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a spec.type in the Service object: ClusterIP Exposes the service on an internal IP address within the cluster. ClusterIP is the default service type . NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a service accessible from outside the cluster. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. 9.19.2.1.1. Dual-stack support If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object. The spec.ipFamilyPolicy field can be set to one of the following values: SingleStack The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range. 
PreferDualStack The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured. RequireDualStack This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack . The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges. You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values: [IPv4] [IPv6] [IPv4, IPv6] [IPv6, IPv4] 9.19.2.2. Exposing a virtual machine as a service Create a ClusterIP , NodePort , or LoadBalancer service to connect to a running virtual machine (VM) from within or outside the cluster. Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 # ... 1 Add the label special: key in the spec.template.metadata.labels section. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: vmservice 1 namespace: example-namespace 2 spec: externalTrafficPolicy: Cluster 3 ports: - nodePort: 30000 4 port: 27017 protocol: TCP targetPort: 22 5 selector: special: key 6 type: NodePort 7 1 The name of the Service object. 2 The namespace where the Service object resides. This must match the metadata.namespace field of the VirtualMachine manifest. 3 Optional: Specifies how the nodes distribute service traffic that is received on external IP addresses. This only applies to NodePort and LoadBalancer service types. The default value is Cluster which routes traffic evenly to all cluster endpoints. 4 Optional: When set, the nodePort value must be unique across all services. If not specified, a value in the range above 30000 is dynamically allocated. 5 Optional: The VM port to be exposed by the service. It must reference an open port if a port list is defined in the VM manifest. If targetPort is not specified, it takes the same value as port . 6 The reference to the label that you added in the spec.template.metadata.labels stanza of the VirtualMachine manifest. 7 The type of service. Possible values are ClusterIP , NodePort and LoadBalancer . Save the Service manifest file. Create the service by running the following command: USD oc create -f <service_name>.yaml Start the VM. If the VM is already running, restart it. 
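Optionally, confirm that the special: key label was passed through from the virtual machine template to its virt-launcher pod, which is what the service selector matches. This is a quick, hedged check that reuses the example names from the manifests above:
$ oc get pods -n example-namespace -l special=key
If the pod that runs the virtual machine is listed, the service selects it.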
Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace Example output for ClusterIP service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m Example output for NodePort service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice NodePort 172.30.232.73 <none> 27017:30000/TCP 5m Example output for LoadBalancer service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice LoadBalancer 172.30.27.5 172.29.10.235,172.29.10.235 27017:31829/TCP 5s Choose the appropriate method to connect to the virtual machine: For a ClusterIP service, connect to the VM from within the cluster by using the service IP address and the service port. For example: USD ssh fedora@172.30.3.149 -p 27017 For a NodePort service, connect to the VM by specifying the node IP address and the node port outside the cluster network. For example: USD ssh fedora@USDNODE_IP -p 30000 For a LoadBalancer service, use the vinagre client to connect to your virtual machine by using the public IP address and port. External ports are dynamically allocated. 9.19.2.3. Additional resources Configuring ingress cluster traffic using a NodePort Configuring ingress cluster traffic using a load balancer 9.19.3. Connecting a virtual machine to a Linux bridge network By default, OpenShift Virtualization is installed with a single, internal pod network. You must create a Linux bridge network attachment definition (NAD) in order to connect to additional networks. To attach a virtual machine to an additional network: Create a Linux bridge node network configuration policy. Create a Linux bridge network attachment definition. Configure the virtual machine, enabling the virtual machine to recognize the network attachment definition. For more information about scheduling, interface types, and other node networking activities, see the node networking section. 9.19.3.1. Connecting to the network through the network attachment definition 9.19.3.1.1. Creating a Linux bridge node network configuration policy Use a NodeNetworkConfigurationPolicy manifest YAML file to create the Linux bridge. Prerequisites You have installed the Kubernetes NMState Operator. Procedure Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8 1 Name of the policy. 2 Name of the interface. 3 Optional: Human-readable description of the interface. 4 The type of interface. This example creates a bridge. 5 The requested state for the interface after creation. 6 Disables IPv4 in this example. 7 Disables STP in this example. 8 The node NIC to which the bridge is attached. 9.19.3.2. Creating a Linux bridge network attachment definition Warning Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported. 9.19.3.2.1. Creating a Linux bridge network attachment definition in the web console Network administrators can create network attachment definitions to provide layer-2 networking to pods and virtual machines. Procedure In the web console, click Networking Network Attachment Definitions . Click Create Network Attachment Definition .
Note The network attachment definition must be in the same namespace as the pod or virtual machine. Enter a unique Name and optional Description . Click the Network Type list and select CNV Linux bridge . Enter the name of the bridge in the Bridge Name field. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. Click Create . Note A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. 9.19.3.2.2. Creating a Linux bridge network attachment definition in the CLI As a network administrator, you can configure a network attachment definition of type cnv-bridge to provide layer-2 networking to pods and virtual machines. Prerequisites The node must support nftables and the nft binary must be deployed to enable MAC spoof check. Procedure Create a network attachment definition in the same namespace as the virtual machine. Add the virtual machine to the network attachment definition, as in the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: <bridge-network> 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2 spec: config: '{ "cniVersion": "0.3.1", "name": "<bridge-network>", 3 "type": "cnv-bridge", 4 "bridge": "<bridge-interface>", 5 "macspoofchk": true, 6 "vlan": 1 7 }' 1 The name for the NetworkAttachmentDefinition object. 2 Optional: Annotation key-value pair for node selection, where bridge-interface must match the name of a bridge configured on some nodes. If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have the bridge-interface bridge connected. 3 The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 4 The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI. 5 The name of the Linux bridge configured on the node. 6 Optional: Flag to enable MAC spoof check. When set to true , you cannot change the MAC address of the pod or guest interface. This attribute provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. 7 Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy. Note A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Create the network attachment definition: USD oc create -f <network-attachment-definition.yaml> 1 1 Where <network-attachment-definition.yaml> is the file name of the network attachment definition manifest. Verification Verify that the network attachment definition was created by running the following command: USD oc get network-attachment-definition <bridge-network> 9.19.3.3. Configuring the virtual machine for a Linux bridge network 9.19.3.3.1. Creating a NIC for a virtual machine in the web console Create and attach additional NICs to a virtual machine from the web console. Prerequisites A network attachment definition must be available. 
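For example, you can confirm from the command line that a network attachment definition exists in your project before you add the NIC. This is an optional check; replace <project> with the project that contains the virtual machine:
$ oc get network-attachment-definition -n <project>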
Procedure In the correct project in the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Network Interfaces tab to view the NICs already attached to the virtual machine. Click Add Network Interface to create a new slot in the list. Select a network attachment definition from the Network list for the additional network. Fill in the Name , Model , Type , and MAC Address for the new NIC. Click Save to save and attach the NIC to the virtual machine. 9.19.3.3.2. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. Select the binding method suitable for the network interface: Default pod network: masquerade Linux bridge network: bridge SR-IOV network: SR-IOV MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 9.19.3.3.3. Attaching a virtual machine to an additional network in the CLI Attach a virtual machine to an additional network by adding a bridge interface and specifying a network attachment definition in the virtual machine configuration. This procedure uses a YAML file to demonstrate editing the configuration and applying the updated file to the cluster. You can alternatively use the oc edit <object> <name> command to edit an existing virtual machine. Prerequisites Shut down the virtual machine before editing the configuration. If you edit a running virtual machine, you must restart the virtual machine for the changes to take effect. Procedure Create or edit a configuration of a virtual machine that you want to connect to the bridge network. Add the bridge interface to the spec.template.spec.domain.devices.interfaces list and the network attachment definition to the spec.template.spec.networks list. This example adds a bridge interface called bridge-net that connects to the a-bridge-network network attachment definition: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <example-vm> spec: template: spec: domain: devices: interfaces: - masquerade: {} name: <default> - bridge: {} name: <bridge-net> 1 ... networks: - name: <default> pod: {} - name: <bridge-net> 2 multus: networkName: <network-namespace>/<a-bridge-network> 3 ... 1 The name of the bridge interface. 2 The name of the network. This value must match the name value of the corresponding spec.template.spec.domain.devices.interfaces entry. 3 The name of the network attachment definition, prefixed by the namespace where it exists. The namespace must be either the default namespace or the same namespace where the VM is to be created. In this case, multus is used. Multus is a Container Network Interface (CNI) plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Apply the configuration: USD oc apply -f <example-vm.yaml> Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 9.19.4. Connecting a virtual machine to an SR-IOV network You can connect a virtual machine (VM) to a Single Root I/O Virtualization (SR-IOV) network by performing the following steps: Configure an SR-IOV network device. Configure an SR-IOV network. Connect the VM to the SR-IOV network. 9.19.4.1.
Prerequisites You must have enabled global SR-IOV and VT-d settings in the firmware for the host . You must have installed the SR-IOV Network Operator . 9.19.4.2. Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: "<vendor_code>" 9 deviceID: "<device_id>" 10 pfNames: ["<pf_name>", ...] 11 rootDevices: ["<pci_bus_id>", "..."] 12 deviceType: vfio-pci 13 isRdma: false 14 1 Specify a name for the CR object. 2 Specify the namespace where the SR-IOV Operator is installed. 3 Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name. 4 Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes. 5 Optional: Specify an integer value between 0 and 99 . A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99 . The default value is 99 . 6 Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models. 7 Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 128 . 8 The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device. 9 Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are either 8086 or 15b3 . 10 Optional: Specify the device hex code of SR-IOV network device. The only allowed values are 158b , 1015 , 1017 . 
11 Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device. 12 The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1 . 13 The vfio-pci driver type is required for virtual functions in OpenShift Virtualization. 14 Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false . The default value is false . Note If isRDMA flag is set to true , you can continue to use the RDMA enabled VF as a normal network device. A device can be used in either mode. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in sriov-network-operator namespace transition to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' 9.19.4.3. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovNetwork object if it is attached to pods or virtual machines in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: "<trust_vf>" 11 capabilities: <capabilities> 12 1 Replace <name> with a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Replace <sriov_resource_name> with the value for the .spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 Replace <target_namespace> with the target namespace for the SriovNetwork. Only pods or virtual machines in the target namespace can attach to the SriovNetwork. 5 Optional: Replace <vlan> with a Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: Replace <spoof_check> with the spoof check mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 
7 Optional: Replace <link_state> with the link state of virtual function (VF). Allowed value are enable , disable and auto . 8 Optional: Replace <max_tx_rate> with a maximum transmission rate, in Mbps, for the VF. 9 Optional: Replace <min_tx_rate> with a minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to Maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 10 Optional: Replace <vlan_qos> with an IEEE 802.1p priority level for the VF. The default value is 0 . 11 Optional: Replace <trust_vf> with the trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 12 Optional: Replace <capabilities> with the capabilities to configure for this network. To create the object, enter the following command. Replace <name> with a name for this additional network. USD oc create -f <name>-sriov-network.yaml Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the step exists, enter the following command. Replace <namespace> with the namespace you specified in the SriovNetwork object. USD oc get net-attach-def -n <namespace> 9.19.4.4. Connecting a virtual machine to an SR-IOV network You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration. Procedure Include the SR-IOV network details in the spec.domain.devices.interfaces and spec.networks of the VM configuration: kind: VirtualMachine ... spec: domain: devices: interfaces: - name: <default> 1 masquerade: {} 2 - name: <nic1> 3 sriov: {} networks: - name: <default> 4 pod: {} - name: <nic1> 5 multus: networkName: <sriov-network> 6 ... 1 A unique name for the interface that is connected to the pod network. 2 The masquerade binding to the default pod network. 3 A unique name for the SR-IOV interface. 4 The name of the pod network interface. This must be the same as the interfaces.name that you defined earlier. 5 The name of the SR-IOV interface. This must be the same as the interfaces.name that you defined earlier. 6 The name of the SR-IOV network attachment definition. Apply the virtual machine configuration: USD oc apply -f <vm-sriov.yaml> 1 1 The name of the virtual machine YAML file. 9.19.5. Connecting a virtual machine to a service mesh OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4. 9.19.5.1. Prerequisites You must have installed the Service Mesh Operator and deployed the service mesh control plane . You must have added the namespace where the virtual machine is created to the service mesh member roll . You must use the masquerade binding method for the default pod network. 9.19.5.2. Configuring a virtual machine for the service mesh To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true . Then expose your VM as a service to view your application in the mesh. Prerequisites To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090. 
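Optionally, confirm that the namespace of the virtual machine has been added to the service mesh member roll before you continue. This is a hedged check that assumes the default member roll name and an istio-system control plane namespace; adjust both to match your Service Mesh installation:
$ oc get servicemeshmemberroll default -n istio-system -o yaml
Check that your namespace appears in the member list of the status section.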
Procedure Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation. Example configuration file apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: "true" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk 1 The key/value pair (label) that must be matched to the service selector attribute. 2 The annotation to enable automatic sidecar injection. 3 The binding method (masquerade mode) for use with the default pod network. Apply the VM configuration: USD oc apply -f <vm_name>.yaml 1 1 The name of the virtual machine YAML file. Create a Service object to expose your VM to the service mesh. apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP 1 The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio . Create the service: USD oc create -f <service_name>.yaml 1 1 The name of the service YAML file. 9.19.6. Configuring IP addresses for virtual machines You can configure either dynamically or statically provisioned IP addresses for virtual machines. Prerequisites The virtual machine must connect to an external network . You must have a DHCP server available on the additional network to configure a dynamic IP for the virtual machine. 9.19.6.1. Configuring an IP address for a new virtual machine using cloud-init You can use cloud-init to configure an IP address when you create a virtual machine. The IP address can be dynamically or statically provisioned. Procedure Create a virtual machine configuration and include the cloud-init network details in the spec.volumes.cloudInitNoCloud.networkData field of the virtual machine configuration: To configure a dynamic IP, specify the interface name and the dhcp4 boolean: kind: VirtualMachine spec: ... volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 2 1 The interface name. 2 Uses DHCP to provision an IPv4 address. To configure a static IP, specify the interface name and the IP address: kind: VirtualMachine spec: ... volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2 1 The interface name. 2 The static IP address for the virtual machine. 9.19.7. Viewing the IP address of NICs on a virtual machine You can view the IP address for a network interface controller (NIC) by using the web console or the oc client. The QEMU guest agent displays additional information about the virtual machine's secondary networks. 9.19.7.1. Prerequisites Install the QEMU guest agent on the virtual machine. 9.19.7.2. Viewing the IP address of a virtual machine interface in the CLI The network interface configuration is included in the oc describe vmi <vmi_name> command. 
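If you only need the addresses themselves, a jsonpath query against the virtual machine instance status is a compact alternative. This sketch assumes the QEMU guest agent is installed so that secondary interfaces are reported, as noted in the prerequisites:
$ oc get vmi <vmi_name> -o jsonpath="{.status.interfaces[*].ipAddresses}"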
You can also view the IP address information by running ip addr on the virtual machine, or by running oc get vmi <vmi_name> -o yaml . Procedure Use the oc describe command to display the virtual machine interface configuration: USD oc describe vmi <vmi_name> Example output ... Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa 9.19.7.3. Viewing the IP address of a virtual machine interface in the web console The IP information is displayed on the VirtualMachine details page for the virtual machine. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine name to open the VirtualMachine details page. The information for each attached NIC is displayed under IP Address on the Details tab. 9.19.8. Using a MAC address pool for virtual machines The KubeMacPool component provides a MAC address pool service for virtual machine NICs in a namespace. 9.19.8.1. About KubeMacPool KubeMacPool provides a MAC address pool per namespace and allocates MAC addresses for virtual machine NICs from the pool. This ensures that the NIC is assigned a unique MAC address that does not conflict with the MAC address of another virtual machine. Virtual machine instances created from that virtual machine retain the assigned MAC address across reboots. Note KubeMacPool does not handle virtual machine instances created independently from a virtual machine. KubeMacPool is enabled by default when you install OpenShift Virtualization. You can disable a MAC address pool for a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. Re-enable KubeMacPool for the namespace by removing the label. 9.19.8.2. Disabling a MAC address pool for a namespace in the CLI Disable a MAC address pool for virtual machines in a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. Procedure Add the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. The following example disables KubeMacPool for two namespaces, <namespace1> and <namespace2> : USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore 9.19.8.3. Re-enabling a MAC address pool for a namespace in the CLI If you disabled KubeMacPool for a namespace and want to re-enable it, remove the mutatevirtualmachines.kubemacpool.io=ignore label from the namespace. Note Earlier versions of OpenShift Virtualization used the label mutatevirtualmachines.kubemacpool.io=allocate to enable KubeMacPool for a namespace. This is still supported but redundant as KubeMacPool is now enabled by default. Procedure Remove the KubeMacPool label from the namespace. The following example re-enables KubeMacPool for two namespaces, <namespace1> and <namespace2> : USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io- 9.20. Virtual machine disks 9.20.1. Storage features Use the following table to determine feature availability for local and shared persistent storage in OpenShift Virtualization. 9.20.1.1. OpenShift Virtualization storage feature matrix Table 9.6. 
OpenShift Virtualization storage feature matrix Virtual machine live migration Host-assisted virtual machine disk cloning Storage-assisted virtual machine disk cloning Virtual machine snapshots OpenShift Data Foundation: RBD block-mode volumes Yes Yes Yes Yes OpenShift Virtualization hostpath provisioner No Yes No No Other multi-node writable storage Yes [1] Yes Yes [2] Yes [2] Other single-node writable storage No Yes Yes [2] Yes [2] PVCs must request a ReadWriteMany access mode. Storage provider must support both Kubernetes and CSI snapshot APIs Note You cannot live migrate virtual machines that use: A storage class with ReadWriteOnce (RWO) access mode Passthrough features such as GPUs Do not set the evictionStrategy field to LiveMigrate for these virtual machines. 9.20.2. Configuring local storage for virtual machines You can configure local storage for virtual machines by using the hostpath provisioner (HPP). When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP is a local storage provisioner designed for OpenShift Virtualization that is created by the Hostpath Provisioner Operator. To use the HPP, you must create an HPP custom resource (CR). 9.20.2.1. Creating a hostpath provisioner with a basic storage pool You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a storagePools stanza. The storage pool specifies the name and path used by the CSI driver. Prerequisites The directories specified in spec.storagePools.path must have read/write access. The storage pools must not be in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable. Procedure Create an hpp_cr.yaml file with a storagePools stanza as in the following example: apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: "/var/myvolumes" 2 workload: nodeSelector: kubernetes.io/os: linux 1 The storagePools stanza is an array to which you can add multiple entries. 2 Specify the storage pool directories under this node path. Save the file and exit. Create the HPP by running the following command: USD oc create -f hpp_cr.yaml 9.20.2.1.1. About creating storage classes When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it. In order to use the hostpath provisioner (HPP) you must create an associated storage class for the CSI driver with the storagePools stanza. Note Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the StorageClass value with volumeBindingMode parameter set to WaitForFirstConsumer , the binding and provisioning of the PV is delayed until a pod is created using the PVC. 9.20.2.1.2. 
Creating a storage class for the CSI driver with the storagePools stanza You create a storage class custom resource (CR) for the hostpath provisioner (HPP) CSI driver. Procedure Create a storageclass_csi.yaml file to define the storage class: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3 1 The two possible reclaimPolicy values are Delete and Retain . If you do not specify a value, the default value is Delete . 2 The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. 3 Specify the name of the storage pool defined in the HPP CR. Save the file and exit. Create the StorageClass object by running the following command: USD oc create -f storageclass_csi.yaml 9.20.2.2. About storage pools created with PVC templates If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR). A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation. The PVC template is based on the spec stanza of the PersistentVolumeClaim object: Example PersistentVolumeClaim object apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 1 This value is only required for block volume mode PVs. You define a storage pool using a pvcTemplate specification in the HPP CR. The Operator creates a PVC from the pvcTemplate specification for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes. You can combine basic storage pools with storage pools created from PVC templates. 9.20.2.2.1. Creating a storage pool with a PVC template You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR). Prerequisites The directories specified in spec.storagePools.path must have read/write access. The storage pools must not be in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable. Procedure Create an hpp_pvc_template_pool.yaml file for the HPP CR that specifies a persistent volume (PVC) template in the storagePools stanza according to the following example: apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: "/var/myvolumes" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux 1 The storagePools stanza is an array that can contain both basic and PVC template storage pools. 2 Specify the storage pool directories under this node path. 
3 Optional: The volumeMode parameter can be either Block or Filesystem as long as it matches the provisioned volume format. If no value is specified, the default is Filesystem . If the volumeMode is Block , the mounting pod creates an XFS file system on the block volume before mounting it. 4 If the storageClassName parameter is omitted, the default storage class is used to create PVCs. If you omit storageClassName , ensure that the HPP storage class is not the default storage class. 5 You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request. Save the file and exit. Create the HPP with a storage pool by running the following command: USD oc create -f hpp_pvc_template_pool.yaml Additional resources Customizing the storage profile 9.20.3. Creating data volumes When you create a data volume, the Containerized Data Importer (CDI) creates a persistent volume claim (PVC) and populates the PVC with your data. You can create a data volume as either a standalone resource or by using a dataVolumeTemplate resource in a virtual machine specification. You create a data volume by using either the PVC API or storage APIs. Important When using OpenShift Virtualization with OpenShift Container Platform Container Storage, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks, RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs. To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block . Tip Whenever possible, use the storage API to optimize space allocation and maximize performance. A storage profile is a custom resource that the CDI manages. It provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class. Storage profiles enable you to create data volumes quickly while reducing coding and minimizing potential errors. For recognized storage types, the CDI provides values that optimize the creation of PVCs. However, you can configure automatic settings for a storage class if you customize the storage profile. 9.20.3.1. Creating data volumes using the storage API When you create a data volume using the storage API, the Containerized Data Interface (CDI) optimizes your persistent volume claim (PVC) allocation based on the type of storage supported by your selected storage class. You only have to specify the data volume name, namespace, and the amount of storage that you want to allocate. For example: When using Ceph RBD, accessModes is automatically set to ReadWriteMany , which enables live migration. volumeMode is set to Block to maximize performance. When you are using volumeMode: Filesystem , more space will automatically be requested by the CDI, if required to accommodate file system overhead. In the following YAML, using the storage API requests a data volume with two gigabytes of usable space. The user does not need to know the volumeMode in order to correctly estimate the required persistent volume claim (PVC) size. The CDI chooses the optimal combination of accessModes and volumeMode attributes automatically. 
These optimal values are based on the type of storage or the defaults that you define in your storage profile. If you want to provide custom values, they override the system-calculated values. Example DataVolume definition apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: "<source_namespace>" 3 name: "<my_vm_disk>" 4 storage: 5 resources: requests: storage: 2Gi 6 storageClassName: <storage_class> 7 1 The name of the new data volume. 2 Indicate that the source of the import is an existing persistent volume claim (PVC). 3 The namespace where the source PVC exists. 4 The name of the source PVC. 5 Indicates allocation using the storage API. 6 Specifies the amount of available space that you request for the PVC. 7 Optional: The name of the storage class. If the storage class is not specified, the system default storage class is used. 9.20.3.2. Creating data volumes using the PVC API When you create a data volume using the PVC API, the Containerized Data Interface (CDI) creates the data volume based on what you specify for the following fields: accessModes ( ReadWriteOnce , ReadWriteMany , or ReadOnlyMany ) volumeMode ( Filesystem or Block ) capacity of storage ( 5Gi , for example) In the following YAML, using the PVC API allocates a data volume with a storage capacity of two gigabytes. You specify an access mode of ReadWriteMany to enable live migration. Because you know the values your system can support, you specify Block storage instead of the default, Filesystem . Example DataVolume definition apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: "<source_namespace>" 3 name: "<my_vm_disk>" 4 pvc: 5 accessModes: 6 - ReadWriteMany resources: requests: storage: 2Gi 7 volumeMode: Block 8 storageClassName: <storage_class> 9 1 The name of the new data volume. 2 In the source section, pvc indicates that the source of the import is an existing persistent volume claim (PVC). 3 The namespace where the source PVC exists. 4 The name of the source PVC. 5 Indicates allocation using the PVC API. 6 accessModes is required when using the PVC API. 7 Specifies the amount of space you are requesting for your data volume. 8 Specifies that the destination is a block PVC. 9 Optionally, specify the storage class. If the storage class is not specified, the system default storage class is used. Important When you explicitly allocate a data volume by using the PVC API and you are not using volumeMode: Block , consider file system overhead. File system overhead is the amount of space required by the file system to maintain its metadata. The amount of space required for file system metadata is file system dependent. Failing to account for file system overhead in your storage capacity request can result in an underlying persistent volume claim (PVC) that is not large enough to accommodate your virtual machine disk. If you use the storage API, the CDI will factor in file system overhead and request a larger persistent volume claim (PVC) to ensure that your allocation request is successful. 9.20.3.3. Customizing the storage profile You can specify default parameters by editing the StorageProfile object for the provisioner's storage class. These default parameters only apply to the persistent volume claim (PVC) if they are not configured in the DataVolume object. Prerequisites Ensure that your planned configuration is supported by the storage class and its provider. 
Specifying an incompatible configuration in a storage profile causes volume provisioning to fail. Note An empty status section in a storage profile indicates that a storage provisioner is not recognized by the Containerized Data Interface (CDI). Customizing a storage profile is necessary if you have a storage provisioner that is not recognized by the CDI. In this case, the administrator sets appropriate values in the storage profile to ensure successful allocations. Warning If you create a data volume and omit YAML attributes and these attributes are not defined in the storage profile, then the requested storage will not be allocated and the underlying persistent volume claim (PVC) will not be created. Procedure Edit the storage profile. In this example, the provisioner is not recognized by CDI: USD oc edit -n openshift-cnv storageprofile <storage_class> Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> # ... spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class> Provide the needed attribute values in the storage profile: Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class> 1 The accessModes that you select. 2 The volumeMode that you select. After you save your changes, the selected values appear in the storage profile status element. 9.20.3.3.1. Setting a default cloning strategy using a storage profile You can use storage profiles to set a default cloning method for a storage class, creating a cloning strategy . Setting cloning strategies can be helpful, for example, if your storage vendor only supports certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance. Cloning strategies can be specified by setting the cloneStrategy attribute in a storage profile to one of these values: snapshot - This method is used by default when snapshots are configured. This cloning strategy uses a temporary volume snapshot to clone the volume. The storage provisioner must support CSI snapshots. copy - This method uses a source pod and a target pod to copy data from the source volume to the target volume. Host-assisted cloning is the least efficient method of cloning. csi-clone - This method uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. Unlike snapshot or copy , which are used by default if no storage profile is defined, CSI volume cloning is only used when you specify it in the StorageProfile object for the provisioner's storage class. Note You can also set clone strategies using the CLI without modifying the default claimPropertySets in your YAML spec section. Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class> 1 The accessModes that you select. 2 The volumeMode that you select. 3 The default cloning method of your choice. In this example, CSI volume cloning is specified. 9.20.3.4. 
Additional resources About creating storage classes Overriding the default file system overhead value Cloning a data volume using smart cloning 9.20.4. Reserving PVC space for file system overhead By default, the OpenShift Virtualization reserves space for file system overhead data in persistent volume claims (PVCs) that use the Filesystem volume mode. You can set the percentage to reserve space for this purpose globally and for specific storage classes. 9.20.4.1. How file system overhead affects space for virtual machine disks When you add a virtual machine disk to a persistent volume claim (PVC) that uses the Filesystem volume mode, you must ensure that there is enough space on the PVC for: The virtual machine disk. The space reserved for file system overhead, such as metadata By default, OpenShift Virtualization reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount. You can configure a different overhead value by editing the HCO object. You can change the value globally and you can specify values for specific storage classes. 9.20.4.2. Overriding the default file system overhead value Change the amount of persistent volume claim (PVC) space that the OpenShift Virtualization reserves for file system overhead by editing the spec.filesystemOverhead attribute of the HCO object. Prerequisites Install the OpenShift CLI ( oc ). Procedure Open the HCO object for editing by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Edit the spec.filesystemOverhead fields, populating them with your chosen values: ... spec: filesystemOverhead: global: "<new_global_value>" 1 storageClass: <storage_class_name>: "<new_value_for_this_storage_class>" 2 1 The default file system overhead percentage used for any storage classes that do not already have a set value. For example, global: "0.07" reserves 7% of the PVC for file system overhead. 2 The file system overhead percentage for the specified storage class. For example, mystorageclass: "0.04" changes the default overhead value for PVCs in the mystorageclass storage class to 4%. Save and exit the editor to update the HCO object. Verification View the CDIConfig status and verify your changes by running one of the following commands: To generally verify changes to CDIConfig : USD oc get cdiconfig -o yaml To view your specific changes to CDIConfig : USD oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}' 9.20.5. Configuring CDI to work with namespaces that have a compute resource quota You can use the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions. 9.20.5.1. About CPU and memory quotas in a namespace A resource quota , defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace. The HyperConverged custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of 0 . This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota. 9.20.5.2. 
Overriding CPU and memory defaults Modify the default settings for CPU and memory requests and limits for your use case by adding the spec.resourceRequirements.storageWorkloads stanza to the HyperConverged custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit the HyperConverged CR by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Add the spec.resourceRequirements.storageWorkloads stanza to the CR, setting the values based on your use case. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: "500m" memory: "2Gi" requests: cpu: "250m" memory: "1Gi" Save and exit the editor to update the HyperConverged CR. 9.20.5.3. Additional resources Resource quotas per project 9.20.6. Managing data volume annotations Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods. 9.20.6.1. Example: Data volume annotations This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The v1.multus-cni.io/default-network: bridge-network annotation causes the pod to use the multus network named bridge-network as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the k8s.v1.cni.cncf.io/networks: <network_name> annotation. Multus network annotation example apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: dv-ann annotations: v1.multus-cni.io/default-network: bridge-network 1 spec: source: http: url: "example.exampleurl.com" pvc: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi 1 Multus network annotation 9.20.7. Using preallocation for data volumes The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes. 9.20.7.1. About preallocation The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes. If preallocation is enabled, CDI uses the better preallocation method depending on the underlying file system and device type: fallocate If the file system supports it, CDI uses the operating system's fallocate call to preallocate space by using the posix_fallocate function, which allocates blocks and marks them as uninitialized. full If fallocate mode cannot be used, full mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed. 9.20.7.2. Enabling preallocation for a data volume You can enable preallocation for specific data volumes by including the spec.preallocation field in the data volume manifest. You can enable preallocation mode in either the web console or by using the OpenShift CLI ( oc ). Preallocation mode is supported for all CDI source types. Procedure Specify the spec.preallocation field in the data volume manifest: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 ... pvc: ... preallocation: true 2 1 All CDI source types support preallocation, however preallocation is ignored for cloning operations. 
2 The preallocation field is a boolean that defaults to false. 9.20.8. Uploading local disk images by using the web console You can upload a locally stored disk image file by using the web console. 9.20.8.1. Prerequisites You must have a virtual machine image file in IMG, ISO, or QCOW2 format. If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 9.20.8.2. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 9.20.8.3. Uploading an image file using the web console Use the web console to upload an image file to a new persistent volume claim (PVC). You can later use this PVC to attach the image to new virtual machines. Prerequisites You must have one of the following: A raw virtual machine image file in either ISO or IMG format. A virtual machine image file in QCOW2 format. For best results, compress your image file according to the following guidelines before you upload it: Compress a raw image file by using xz or gzip . Note Using a compressed raw image file results in the most efficient upload. Compress a QCOW2 image file by using the method that is recommended for your client: If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool. If you use a Windows client, compress the QCOW2 file by using xz or gzip . Procedure From the side menu of the web console, click Storage Persistent Volume Claims . Click the Create Persistent Volume Claim drop-down list to expand it. Click With Data Upload Form to open the Upload Data to Persistent Volume Claim page. Click Browse to open the file manager and select the image that you want to upload, or drag the file into the Drag a file here or browse to upload field. Optional: Set this image as the default image for a specific operating system. Select the Attach this data to a virtual machine operating system check box. Select an operating system from the list. The Persistent Volume Claim Name field is automatically filled with a unique name and cannot be edited. Take note of the name assigned to the PVC so that you can identify it later, if necessary. Select a storage class from the Storage Class list. In the Size field, enter the size value for the PVC. Select the corresponding unit of measurement from the drop-down list. Warning The PVC size must be larger than the size of the uncompressed virtual disk. Select an Access Mode that matches the storage class that you selected. Click Upload . 9.20.8.4. Additional resources Configure preallocation mode to improve write performance for data volume operations. 9.20.9. Uploading local disk images by using the virtctl tool You can upload a locally stored disk image to a new or existing data volume by using the virtctl command-line utility. 9.20.9.1. Prerequisites Enable the kubevirt-virtctl package. 
If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 9.20.9.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.20.9.3. Creating an upload data volume You can manually create a data volume with an upload data source to use for uploading local disk images. Procedure Create a data volume configuration that specifies spec: source: upload{} : apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2 1 The name of the data volume. 2 The size of the data volume. Ensure that this value is greater than or equal to the size of the disk that you upload. Create the data volume by running the following command: USD oc create -f <upload-datavolume>.yaml 9.20.9.4. Uploading a local disk image to a data volume You can use the virtctl CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure. Note After you upload a local disk image, you can add it to a virtual machine. Prerequisites You must have one of the following: A raw virtual machine image file in either ISO or IMG format. A virtual machine image file in QCOW2 format. For best results, compress your image file according to the following guidelines before you upload it: Compress a raw image file by using xz or gzip . Note Using a compressed raw image file results in the most efficient upload. Compress a QCOW2 image file by using the method that is recommended for your client: If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool. If you use a Windows client, compress the QCOW2 file by using xz or gzip . The kubevirt-virtctl package must be installed on the client machine. The client machine must be configured to trust the OpenShift Container Platform router's certificate. Procedure Identify the following items: The name of the upload data volume that you want to use. If this data volume does not exist, it is created automatically. The size of the data volume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image. The file location of the virtual machine disk image that you want to upload. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the step. For example: USD virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3 1 The name of the data volume. 2 The size of the data volume. For example: --size=500Mi , --size=1G 3 The file path of the virtual machine disk image. Note If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag. When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk. To allow insecure server connections when using HTTPS, use the --insecure parameter. 
Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified. Optional. To verify that a data volume was created, view all data volumes by running the following command: USD oc get dvs 9.20.9.5. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 9.20.9.6. Additional resources Configure preallocation mode to improve write performance for data volume operations. 9.20.10. Uploading a local disk image to a block storage data volume You can upload a local disk image into a block data volume by using the virtctl command-line utility. In this workflow, you create a local block device to use as a persistent volume, associate this block volume with an upload data volume, and use virtctl to upload the local disk image into the data volume. 9.20.10.1. Prerequisites Enable the kubevirt-virtctl package. If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 9.20.10.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.20.10.3. About block persistent volumes A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification. 9.20.10.4. Creating a local block persistent volume Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image. Procedure Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2Gb (20 100Mb blocks): USD dd if=/dev/zero of=<loop10> bs=100M count=20 Mount the loop10 file as a loop device. USD losetup </dev/loop10> <loop10> 1 2 1 File path where the loop device is mounted. 2 The file created in the step to be mounted as the loop device. Create a PersistentVolume manifest that references the mounted loop device.
kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4 1 The path of the loop device on the node. 2 Specifies it is a block PV. 3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used. 4 The node on which the block device was mounted. Create the block PV. # oc create -f <local-block-pv10.yaml> 1 1 The file name of the persistent volume created in the step. 9.20.10.5. Creating an upload data volume You can manually create a data volume with an upload data source to use for uploading local disk images. Procedure Create a data volume configuration that specifies spec: source: upload{} : apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2 1 The name of the data volume. 2 The size of the data volume. Ensure that this value is greater than or equal to the size of the disk that you upload. Create the data volume by running the following command: USD oc create -f <upload-datavolume>.yaml 9.20.10.6. Uploading a local disk image to a data volume You can use the virtctl CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure. Note After you upload a local disk image, you can add it to a virtual machine. Prerequisites You must have one of the following: A raw virtual machine image file in either ISO or IMG format. A virtual machine image file in QCOW2 format. For best results, compress your image file according to the following guidelines before you upload it: Compress a raw image file by using xz or gzip . Note Using a compressed raw image file results in the most efficient upload. Compress a QCOW2 image file by using the method that is recommended for your client: If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool. If you use a Windows client, compress the QCOW2 file by using xz or gzip . The kubevirt-virtctl package must be installed on the client machine. The client machine must be configured to trust the OpenShift Container Platform router's certificate. Procedure Identify the following items: The name of the upload data volume that you want to use. If this data volume does not exist, it is created automatically. The size of the data volume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image. The file location of the virtual machine disk image that you want to upload. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the step. For example: USD virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3 1 The name of the data volume. 2 The size of the data volume. For example: --size=500Mi , --size=1G 3 The file path of the virtual machine disk image. Note If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag. 
When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk. To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified. Optional. To verify that a data volume was created, view all data volumes by running the following command: USD oc get dvs 9.20.10.7. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 9.20.10.8. Additional resources Configure preallocation mode to improve write performance for data volume operations. 9.20.11. Managing virtual machine snapshots You can create and delete virtual machine (VM) snapshots for VMs, whether the VMs are powered off (offline) or on (online). You can only restore to a powered off (offline) VM. OpenShift Virtualization supports VM snapshots on the following: Red Hat OpenShift Data Foundation Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API Online snapshots have a default time deadline of five minutes ( 5m ) that can be changed, if needed. Important Online snapshots are supported for virtual machines that have hot-plugged virtual disks. However, hot-plugged disks that are not in the virtual machine specification are not included in the snapshot. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 9.20.11.1. About virtual machine snapshots A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a development version. A VM snapshot is created from a VM that is powered off (Stopped state) or powered on (Running state). When taking a snapshot of a running VM, the controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and thaws the file system after the snapshot is taken. The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation. 
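Because snapshot support depends on the Container Storage Interface (CSI) driver, it can be useful to confirm that a VolumeSnapshotClass exists for the same provisioner as the storage class that backs the VM disks. A minimal check, assuming cluster access with the oc CLI; the storage class name is a placeholder:

$ oc get volumesnapshotclass -o custom-columns=NAME:.metadata.name,DRIVER:.driver
$ oc get storageclass <storage_class> -o jsonpath='{.provisioner}'

If a VolumeSnapshotClass reports the same driver as the provisioner of the storage class, disks in that storage class can be included in VM snapshots.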
With the VM snapshots feature, cluster administrators and application developers can: Create a new snapshot List all snapshots attached to a specific VM Restore a VM from a snapshot Delete an existing VM snapshot 9.20.11.1.1. Virtual machine snapshot controller and custom resource definitions (CRDs) The VM snapshot feature introduces three new API objects defined as CRDs for managing snapshots: VirtualMachineSnapshot : Represents a user request to create a snapshot. It contains information about the current state of the VM. VirtualMachineSnapshotContent : Represents a provisioned resource on the cluster (a snapshot). It is created by the VM snapshot controller and contains references to all resources required to restore the VM. VirtualMachineRestore : Represents a user request to restore a VM from a snapshot. The VM snapshot controller binds a VirtualMachineSnapshotContent object with the VirtualMachineSnapshot object for which it was created, with a one-to-one mapping. 9.20.11.2. Installing QEMU guest agent on a Linux virtual machine The qemu-guest-agent is widely available and available by default in Red Hat virtual machines. Install the agent and start the service. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure Access the virtual machine command line through one of the consoles or by SSH. Install the QEMU guest agent on the virtual machine: USD yum install -y qemu-guest-agent Ensure the service is persistent and start it: USD systemctl enable --now qemu-guest-agent 9.20.11.3. Installing QEMU guest agent on a Windows virtual machine For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existing or a new Windows installation. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 9.20.11.3.1. Installing VirtIO drivers on an existing Windows virtual machine Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine. Note This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps. 
Procedure Start the virtual machine and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the virtual machine to complete the driver installation. 9.20.11.3.2. Installing VirtIO drivers during Windows installation Install the VirtIO drivers from the attached SATA CD driver during Windows installation. Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Procedure Start the virtual machine and connect to a graphical console. Begin the Windows installation process. Select the Advanced installation. The storage destination will not be recognized until the driver is loaded. Click Load driver . The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Repeat the two steps for all required drivers. Complete the Windows installation. 9.20.11.4. Creating a virtual machine snapshot in the web console You can create a virtual machine (VM) snapshot by using the web console. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. The VM snapshot only includes disks that meet the following requirements: Must be either a data volume or persistent volume claim Belong to a storage class that supports Container Storage Interface (CSI) volume snapshots Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. If the virtual machine is running, click Actions Stop to power it down. Click the Snapshots tab and then click Take Snapshot . Fill in the Snapshot Name and optional Description fields. Expand Disks included in this Snapshot to see the storage volumes to be included in the snapshot. If your VM has disks that cannot be included in the snapshot and you still wish to proceed, select the I am aware of this warning and wish to proceed checkbox. Click Save . 9.20.11.5. 
Creating a virtual machine snapshot in the CLI You can create a virtual machine (VM) snapshot for an offline or online VM by creating a VirtualMachineSnapshot object. KubeVirt coordinates with the QEMU guest agent to create a snapshot of the online VM. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Prerequisites Ensure that the persistent volume claims (PVCs) are in a storage class that supports Container Storage Interface (CSI) volume snapshots. Install the OpenShift CLI ( oc ). Optional: Power down the VM for which you want to create a snapshot. Procedure Create a YAML file to define a VirtualMachineSnapshot object that specifies the name of the new VirtualMachineSnapshot and the name of the source VM. For example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: my-vmsnapshot 1 spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 1 The name of the new VirtualMachineSnapshot object. 2 The name of the source VM. Create the VirtualMachineSnapshot resource. The snapshot controller creates a VirtualMachineSnapshotContent object, binds it to the VirtualMachineSnapshot and updates the status and readyToUse fields of the VirtualMachineSnapshot object. USD oc create -f <my-vmsnapshot>.yaml Optional: If you are taking an online snapshot, you can use the wait command and monitor the status of the snapshot: Enter the following command: USD oc wait my-vm my-vmsnapshot --for condition=Ready Verify the status of the snapshot: InProgress - The online snapshot operation is still in progress. Succeeded - The online snapshot operation completed successfully. Failed - The online snapshot operation failed. Note Online snapshots have a default time deadline of five minutes ( 5m ). If the snapshot does not complete successfully in five minutes, the status is set to failed . Afterwards, the file system will be thawed and the VM unfrozen but the status remains failed until you delete the failed snapshot image. To change the default time deadline, add the FailureDeadline attribute to the VM snapshot spec with the time designated in minutes ( m ) or in seconds ( s ) that you want to specify before the snapshot operation times out. To set no deadline, you can specify 0 , though this is generally not recommended, as it can result in an unresponsive VM. If you do not specify a unit of time such as m or s , the default is seconds ( s ). Verification Verify that the VirtualMachineSnapshot object is created and bound with VirtualMachineSnapshotContent . The readyToUse flag must be set to true .
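If you only need the flag value, a jsonpath query is a quick alternative to reading the full object. This is a minimal sketch; the snapshot name is a placeholder:

$ oc get vmsnapshot <my-vmsnapshot> -o jsonpath='{.status.readyToUse}'

The command prints true once the snapshot is ready.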
USD oc describe vmsnapshot <my-vmsnapshot> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: "2020-09-30T14:41:51Z" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: "3897" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "False" 1 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "True" 2 type: Ready creationTime: "2020-09-30T14:42:03Z" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4 1 The status field of the Progressing condition specifies if the snapshot is still being created. 2 The status field of the Ready condition specifies if the snapshot creation process is complete. 3 Specifies if the snapshot is ready to be used. 4 Specifies that the snapshot is bound to a VirtualMachineSnapshotContent object created by the snapshot controller. Check the spec:volumeBackups property of the VirtualMachineSnapshotContent resource to verify that the expected PVCs are included in the snapshot. 9.20.11.6. Verifying online snapshot creation with snapshot indications Snapshot indications are contextual information about online virtual machine (VM) snapshot operations. Indications are not available for offline virtual machine (VM) snapshot operations. Indications are helpful in describing details about the online snapshot creation. Prerequisites To view indications, you must have attempted to create an online VM snapshot using the CLI or the web console. Procedure Display the output from the snapshot indications by doing one of the following: For snapshots created with the CLI, view indicator output in the VirtualMachineSnapshot object YAML, in the status field. For snapshots created using the web console, click VirtualMachineSnapshot > Status in the Snapshot details screen. Verify the status of your online VM snapshot: Online indicates that the VM was running during online snapshot creation. NoGuestAgent indicates that the QEMU guest agent was not running during online snapshot creation. The QEMU guest agent could not be used to freeze and thaw the file system, either because the QEMU guest agent was not installed or running or due to another error. 9.20.11.7. Restoring a virtual machine from a snapshot in the web console You can restore a virtual machine (VM) to a configuration represented by a snapshot in the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. If the virtual machine is running, click Actions Stop to power it down. Click the Snapshots tab. The page displays a list of snapshots associated with the virtual machine. Choose one of the following methods to restore a VM snapshot: For the snapshot that you want to use as the source to restore the VM, click Restore . Select a snapshot to open the Snapshot Details screen and click Actions Restore VirtualMachineSnapshot . In the confirmation pop-up window, click Restore to restore the VM to its configuration represented by the snapshot. 9.20.11.8. 
Restoring a virtual machine from a snapshot in the CLI You can restore an existing virtual machine (VM) to a configuration by using a VM snapshot. You can only restore from an offline VM snapshot. Prerequisites Install the OpenShift CLI ( oc ). Power down the VM you want to restore to a state. Procedure Create a YAML file to define a VirtualMachineRestore object that specifies the name of the VM you want to restore and the name of the snapshot to be used as the source. For example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: my-vmrestore 1 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 virtualMachineSnapshotName: my-vmsnapshot 3 1 The name of the new VirtualMachineRestore object. 2 The name of the target VM you want to restore. 3 The name of the VirtualMachineSnapshot object to be used as the source. Create the VirtualMachineRestore resource. The snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content. USD oc create -f <my-vmrestore>.yaml Verification Verify that the VM is restored to the state represented by the snapshot. The complete flag must be set to true . USD oc get vmrestore <my-vmrestore> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: "2020-09-30T14:46:27Z" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: "5512" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "False" 2 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "True" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: "2020-09-30T14:46:28Z" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1 1 Specifies if the process of restoring the VM to the state represented by the snapshot is complete. 2 The status field of the Progressing condition specifies if the VM is still being restored. 3 The status field of the Ready condition specifies if the VM restoration process is complete. 9.20.11.9. Deleting a virtual machine snapshot in the web console You can delete an existing virtual machine snapshot by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Snapshots tab. The page displays a list of snapshots associated with the virtual machine. Click the Options menu of the virtual machine snapshot that you want to delete and select Delete VirtualMachineSnapshot . In the confirmation pop-up window, click Delete to delete the snapshot. 9.20.11.10. 
Deleting a virtual machine snapshot in the CLI You can delete an existing virtual machine (VM) snapshot by deleting the appropriate VirtualMachineSnapshot object. Prerequisites Install the OpenShift CLI ( oc ). Procedure Delete the VirtualMachineSnapshot object. The snapshot controller deletes the VirtualMachineSnapshot along with the associated VirtualMachineSnapshotContent object. USD oc delete vmsnapshot <my-vmsnapshot> Verification Verify that the snapshot is deleted and no longer attached to this VM: USD oc get vmsnapshot 9.20.11.11. Additional resources CSI Volume Snapshots 9.20.12. Moving a local virtual machine disk to a different node Virtual machines that use local volume storage can be moved so that they run on a specific node. You might want to move the virtual machine to a specific node for the following reasons: The current node has limitations to the local storage configuration. The new node is better optimized for the workload of that virtual machine. To move a virtual machine that uses local storage, you must clone the underlying volume by using a data volume. After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new data volume, or add the new data volume to another virtual machine . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . Note Users without the cluster-admin role require additional user permissions to clone volumes across namespaces. 9.20.12.1. Cloning a local volume to another node You can move a virtual machine disk so that it runs on a specific node by cloning the underlying persistent volume claim (PVC). To ensure the virtual machine disk is cloned to the correct node, you must either create a new persistent volume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the data volume. Note The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails. Prerequisites The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk. Procedure Either create a new local PV on the node, or identify a local PV already on the node: Create a local PV that includes the nodeAffinity.nodeSelectorTerms parameters. The following manifest creates a 10Gi local PV on node01 . kind: PersistentVolume apiVersion: v1 metadata: name: <destination-pv> 1 annotations: spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi 2 local: path: /mnt/local-storage/local/disk1 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - node01 4 persistentVolumeReclaimPolicy: Delete storageClassName: local volumeMode: Filesystem 1 The name of the PV. 2 The size of the PV. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC. 3 The mount path on the node. 4 The name of the node where you want to create the PV. Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the nodeAffinity field in its configuration: USD oc get pv <destination-pv> -o yaml The following snippet shows that the PV is on node01 : Example output ... 
spec: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname 1 operator: In values: - node01 2 ... 1 The kubernetes.io/hostname key uses the node hostname to select a node. 2 The hostname of the node. Add a unique label to the PV: USD oc label pv <destination-pv> node=node01 Create a data volume manifest that references the following: The PVC name and namespace of the virtual machine. The label you applied to the PV in the step. The size of the destination PV. apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <clone-datavolume> 1 spec: source: pvc: name: "<source-vm-disk>" 2 namespace: "<source-namespace>" 3 pvc: accessModes: - ReadWriteOnce selector: matchLabels: node: node01 4 resources: requests: storage: <10Gi> 5 1 The name of the new data volume. 2 The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration: spec.volumes.persistentVolumeClaim.claimName . 3 The namespace where the source PVC exists. 4 The label that you applied to the PV in the step. 5 The size of the destination PV. Start the cloning operation by applying the data volume manifest to your cluster: USD oc apply -f <clone-datavolume.yaml> The data volume clones the PVC of the virtual machine into the PV on the specific node. 9.20.13. Expanding virtual storage by adding blank disk images You can increase your storage capacity or create new data partitions by adding blank disk images to OpenShift Virtualization. 9.20.13.1. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.20.13.2. Creating a blank disk image with data volumes You can create a new blank disk image in a persistent volume claim by customizing and deploying a data volume configuration file. Prerequisites At least one available persistent volume. Install the OpenShift CLI ( oc ). Procedure Edit the DataVolume manifest: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: "hostpath" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi Create the blank disk image by running the following command: USD oc create -f <blank-image-datavolume>.yaml 9.20.13.3. Additional resources Configure preallocation mode to improve write performance for data volume operations. 9.20.14. Cloning a data volume using smart-cloning Smart-cloning is a built-in feature of Red Hat OpenShift Data Foundation. Smart-cloning is faster and more efficient than host-assisted cloning. You do not need to perform any action to enable smart-cloning, but you need to ensure your storage environment is compatible with smart-cloning to use this feature. When you create a data volume with a persistent volume claim (PVC) source, you automatically initiate the cloning process. You always receive a clone of the data volume, whether or not your environment supports smart-cloning. However, you will only receive the performance benefits of smart-cloning if your storage provider supports smart-cloning.
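Whichever clone path your storage supports, you can follow the progress of a clone from the CLI. A minimal sketch; the data volume name is a placeholder:

$ oc get dv <cloner-datavolume> -w

The PHASE column typically advances to Succeeded when the clone has completed.
9.20.14.1.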
About smart-cloning When a data volume is smart-cloned, the following occurs: A snapshot of the source persistent volume claim (PVC) is created. A PVC is created from the snapshot. The snapshot is deleted. 9.20.14.2. Cloning a data volume Prerequisites For smart-cloning to occur, the following conditions are required: Your storage provider must support snapshots. The source and target PVCs must be defined to the same storage class. The source and target PVCs share the same volumeMode . The VolumeSnapshotClass object must reference the storage class defined to both the source and target PVCs. Procedure To initiate cloning of a data volume: Create a YAML file for a DataVolume object that specifies the name of the new data volume and the name and namespace of the source PVC. In this example, because you specify the storage API, there is no need to specify accessModes or volumeMode . The optimal values will be calculated for you automatically. apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: "<source-namespace>" 2 name: "<my-favorite-vm-disk>" 3 storage: 4 resources: requests: storage: <2Gi> 5 1 The name of the new data volume. 2 The namespace where the source PVC exists. 3 The name of the source PVC. 4 Specifies allocation with the storage API 5 The size of the new data volume. Start cloning the PVC by creating the data volume: USD oc create -f <cloner-datavolume>.yaml Note Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones. 9.20.14.3. Additional resources Cloning the persistent volume claim of a virtual machine disk into a new data volume Configure preallocation mode to improve write performance for data volume operations. Customizing the storage profile 9.20.15. Creating and using boot sources A boot source contains a bootable operating system (OS) and all of the configuration settings for the OS, such as drivers. You use a boot source to create virtual machine templates with specific configurations. These templates can be used to create any number of available virtual machines. Quick Start tours are available in the OpenShift Container Platform web console to assist you in creating a custom boot source, uploading a boot source, and other tasks. Select Quick Starts from the Help menu to view the Quick Start tours. 9.20.15.1. About virtual machines and boot sources Virtual machines consist of a virtual machine definition and one or more disks that are backed by data volumes. Virtual machine templates enable you to create virtual machines using predefined virtual machine specifications. Every virtual machine template requires a boot source, which is a fully configured virtual machine disk image including configured drivers. Each virtual machine template contains a virtual machine definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source. Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster's default storage class. 
If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the default storage class. To use the boot sources feature, install the latest release of OpenShift Virtualization. The namespace openshift-virtualization-os-images enables the feature and is installed with the OpenShift Virtualization Operator. Once the boot source feature is installed, you can create boot sources, attach them to templates, and create virtual machines from the templates. Define a boot source by using a persistent volume claim (PVC) that is populated by uploading a local file, cloning an existing PVC, importing from a registry, or by URL. Attach a boot source to a virtual machine template by using the web console. After the boot source is attached to a virtual machine template, you create any number of fully configured ready-to-use virtual machines from the template. 9.20.15.2. Importing a RHEL image as a boot source You can import a Red Hat Enterprise Linux (RHEL) image as a boot source by specifying a URL for the image. Prerequisites You must have access to a web page with the operating system image. For example: Download Red Hat Enterprise Linux web page with images. Procedure In the OpenShift Container Platform console, click Virtualization Templates from the side menu. Identify the RHEL template for which you want to configure a boot source and click Add source . In the Add boot source to template window, select URL (creates PVC) from the Boot source type list. Click RHEL download page to access the Red Hat Customer Portal. A list of available installers and images is displayed on the Download Red Hat Enterprise Linux page. Identify the Red Hat Enterprise Linux KVM guest image that you want to download. Right-click Download Now , and copy the URL for the image. In the Add boot source to template window, paste the URL into the Import URL field, and click Save and import . Verification Verify that the template displays a green checkmark in the Boot source column on the Templates page. You can now use this template to create RHEL virtual machines. 9.20.15.3. Adding a boot source for a virtual machine template A boot source can be configured for any virtual machine template that you want to use for creating virtual machines or custom templates. When virtual machine templates are configured with a boot source, they are labeled Source available on the Templates page. After you add a boot source to a template, you can create a new virtual machine from the template. There are four methods for selecting and adding a boot source in the web console: Upload local file (creates PVC) URL (creates PVC) Clone (creates PVC) Registry (creates PVC) Prerequisites To add a boot source, you must be logged in as a user with the os-images.kubevirt.io:edit RBAC role or as an administrator. You do not need special privileges to create a virtual machine from a template with a boot source added. To upload a local file, the operating system image file must exist on your local machine. To import via URL, access to the web server with the operating system image is required. For example: the Red Hat Enterprise Linux web page with images. To clone an existing PVC, access to the project with a PVC is required. To import via registry, access to the container registry is required. Procedure In the OpenShift Container Platform console, click Virtualization Templates from the side menu. 
Click the options menu beside a template and select Edit boot source . Click Add disk . In the Add disk window, select Use this disk as a boot source . Enter the disk name and select a Source , for example, Blank (creates PVC) or Use an existing PVC . Enter a value for Persistent Volume Claim size to specify the PVC size that is adequate for the uncompressed image and any additional space that is required. Select a Type , for example, Disk or CD-ROM . Optional: Click Storage class and select the storage class that is used to create the disk. Typically, this storage class is the default storage class that is created for use by all PVCs. Note Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the default storage class. Optional: Clear Apply optimized StorageProfile settings to edit the access mode or volume mode. Select the appropriate method to save your boot source: Click Save and upload if you uploaded a local file. Click Save and import if you imported content from a URL or the registry. Click Save and clone if you cloned an existing PVC. Your custom virtual machine template with a boot source is listed on the Catalog page. You can use this template to create a virtual machine. 9.20.15.4. Creating a virtual machine from a template with an attached boot source After you add a boot source to a template, you can create a virtual machine from the template. Procedure In the OpenShift Container Platform web console, click Virtualization Catalog in the side menu. Select the updated template and click Quick create VirtualMachine . The VirtualMachine details is displayed with the status Starting . 9.20.15.5. Additional resources Creating virtual machine templates Automatic importing and updating of pre-defined boot sources 9.20.16. Hot plugging virtual disks You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI). 9.20.16.1. About hot plugging virtual disks When you hot plug a virtual disk, you attach a virtual disk to a virtual machine instance while the virtual machine is running. When you hot unplug a virtual disk, you detach a virtual disk from a virtual machine instance while the virtual machine is running. Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot unplugged. You cannot hot plug or hot unplug container disks. After you hot plug a virtual disk, it remains attached until you detach it, even if you restart the virtual machine. 9.20.16.2. About virtio-scsi In OpenShift Virtualization, each virtual machine (VM) has a virtio-scsi controller so that hot plugged disks can use a scsi bus. The virtio-scsi controller overcomes the limitations of virtio while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks. Regular virtio is not available for hot plugged disks because it is not scalable: each virtio disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance, therefore slots might not be available on demand. 9.20.16.3. 
Hot plugging a virtual disk using the CLI Hot plug virtual disks that you want to attach to a virtual machine instance (VMI) while a virtual machine is running. Prerequisites You must have a running virtual machine to hot plug a virtual disk. You must have at least one data volume or persistent volume claim (PVC) available for hot plugging. Procedure Hot plug a virtual disk by running the following command: USD virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> \ [--persist] [--serial=<label-name>] Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances. The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC. 9.20.16.4. Hot unplugging a virtual disk using the CLI Hot unplug virtual disks that you want to detach from a virtual machine instance (VMI) while a virtual machine is running. Prerequisites Your virtual machine must be running. You must have at least one data volume or persistent volume claim (PVC) available and hot plugged. Procedure Hot unplug a virtual disk by running the following command: USD virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> 9.20.16.5. Hot plugging a virtual disk using the web console Hot plug virtual disks that you want to attach to a virtual machine instance (VMI) while a virtual machine is running. When you hot plug a virtual disk, it remains attached to the VMI until you unplug it. Prerequisites You must have a running virtual machine to hot plug a virtual disk. Procedure Click Virtualization VirtualMachines from the side menu. Select the running virtual machine to which you want to hot plug a virtual disk. On the VirtualMachine details page, click the Disks tab. Click Add disk . In the Add disk (hot plugged) window, fill in the information for the virtual disk that you want to hot plug. Click Save . 9.20.16.6. Hot unplugging a virtual disk using the web console Hot unplug virtual disks that you want to detach from a virtual machine instance (VMI) while a virtual machine is running. Prerequisites Your virtual machine must be running with a hot plugged disk attached. Procedure Click Virtualization VirtualMachines from the side menu. Select the running virtual machine with the disk you want to hot unplug to open the VirtualMachine details page. On the Disks tab, click the Options menu of the virtual disk that you want to hot unplug. Click Detach . 9.20.17. Using container disks with virtual machines You can build a virtual machine image into a container disk and store it in your container registry. You can then import the container disk into persistent storage for a virtual machine or attach it directly to the virtual machine for ephemeral storage. Important If you use large container disks, I/O traffic might increase, impacting worker nodes. This can lead to unavailable nodes. You can resolve this by: Pruning DeploymentConfig objects Configuring garbage collection 9.20.17.1. 
About container disks A container disk is a virtual machine image that is stored as a container image in a container image registry. You can use container disks to deliver the same disk images to multiple virtual machines and to create large numbers of virtual machine clones. A container disk can either be imported into a persistent volume claim (PVC) by using a data volume that is attached to a virtual machine, or attached directly to a virtual machine as an ephemeral containerDisk volume. 9.20.17.1.1. Importing a container disk into a PVC by using a data volume Use the Containerized Data Importer (CDI) to import the container disk into a PVC by using a data volume. You can then attach the data volume to a virtual machine for persistent storage. 9.20.17.1.2. Attaching a container disk to a virtual machine as a containerDisk volume A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. When a virtual machine with a containerDisk volume starts, the container image is pulled from the registry and hosted on the node that is hosting the virtual machine. Use containerDisk volumes for read-only file systems such as CD-ROMs or for disposable virtual machines. Important Using containerDisk volumes for read-write file systems is not recommended because the data is temporarily written to local storage on the hosting node. This slows live migration of the virtual machine, such as in the case of node maintenance, because the data must be migrated to the destination node. Additionally, all data is lost if the node loses power or otherwise shuts down unexpectedly. 9.20.17.2. Preparing a container disk for virtual machines You must build a container disk with a virtual machine image and push it to a container registry before it can be used with a virtual machine. You can then either import the container disk into a PVC using a data volume and attach it to a virtual machine, or you can attach the container disk directly to a virtual machine as an ephemeral containerDisk volume. The size of a disk image inside a container disk is limited by the maximum layer size of the registry where the container disk is hosted. Note For Red Hat Quay , you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed. Prerequisites Install podman if it is not already installed. The virtual machine image must be either QCOW2 or RAW format. Procedure Create a Dockerfile to build the virtual machine image into a container image. The virtual machine image must be owned by QEMU, which has a UID of 107 , and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440 . The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result: $ cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF 1 Where <vm_image> is the virtual machine image in either QCOW2 or RAW format. To use a remote virtual machine image, replace <vm_image>.qcow2 with the complete URL for the remote image. Build and tag the container: $ podman build -t <registry>/<container_disk_name>:latest .
Push the container image to the registry: USD podman push <registry>/<container_disk_name>:latest If your container registry does not have TLS you must add it as an insecure registry before you can import container disks into persistent storage. 9.20.17.3. Disabling TLS for a container registry to use as insecure registry You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource. Prerequisites Log in to the cluster as a user with the cluster-admin role. Procedure Edit the HyperConverged custom resource and add a list of insecure registries to the spec.storageImport.insecureRegistries field. apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - "private-registry-example-1:5000" - "private-registry-example-2:5000" 1 Replace the examples in this list with valid registry hostnames. 9.20.17.4. steps Import the container disk into persistent storage for a virtual machine . Create a virtual machine that uses a containerDisk volume for ephemeral storage. 9.20.18. Preparing CDI scratch space 9.20.18.1. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.20.18.2. About scratch space The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts. You can define the storage class that is used to bind the scratch space PVC in the spec.scratchSpaceStorageClass field of the HyperConverged custom resource. If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used. Note CDI requires requesting scratch space with a file volume mode, regardless of the PVC backing the origin data volume. If the origin PVC is backed by block volume mode, you must define a storage class capable of provisioning file volume mode PVCs. Manual provisioning If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod. 9.20.18.3. CDI operations that require scratch space Type Reason Registry imports CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. Upload image QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. HTTP imports of archived images QEMU-IMG does not know how to handle the archive formats CDI supports. 
Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. HTTP imports of authenticated images QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. HTTP imports of custom certificates QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG. 9.20.18.4. Defining a storage class You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the spec.scratchSpaceStorageClass field to the HyperConverged custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit the HyperConverged CR by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Add the spec.scratchSpaceStorageClass field to the CR, setting the value to the name of a storage class that exists in the cluster: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: "<storage_class>" 1 1 If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated. Save and exit your default editor to update the HyperConverged CR. 9.20.18.5. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 9.20.18.6. Additional resources Dynamic provisioning 9.20.19. Re-using persistent volumes To re-use a statically provisioned persistent volume (PV), you must first reclaim the volume. This involves deleting the PV so that the storage configuration can be re-used. 9.20.19.1. About reclaiming statically provisioned persistent volumes When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage. You can then re-use the PV configuration to create a PV with a different name. Statically provisioned PVs must have a reclaim policy of Retain to be reclaimed. If they do not, the PV enters a failed state when the PVC is unbound from the PV. Important The Recycle reclaim policy is deprecated in OpenShift Container Platform 4. 9.20.19.2. Reclaiming statically provisioned persistent volumes Reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage. Reclaiming a statically provisioned PV is dependent on the underlying storage. This procedure provides a general approach that might need to be customized depending on your storage. 
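Before you begin the reclaim procedure that follows, it can help to confirm the overall state of the persistent volume. A statically provisioned PV whose claim has been deleted while its reclaim policy is Retain is reported with the status Released, and it cannot be bound by a new claim until it is deleted and re-created as described in the procedure. The following check is a minimal illustration; the claim, storage class, and age shown in the example output are assumed placeholder values, not values taken from this documentation:
oc get pv <pv_name>
Example output:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
<pv_name> 10Gi RWO Retain Released default/vm-disk local 2d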
Procedure Ensure that the reclaim policy of the PV is set to Retain : Check the reclaim policy of the PV: $ oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy' If the persistentVolumeReclaimPolicy is not set to Retain , edit the reclaim policy with the following command: $ oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' Ensure that no resources are using the PV: $ oc describe pvc <pvc_name> | grep 'Mounted By:' Remove any resources that use the PVC before continuing. Delete the PVC to release the PV: $ oc delete pvc <pvc_name> Optional: Export the PV configuration to a YAML file. If you manually remove the shared storage later in this procedure, you can refer to this configuration. You can also use spec parameters in this file as the basis to create a new PV with the same storage configuration after you reclaim the PV: $ oc get pv <pv_name> -o yaml > <file_name>.yaml Delete the PV: $ oc delete pv <pv_name> Optional: Depending on the storage type, you might need to remove the contents of the shared storage folder: $ rm -rf <path_to_share_storage> Optional: Create a PV that uses the same storage configuration as the deleted PV. If you exported the reclaimed PV configuration earlier, you can use the spec parameters of that file as the basis for a new PV manifest: Note To avoid possible conflict, it is good practice to give the new PV object a different name than the one that you deleted. $ oc create -f <new_pv_name>.yaml Additional resources Configuring local storage for virtual machines The OpenShift Container Platform Storage documentation has more information on Persistent Storage . 9.20.20. Expanding a virtual machine disk You can enlarge the size of a virtual machine's (VM) disk to provide a greater storage capacity by resizing the disk's persistent volume claim (PVC). However, you cannot reduce the size of a VM disk. 9.20.20.1. Enlarging a virtual machine disk VM disk enlargement makes extra space available to the virtual machine. However, it is the responsibility of the VM owner to decide how to consume the storage. If the disk is a Filesystem PVC, the matching file expands to the remaining size while reserving some space for file system overhead. Procedure Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand: $ oc edit pvc <pvc_name> Change the value of the spec.resources.requests.storage attribute to a larger size. apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1 ... 1 The VM disk size that can be increased 9.20.20.2. Additional resources Extending a basic volume in Windows . Extending an existing file system partition without destroying data in Red Hat Enterprise Linux . Extending a logical volume and its file system online in Red Hat Enterprise Linux . 9.20.21. Deleting data volumes You can manually delete a data volume by using the oc command-line interface. Note When you delete a virtual machine, the data volume it uses is automatically deleted. 9.20.21.1. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared. 9.20.21.2.
Listing all data volumes You can list the data volumes in your cluster by using the oc command-line interface. Procedure List all data volumes by running the following command: $ oc get dvs 9.20.21.3. Deleting a data volume You can delete a data volume by using the oc command-line interface (CLI). Prerequisites Identify the name of the data volume that you want to delete. Procedure Delete the data volume by running the following command: $ oc delete dv <datavolume_name> Note This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace. | [
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: app: <vm_name> 1 name: <vm_name> spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <vm_name> spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: <vm_name> spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: <vm_name> name: rootdisk - cloudInitNoCloud: userData: |- #cloud-config user: cloud-user password: '<password>' 2 chpasswd: { expire: False } name: cloudinitdisk",
"oc create -f <vm_manifest_file>.yaml",
"virtctl start <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: RunStrategy: Always 1 template:",
"oc edit <object_type> <object_ID>",
"oc apply <object_type> <object_ID>",
"oc edit vm example",
"disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default",
"oc delete vm <vm_name>",
"oc get vmis -A",
"oc delete vmi <vmi_name>",
"ssh-keygen -f <key_file> 1",
"oc create secret generic my-pub-key --from-file=key1=<key_file>.pub",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: testvm spec: running: true template: spec: accessCredentials: - sshPublicKey: source: secret: secretName: my-pub-key 1 propagationMethod: configDrive: {} 2",
"virtctl ssh -i <key_file> <vm_username>@<vm_name>",
"virtctl scp -i <key_file> <filename> <vm_username>@<vm_name>:",
"virtctl scp -i <key_file> <vm_username@<vm_name>:<filename> .",
"virtctl console <VMI>",
"virtctl vnc <VMI>",
"virtctl vnc <VMI> -v 4",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1",
"apiVersion: v1 kind: Service metadata: name: rdpservice 1 namespace: example-namespace 2 spec: ports: - targetPort: 3389 3 protocol: TCP selector: special: key 4 type: NodePort 5",
"oc create -f <service_name>.yaml",
"oc get service -n example-namespace",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rdpservice NodePort 172.30.232.73 <none> 3389:30000/TCP 5m",
"oc get node <node_name> -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP node01 Ready worker 6d22h v1.24.0 192.168.55.101 <none>",
"%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get vmis -A",
"yum install -y qemu-guest-agent",
"systemctl enable --now qemu-guest-agent",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"oc edit vm <vm-name>",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"oc edit vm <vm-name>",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"oc edit vm <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: {} 1",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1",
"metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2",
"metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname",
"metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value",
"metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3",
"certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s",
"error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration",
"apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2",
"oc create -f <file_name>.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", \"plugins\": [ { \"type\": \"cnv-bridge\", \"bridge\": \"br1\", \"vlan\": 1 1 }, { \"type\": \"cnv-tuning\" 2 } ] }'",
"oc create -f pxe-net-conf.yaml",
"interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1",
"devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2",
"networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf",
"oc create -f vmi-pxe-boot.yaml",
"virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created",
"oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running",
"virtctl vnc vmi-pxe-boot",
"virtctl console vmi-pxe-boot",
"ip addr",
"3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff",
"kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2",
"oc apply -f <virtual_machine>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1",
"apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3",
"oc create -f 100-worker-kernel-arg-iommu.yaml",
"oc get MachineConfig",
"lspci -nnv | grep -i nvidia",
"02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)",
"variant: openshift version: 4.11.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci",
"butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml",
"oc apply -f 100-worker-vfiopci.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s",
"lspci -nnk -d 10de:",
"04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5",
"oc describe node <node_name>",
"Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"",
"oc describe node <node_name>",
"Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1",
"lspci -nnk | grep NVIDIA",
"02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)",
"spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDevicesTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value>",
"permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2",
"oc get USDNODE -o json | jq '.status.allocatable | with_entries(select(.key | startswith(\"nvidia.com/\"))) | with_entries(select(.value != \"0\"))'",
"mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108",
"nvidia-105 nvidia-108 nvidia-217 nvidia-299",
"mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-22 - nvidia-223 - nvidia-224",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3",
"oc create -f 100-worker-kernel-arg-iommu.yaml",
"oc get MachineConfig",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: <.> mediatedDevicesTypes: <.> - nvidia-231 nodeMediatedDeviceTypes: <.> - mediatedDevicesTypes: <.> - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: <.> mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q",
"oc describe node <node_name>",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-1Q name: gpu2",
"lspci -nnk | grep <device_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1",
"oc apply -f <file_name>.yaml",
"lspci | grep watchdog -i",
"echo c > /proc/sysrq-trigger",
"pkill -9 watchdog",
"yum install watchdog",
"#watchdog-device = /dev/watchdog",
"systemctl enable --now watchdog.service",
"oc label --overwrite DataSource rhel8 -n openshift-virtualization-os-images cdi.kubevirt.io/dataImportCron=true 1",
"oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": false}]'",
"oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": true}]'",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel8-image-cron spec: template: spec: storageClassName: <appropriate_class_name>",
"oc patch storageclass <current_default_storage_class> -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}'",
"oc patch storageclass <appropriate_storage_class> -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'",
"oc edit -n openshift-cnv HyperConverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos7-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" 1 spec: schedule: \"0 */12 * * *\" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 10Gi managedDataSource: centos7 4 retentionPolicy: \"None\" 5",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: false name: rhel8-image-cron",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged spec: status: 1 dataImportCronTemplates: 2 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: centos-7-image-cron spec: garbageCollect: Outdated managedDataSource: centos7 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 3 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream8 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:8 storage: resources: requests: storage: 30Gi status: {} status: {} 4",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: \"true\"",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1",
"oc get ns",
"oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>",
"apiVersion: v1 kind: ConfigMap metadata: name: tls-certs data: ca.pem: | -----BEGIN CERTIFICATE----- ... <base64 encoded cert> -----END CERTIFICATE-----",
"apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3",
"oc apply -f endpoint-secret.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi storageClassName: local source: http: 3 url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 4 secretRef: endpoint-secret 5 certConfigMap: \"\" 6 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}",
"oc create -f vm-fedora-datavolume.yaml",
"oc get pods",
"oc describe dv fedora-dv 1",
"virtctl console vm-fedora-datavolume",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3",
"oc apply -f endpoint-secret.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: import-pv-datavolume 1 spec: storageClassName: local 2 source: http: url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 3 secretRef: endpoint-secret 4 storage: volumeMode: Block 5 resources: requests: storage: 10Gi",
"oc create -f import-pv-datavolume.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]",
"oc create -f <datavolume-cloner.yaml> 1",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io",
"oc create -f <datavolume-cloner.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4",
"oc create -f <cloner-datavolume>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: \"source-namespace\" name: \"my-favorite-vm-disk\"",
"oc create -f <vm-clone-datavolumetemplate>.yaml",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 volumeMode: Block 5",
"oc create -f <cloner-datavolume>.yaml",
"kind: VirtualMachine spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}",
"oc create -f <vm-name>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4",
"oc create -f example-vm-ipv6.yaml",
"oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1",
"apiVersion: v1 kind: Service metadata: name: vmservice 1 namespace: example-namespace 2 spec: externalTrafficPolicy: Cluster 3 ports: - nodePort: 30000 4 port: 27017 protocol: TCP targetPort: 22 5 selector: special: key 6 type: NodePort 7",
"oc create -f <service_name>.yaml",
"oc get service -n example-namespace",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice NodePort 172.30.232.73 <none> 27017:30000/TCP 5m",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice LoadBalancer 172.30.27.5 172.29.10.235,172.29.10.235 27017:31829/TCP 5s",
"ssh [email protected] -p 27017",
"ssh fedora@USDNODE_IP -p 30000",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <bridge-network> 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2 spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"<bridge-network>\", 3 \"type\": \"cnv-bridge\", 4 \"bridge\": \"<bridge-interface>\", 5 \"macspoofchk\": true, 6 \"vlan\": 1 7 }'",
"oc create -f <network-attachment-definition.yaml> 1",
"oc get network-attachment-definition <bridge-network>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <example-vm> spec: template: spec: domain: devices: interfaces: - masquerade: {} name: <default> - bridge: {} name: <bridge-net> 1 networks: - name: <default> pod: {} - name: <bridge-net> 2 multus: networkName: <network-namespace>/<a-bridge-network> 3",
"oc apply -f <example-vm.yaml>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14",
"oc create -f <name>-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12",
"oc create -f <name>-sriov-network.yaml",
"oc get net-attach-def -n <namespace>",
"kind: VirtualMachine spec: domain: devices: interfaces: - name: <default> 1 masquerade: {} 2 - name: <nic1> 3 sriov: {} networks: - name: <default> 4 pod: {} - name: <nic1> 5 multus: networkName: <sriov-network> 6",
"oc apply -f <vm-sriov.yaml> 1",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk",
"oc apply -f <vm_name>.yaml 1",
"apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP",
"oc create -f <service_name>.yaml 1",
"kind: VirtualMachine spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 2",
"kind: VirtualMachine spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2",
"oc describe vmi <vmi_name>",
"Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: \"/var/myvolumes\" 2 workload: nodeSelector: kubernetes.io/os: linux",
"oc create -f hpp_cr.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3",
"oc create -f storageclass_csi.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: \"/var/myvolumes\" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux",
"oc create -f hpp_pvc_template_pool.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: \"<source_namespace>\" 3 name: \"<my_vm_disk>\" 4 storage: 5 resources: requests: storage: 2Gi 6 storageClassName: <storage_class> 7",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: \"<source_namespace>\" 3 name: \"<my_vm_disk>\" 4 pvc: 5 accessModes: 6 - ReadWriteMany resources: requests: storage: 2Gi 7 volumeMode: Block 8 storageClassName: <storage_class> 9",
"oc edit -n openshift-cnv storageprofile <storage_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"spec: filesystemOverhead: global: \"<new_global_value>\" 1 storageClass: <storage_class_name>: \"<new_value_for_this_storage_class>\" 2",
"oc get cdiconfig -o yaml",
"oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: dv-ann annotations: v1.multus-cni.io/default-network: bridge-network 1 spec: source: http: url: \"example.exampleurl.com\" pvc: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 pvc: preallocation: true 2",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2",
"oc create -f <upload-datavolume>.yaml",
"virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3",
"oc get dvs",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2",
"oc create -f <upload-datavolume>.yaml",
"virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3",
"oc get dvs",
"yum install -y qemu-guest-agent",
"systemctl enable --now qemu-guest-agent",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: my-vmsnapshot 1 spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2",
"oc create -f <my-vmsnapshot>.yaml",
"oc wait my-vm my-vmsnapshot --for condition=Ready",
"oc describe vmsnapshot <my-vmsnapshot>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: my-vmrestore 1 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 virtualMachineSnapshotName: my-vmsnapshot 3",
"oc create -f <my-vmrestore>.yaml",
"oc get vmrestore <my-vmrestore>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1",
"oc delete vmsnapshot <my-vmsnapshot>",
"oc get vmsnapshot",
"kind: PersistentVolume apiVersion: v1 metadata: name: <destination-pv> 1 annotations: spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi 2 local: path: /mnt/local-storage/local/disk1 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - node01 4 persistentVolumeReclaimPolicy: Delete storageClassName: local volumeMode: Filesystem",
"oc get pv <destination-pv> -o yaml",
"spec: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname 1 operator: In values: - node01 2",
"oc label pv <destination-pv> node=node01",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <clone-datavolume> 1 spec: source: pvc: name: \"<source-vm-disk>\" 2 namespace: \"<source-namespace>\" 3 pvc: accessModes: - ReadWriteOnce selector: matchLabels: node: node01 4 resources: requests: storage: <10Gi> 5",
"oc apply -f <clone-datavolume.yaml>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: \"hostpath\" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"oc create -f <blank-image-datavolume>.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 storage: 4 resources: requests: storage: <2Gi> 5",
"oc create -f <cloner-datavolume>.yaml",
"virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]",
"virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>",
"cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF",
"podman build -t <registry>/<container_disk_name>:latest .",
"podman push <registry>/<container_disk_name>:latest",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1",
"oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'",
"oc patch pv <pv_name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc describe pvc <pvc_name> | grep 'Mounted By:'",
"oc delete pvc <pvc_name>",
"oc get pv <pv_name> -o yaml > <file_name>.yaml",
"oc delete pv <pv_name>",
"rm -rf <path_to_share_storage>",
"oc create -f <new_pv_name>.yaml",
"oc edit pvc <pvc_name>",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1",
"oc get dvs",
"oc delete dv <datavolume_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/virtualization/virtual-machines |
33.7. Modifying Existing Printers | 33.7. Modifying Existing Printers To delete an existing printer, select the printer and click the Delete button on the toolbar. The printer is removed from the printer list once you confirm deletion of the printer configuration. To set the default printer, select the printer from the printer list and click the Make Default Printer button in the Settings tab. 33.7.1. The Settings Tab To change printer driver configuration, click the corresponding name in the Printer list and click the Settings tab. You can modify printer settings such as make and model, make a printer the default, print a test page, change the device location (URI), and more. Figure 33.8. Settings Tab | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-printing-edit |
Chapter 90. Enabling authentication using AD User Principal Names in IdM | Chapter 90. Enabling authentication using AD User Principal Names in IdM 90.1. User principal names in an AD forest trusted by IdM As an Identity Management (IdM) administrator, you can allow AD users to use alternative User Principal Names (UPNs) to access resources in the IdM domain. A UPN is an alternative user login that AD users authenticate with in the format of user_name@KERBEROS-REALM . As an AD administrator, you can set alternative values for both user_name and KERBEROS-REALM , since you can configure both additional Kerberos aliases and UPN suffixes in an AD forest. For example, if a company uses the Kerberos realm AD.EXAMPLE.COM , the default UPN for a user is [email protected] . To allow your users to log in using their email addresses, for example user@ example.com , you can configure EXAMPLE.COM as an alternative UPN in AD. Alternative UPNs (also known as enterprise UPNs ) are especially convenient if your company has recently experienced a merge and you want to provide your users with a unified logon namespace. UPN suffixes are only visible for IdM when defined in the AD forest root. As an AD administrator, you can define UPNs with the Active Directory Domain and Trust utility or the PowerShell command line tool. Note To configure UPN suffixes for users, Red Hat recommends to use tools that perform error validation, such as the Active Directory Domain and Trust utility. Red Hat recommends against configuring UPNs through low-level modifications, such as using ldapmodify commands to set the userPrincipalName attribute for users, because Active Directory does not validate those operations. After you define a new UPN on the AD side, run the ipa trust-fetch-domains command on an IdM server to retrieve the updated UPNs. See Ensuring that AD UPNs are up-to-date in IdM . IdM stores the UPN suffixes for a domain in the multi-value attribute ipaNTAdditionalSuffixes of the subtree cn=trusted_domain_name,cn=ad,cn=trusts,dc=idm,dc=example,dc=com . Additional resources How to script UPN suffix setup in AD forest root How to manually modify AD user entries and bypass any UPN suffix validation Trust controllers and trust agents 90.2. Ensuring that AD UPNs are up-to-date in IdM After you add or remove a User Principal Name (UPN) suffix in a trusted Active Directory (AD) forest, refresh the information for the trusted forest on an IdM server. Prerequisites IdM administrator credentials. Procedure Enter the ipa trust-fetch-domains command. Note that a seemingly empty output is expected: Verification Enter the ipa trust-show command to verify that the server has fetched the new UPN. Specify the name of the AD realm when prompted: The output shows that the example.com UPN suffix is now part of the ad.example.com realm entry. 90.3. Gathering troubleshooting data for AD UPN authentication issues Follow this procedure to gather troubleshooting data about the User Principal Name (UPN) configuration from your Active Directory (AD) environment and your IdM environment. If your AD users are unable to log in using alternate UPNs, you can use this information to narrow your troubleshooting efforts. Prerequisites You must be logged in to an IdM Trust Controller or Trust Agent to retrieve information from an AD domain controller. You need root permissions to modify the following configuration files, and to restart IdM services. Procedure Open the /usr/share/ipa/smb.conf.empty configuration file in a text editor. 
Add the following contents to the file. Save and close the /usr/share/ipa/smb.conf.empty file. Open the /etc/ipa/server.conf configuration file in a text editor. If you do not have that file, create one. Add the following contents to the file. Save and close the /etc/ipa/server.conf file. Restart the Apache webserver service to apply the configuration changes: Retrieve trust information from your AD domain: Review the debugging output and troubleshooting information in the following log files: /var/log/httpd/error_log /var/log/samba/log.* Additional resources Using rpcclient to gather troubleshooting data for AD UPN authentication issues (Red Hat Knowledgebase) | [
"ipa trust-fetch-domains Realm-Name: ad.example.com ------------------------------- No new trust domains were found ------------------------------- ---------------------------- Number of entries returned 0 ----------------------------",
"ipa trust-show Realm-Name: ad.example.com Realm-Name: ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-796215754-1239681026-23416912 Trust direction: One-way trust Trust type: Active Directory domain UPN suffixes: example.com",
"[global] log level = 10",
"[global] debug = True",
"systemctl restart httpd",
"ipa trust-fetch-domains <ad.example.com>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/enabling-authentication-using-ad-user-principal-names-in-idm_configuring-and-managing-idm |
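Chapter 90 above notes that IdM stores the UPN suffixes for a trusted domain in the ipaNTAdditionalSuffixes attribute under cn=trusted_domain_name,cn=ad,cn=trusts. As a rough, non-authoritative sketch, you can inspect that attribute directly with an LDAP search. The server hostname is hypothetical, and the base DN assumes the dc=idm,dc=example,dc=com suffix and the ad.example.com trusted domain used in the examples, so adjust both for your deployment.

# Authenticate as an IdM administrator before querying the directory
kinit admin

# Show the stored UPN suffixes for the trusted AD domain (adjust the hostname and DNs for your realm)
ldapsearch -Y GSSAPI -H ldap://ipaserver.idm.example.com \
    -b "cn=ad.example.com,cn=ad,cn=trusts,dc=idm,dc=example,dc=com" \
    ipaNTAdditionalSuffixes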
Kamelets Reference | Kamelets Reference Red Hat build of Apache Camel K 1.10.9 Kamelets Reference Red Hat build of Apache Camel K Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/index |
Chapter 14. Volume Snapshots | Chapter 14. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. Volume snapshot class allows an administrator to specify different attributes belonging to a volume snapshot object. The OpenShift Data Foundation operator installs default volume snapshot classes depending on the platform in use. The operator owns and controls these default volume snapshot classes and they cannot be deleted or modified. You can create many snapshots of the same persistent volume claim (PVC) but cannot schedule periodic creation of snapshots. For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note Persistent Volume encryption now supports volume snapshots. 14.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 14.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) 
next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 14.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class that is used in that particular volume snapshot must be present. Procedure From Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . From Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage Volume Snapshots and ensure that the deleted volume snapshot is not listed. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_and_allocating_storage_resources/volume-snapshots_rhodf
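The console steps above can also be approximated from the command line. The following sketch is only an illustration: the namespace my-project, the PVC my-pvc, and the snapshot class name are assumptions, so verify the volume snapshot classes that are actually installed in your cluster before using it.

# List the volume snapshot classes installed by the OpenShift Data Foundation operator
oc get volumesnapshotclass

# Create a snapshot of the my-pvc claim (hypothetical names; adjust to your environment)
cat <<EOF | oc apply -n my-project -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-pvc-snapshot
spec:
  volumeSnapshotClassName: ocs-storagecluster-cephfsplugin-snapclass
  source:
    persistentVolumeClaimName: my-pvc
EOF

# Watch the snapshot until the READYTOUSE column shows true
oc get volumesnapshot my-pvc-snapshot -n my-project -w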
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.26/making-open-source-more-inclusive |
Chapter 21. Using early kdump to capture boot time crashes | Chapter 21. Using early kdump to capture boot time crashes Early kdump is a feature of the kdump mechanism that captures the vmcore file if a system or kernel crash occurs during the early phases of the boot process before the system services start. Early kdump loads the crash kernel and its initramfs into memory much earlier in the boot process. A kernel crash can sometimes occur during the early boot phase before the kdump service starts and is able to capture and save the contents of the crashed kernel memory. As a result, crucial information about the crash that is needed for troubleshooting is lost. To address this problem, you can use the early kdump feature, which is a part of the kdump service. 21.1. Enabling early kdump The early kdump feature sets up the crash kernel and the initial RAM disk image ( initramfs ) to load early enough to capture the vmcore information for an early crash. This helps to eliminate the risk of losing information about early boot kernel crashes. Prerequisites An active RHEL subscription. A repository containing the kexec-tools package for your system CPU architecture. Fulfilled kdump configuration and targets requirements. For more information, see Supported kdump configurations and targets . Procedure Verify that the kdump service is enabled and active: If kdump is not enabled and running, set all required configurations and verify that the kdump service is enabled. Rebuild the initramfs image of the booting kernel with the early kdump functionality: Add the rd.earlykdump kernel command line parameter: Reboot the system to reflect the changes: Verification Verify that rd.earlykdump is successfully added and the early kdump feature is enabled: Additional resources The /usr/share/doc/kexec-tools/early-kdump-howto.txt file What is early kdump support and how do I configure it? (Red Hat Knowledgebase) | [
"systemctl is-enabled kdump.service && systemctl is-active kdump.service enabled active",
"dracut -f --add earlykdump",
"grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"rd.earlykdump\"",
"reboot",
"cat /proc/cmdline BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-187.el8.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet rd.earlykdump journalctl -x | grep early-kdump Mar 20 15:44:41 redhat dracut-cmdline[304]: early-kdump is enabled. Mar 20 15:44:42 redhat dracut-cmdline[304]: kexec: loaded early-kdump kernel"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/using-early-kdump-to-capture-boot-time-crashes_managing-monitoring-and-updating-the-kernel |
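As an additional, optional check beyond the verification step above, you can confirm that the rebuilt initramfs actually contains the early kdump dracut module before rebooting. This is a best-effort sketch; the exact module listing can differ between RHEL minor releases.

# List the contents of the current initramfs and look for the earlykdump dracut module
lsinitrd /boot/initramfs-$(uname -r).img | grep -i earlykdump

# Confirm that the running kernel command line carries the rd.earlykdump parameter
grep -o 'rd.earlykdump' /proc/cmdline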
Preface | Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/kamelets_reference_for_red_hat_build_of_apache_camel_for_quarkus/pr01
Chapter 1. Introduction | Chapter 1. Introduction 1.1. About the MTA plugin for Eclipse You can migrate and modernize applications by using the Migration Toolkit for Applications (MTA) plugin for Eclipse. The MTA plugin analyzes your projects using customizable rulesets, marks issues in the source code, provides guidance to fix the issues, and offers automatic code replacement, if possible. 1.2. About the Migration Toolkit for Applications What is the Migration Toolkit for Applications? Migration Toolkit for Applications (MTA) accelerates large-scale application modernization efforts across hybrid cloud environments on Red Hat OpenShift. This solution provides insight throughout the adoption process, at both the portfolio and application levels: inventory, assess, analyze, and manage applications for faster migration to OpenShift via the user interface. In MTA 7.1 and later, when you add an application to the Application Inventory , MTA automatically creates and executes language and technology discovery tasks. Language discovery identifies the programming languages used in the application. Technology discovery identifies technologies, such as Enterprise Java Beans (EJB), Spring, etc. Then, each task assigns appropriate tags to the application, reducing the time and effort you spend manually tagging the application. MTA uses an extensive default questionnaire as the basis for assessing your applications, or you can create your own custom questionnaire, enabling you to estimate the difficulty, time, and other resources needed to prepare an application for containerization. You can use the results of an assessment as the basis for discussions between stakeholders to determine which applications are good candidates for containerization, which require significant work first, and which are not suitable for containerization. MTA analyzes applications by applying one or more rulesets to each application considered to determine which specific lines of that application must be modified before it can be modernized. MTA examines application artifacts, including project source directories and application archives, and then produces an HTML report highlighting areas needing changes. How does the Migration Toolkit for Applications simplify migration? The Migration Toolkit for Applications looks for common resources and known trouble spots when migrating applications. It provides a high-level view of the technologies used by the application. MTA generates a detailed report evaluating a migration or modernization path. This report can help you to estimate the effort required for large-scale projects and to reduce the work involved. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/eclipse_plugin_guide/introduction_eclipse-code-ready-studio-guide |
8.16. Kdump | 8.16. Kdump Use this screen to select whether to use Kdump on this system. Kdump is a kernel crash dumping mechanism which, in the event of a system crash, captures information that can be invaluable in determining the cause of the crash. Note that if you enable Kdump , you must reserve a certain amount of system memory for it. As a result, less memory is available for your processes. If you do not want to use Kdump on this system, uncheck Enable kdump . Otherwise, set the amount of memory to reserve for Kdump . You can let the installer reserve a reasonable amount automatically, or you can set any amount manually. When you are satisfied with the settings, click Done to save the configuration and return to the previous screen. Figure 8.37. Kdump Enablement and Configuration | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-kdump-x86
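After the installation completes, it can be useful to confirm from a shell that the memory reservation chosen on this screen took effect. The commands below are a minimal sketch and assume a default RHEL 7 installation.

# Check that a crashkernel reservation was passed to the booted kernel
grep -o 'crashkernel=[^ ]*' /proc/cmdline

# Confirm that the kdump service started successfully with that reservation
systemctl status kdump.service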
Chapter 1. Red Hat JBoss Web Server for OpenShift | Chapter 1. Red Hat JBoss Web Server for OpenShift The Apache Tomcat 10 component of Red Hat JBoss Web Server (JWS) 6.0 is available as a containerized image that is designed for Red Hat OpenShift. You can use this image to build, scale, and test Java web applications for deployment across hybrid cloud environments. 1.1. Differences between Red Hat JBoss Web Server and JWS for OpenShift JWS for OpenShift images are different from a regular release of Red Hat JBoss Web Server. Consider the following differences between the JWS for OpenShift images and a standard JBoss Web Server deployment: In a JWS for OpenShift image, the /opt/jws-6.0/ directory is the location of JWS_HOME . In a JWS for OpenShift deployment, all load balancing is handled by the OpenShift router rather than the JBoss Core Services mod_cluster connector or mod_jk connector. Additional resources Red Hat JBoss Web Server documentation 1.2. OpenShift image version compatibility and support OpenShift images are tested with different operating system versions, configurations, and interface points that represent the most common combination of technologies that Red Hat OpenShift Container Platform customers are using. Additional resources OpenShift Container Platform Tested 3.X Integrations page OpenShift Container Platform Tested 4.X Integrations page 1.3. Supported architectures for JBoss Web Server JBoss Web Server supports the following architectures: AMD64 (x86_64) IBM Z (s390x) in the OpenShift environment IBM Power (ppc64le) in the OpenShift environment ARM64 (aarch64) in the OpenShift environment You can use the JBoss Web Server image for OpenJDK 17 with all supported architectures. For more information about images, see the Red Hat Container Catalog . Additional resources Red Hat Container Catalog 1.4. Health checks for Red Hat container images All OpenShift Container Platform images have a health rating associated with them. You can find the health rating for Red Hat JBoss Web Server by navigating to the Certified container images page, searching for JBoss Web Server, and selecting the 6.0 version. You can also perform health checks on an OpenShift container to test the container for liveness and readiness. Additional resources Monitoring application health by using health checks 1.5. Additional resources Support of Red Hat Middleware products and components on Red Hat OpenShift | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_for_openshift/introduction
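The JWS_HOME difference called out above is easy to confirm on a running deployment. The following sketch assumes you already have a JWS for OpenShift pod running in the current project; the pod name shown is hypothetical and must be replaced with a name from your own output.

# Find the running JWS for OpenShift pods in the current project
oc get pods

# Open a remote shell in one of the pods and list the JWS_HOME directory
oc rsh jws-app-1-abcde ls /opt/jws-6.0/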
Chapter 4. Registering the system for updates using GNOME | Chapter 4. Registering the system for updates using GNOME You must register your system to receive software updates. This section explains how you can register your system using GNOME. Prerequisites A valid account with the Red Hat Customer Portal. See the Create a Red Hat Login page for new user registration. An activation key or keys, if you are registering the system with an activation key. A registration server, if you are registering the system using a registration server. 4.1. Registering a system using an activation key on GNOME Follow the steps in this procedure to register your system with an activation key. You can get the activation key from your organization administrator. Prerequisites Activation key or keys. See the Activation Keys page for creating new activation keys. Procedure Open the system menu , which is accessible from the upper-right screen corner, and click the Settings icon. In the Details About section, click Register . Select Registration Server . If you are not using the Red Hat server, enter the server address in the URL field. In the Registration Type menu, select Activation Keys . Under Registration Details : Enter your activation keys in the Activation Keys field. Separate your keys by a comma ( , ). Enter the name or ID of your organization in the Organization field. Click Register . 4.2. Unregistering the system using GNOME Follow the steps in this procedure to unregister your system. After unregistering, your system no longer receives software updates. Procedure Open the system menu , which is accessible from the upper-right screen corner, and click the Settings icon. In the Details About section, click Details . The Registration Details screen appears. Click Unregister . A warning appears about the impact of unregistering the system. Click Unregister . 4.3. Additional resources Registering the system and managing subscriptions Creating Red Hat Customer Portal Activation Keys (Red Hat Knowledgebase) Creating and managing activation keys Registering Systems with Activation keys (Red Hat Knowledgebase) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/registering-the-system-for-updates-using-gnome_using-the-desktop-environment-in-rhel-8
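If you prefer not to use the GNOME Settings dialog, the same activation-key registration described above can usually be performed with subscription-manager on the command line. Treat this as a sketch: the key name and organization ID below are placeholders that you must replace with your own values.

# Register the system with an activation key and organization ID (placeholder values)
subscription-manager register --activationkey=my-activation-key --org=1234567

# Unregister the system again if required
subscription-manager unregister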
Chapter 13. Red Hat Quay quota management and enforcement overview | Chapter 13. Red Hat Quay quota management and enforcement overview With Red Hat Quay, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. On-premise Red Hat Quay users are now equipped with the following capabilities to manage the capacity limits of their environment: Quota reporting: With this feature, a superuser can track the storage consumption of all their organizations. Additionally, users can track the storage consumption of their assigned organization. Quota management: With this feature, a superuser can define soft and hard checks for Red Hat Quay users. Soft checks tell users if the storage consumption of an organization reaches their configured threshold. Hard checks prevent users from pushing to the registry when storage consumption reaches the configured limit. Together, these features allow service owners of a Red Hat Quay registry to define service level agreements and support a healthy resource budget. 13.1. Quota management architecture With the quota management feature enabled, individual blob sizes are summed at the repository and namespace level. For example, if two tags in the same repository reference the same blob, the size of that blob is only counted once towards the repository total. Additionally, manifest list totals are counted toward the repository total. Important Because manifest list totals are counted toward the repository total, the total quota consumed when upgrading from a version of Red Hat Quay might be reportedly differently in Red Hat Quay 3.9. In some cases, the new total might go over a repository's previously-set limit. Red Hat Quay administrators might have to adjust the allotted quota of a repository to account for these changes. The quota management feature works by calculating the size of existing repositories and namespace with a backfill worker, and then adding or subtracting from the total for every image that is pushed or garbage collected afterwords. Additionally, the subtraction from the total happens when the manifest is garbage collected. Note Because subtraction occurs from the total when the manifest is garbage collected, there is a delay in the size calculation until it is able to be garbage collected. For more information about garbage collection, see Red Hat Quay garbage collection . The following database tables hold the quota repository size, quota namespace size, and quota registry size, in bytes, of a Red Hat Quay repository within an organization: QuotaRepositorySize QuotaNameSpaceSize QuotaRegistrySize The organization size is calculated by the backfill worker to ensure that it is not duplicated. When an image push is initialized, the user's organization storage is validated to check if it is beyond the configured quota limits. If an image push exceeds defined quota limitations, a soft or hard check occurs: For a soft check, users are notified. For a hard check, the push is stopped. If storage consumption is within configured quota limits, the push is allowed to proceed. Image manifest deletion follows a similar flow, whereby the links between associated image tags and the manifest are deleted. Additionally, after the image manifest is deleted, the repository size is recalculated and updated in the QuotaRepositorySize , QuotaNameSpaceSize , and QuotaRegistrySize tables. 13.2. Quota management limitations Quota management helps organizations to maintain resource consumption. 
One limitation of quota management is that calculating resource consumption on push results in the calculation becoming part of the push's critical path. Without this, usage data might drift. The maximum storage quota size is dependent on the selected database: Table 13.1. Worker count environment variables Variable Description Postgres 8388608 TB MySQL 8388608 TB SQL Server 16777216 TB 13.3. Quota management for Red Hat Quay 3.9 If you are upgrading to Red Hat Quay 3.9, you must reconfigure the quota management feature. This is because with Red Hat Quay 3.9, calculation is done differently. As a result, totals prior to Red Hat Quay 3.9 are no longer valid. There are two methods for configuring quota management in Red Hat Quay 3.9, which are detailed in the following sections. Note This is a one time calculation that must be done after you have upgraded to Red Hat Quay 3.9. Superuser privileges are required to create, update and delete quotas. While quotas can be set for users as well as organizations, you cannot reconfigure the user quota using the Red Hat Quay UI and you must use the API instead. 13.3.1. Option A: Configuring quota management for Red Hat Quay 3.9 by adjusting the QUOTA_TOTAL_DELAY feature flag Use the following procedure to recalculate Red Hat Quay 3.9 quota management by adjusting the QUOTA_TOTAL_DELAY feature flag. Note With this recalculation option, the totals appear as 0.00 KB until the allotted time designated for QUOTA_TOTAL_DELAY . Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged into Red Hat Quay 3.9 as a superuser. Procedure Deploy Red Hat Quay 3.9 with the following config.yaml settings: FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 1 RESET_CHILD_MANIFEST_EXPIRATION: true 1 The QUOTA_TOTAL_DELAY_SECONDS flag defaults to 1800 seconds, or 30 minutes. This allows Red Hat Quay 3.9 to successfully deploy before the quota management feature begins calculating storage consumption for every blob that has been pushed. Setting this flag to a lower number might result in miscalculation; it must be set to a number that is greater than the time it takes your Red Hat Quay deployment to start. 1800 is the recommended setting, however larger deployments that take longer than 30 minutes to start might require a longer duration than 1800 . Navigate to the Red Hat Quay UI and click the name of your Organization. The Total Quota Consumed should read 0.00 KB . Additionally, the Backfill Queued indicator should be present. After the allotted time, for example, 30 minutes, refresh your Red Hat Quay deployment page and return to your Organization. Now, the Total Quota Consumed should be present. 13.3.2. Option B: Configuring quota management for Red Hat Quay 3.9 by setting QUOTA_TOTAL_DELAY_SECONDS to 0 Use the following procedure to recalculate Red Hat Quay 3.9 quota management by setting QUOTA_TOTAL_DELAY_SECONDS to 0 . Note Using this option prevents the possibility of miscalculations, however is more time intensive. Use the following procedure for when your Red Hat Quay deployment swaps the FEATURE_QUOTA_MANAGEMENT parameter from false to true . Most users will find xref: Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged into Red Hat Quay 3.9 as a superuser. 
Procedure Deploy Red Hat Quay 3.9 with the following config.yaml settings: FEATURE_GARBAGE_COLLECTION: true FEATURE_QUOTA_MANAGEMENT: true QUOTA_BACKFILL: false QUOTA_TOTAL_DELAY_SECONDS: 0 PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true Navigate to the Red Hat Quay UI and click the name of your Organization. The Total Quota Consumed should read 0.00 KB . Redeploy Red Hat Quay and set the QUOTA_BACKFILL flag set to true . For example: QUOTA_BACKFILL: true Note If you choose to disable quota management after it has calculated totals, Red Hat Quay marks those totals as stale. If you re-enable the quota management feature again in the future, those namespaces and repositories are recalculated by the backfill worker. 13.4. Testing quota management for Red Hat Quay 3.9 With quota management configured for Red Hat Quay 3.9, duplicative images are now only counted once towards the repository total. Use the following procedure to test that a duplicative image is only counted once toward the repository total. Prerequisites You have configured quota management for Red Hat Quay 3.9. Procedure Pull a sample image, for example, ubuntu:18.04 , by entering the following command: USD podman pull ubuntu:18.04 Tag the same image twice by entering the following command: USD podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag1 USD podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag2 Push the sample image to your organization by entering the following commands: USD podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag1 USD podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag2 On the Red Hat Quay UI, navigate to Organization and click the Repository Name , for example, quota-test/ubuntu . Then, click Tags . There should be two repository tags, tag1 and tag2 , each with the same manifest. For example: However, by clicking on the Organization link, we can see that the Total Quota Consumed is 27.94 MB , meaning that the Ubuntu image has only been accounted for once: If you delete one of the Ubuntu tags, the Total Quota Consumed remains the same. Note If you have configured the Red Hat Quay time machine to be longer than 0 seconds, subtraction will not happen until those tags pass the time machine window. If you want to expedite permanent deletion, see Permanently deleting an image tag in Red Hat Quay 3.9. 13.5. Setting default quota To specify a system-wide default storage quota that is applied to every organization and user, you can use the DEFAULT_SYSTEM_REJECT_QUOTA_BYTES configuration flag. If you configure a specific quota for an organization or user, and then delete that quota, the system-wide default quota will apply if one has been set. Similarly, if you have configured a specific quota for an organization or user, and then modify the system-wide default quota, the updated system-wide default will override any specific settings. For more information about the DEFAULT_SYSTEM_REJECT_QUOTA_BYTES flag, see link: 13.6. Establishing quota in Red Hat Quay UI The following procedure describes how you can report storage consumption and establish storage quota limits. Prerequisites A Red Hat Quay registry. A superuser account. Enough storage to meet the demands of quota limitations. Procedure Create a new organization or choose an existing one. 
Initially, no quota is configured, as can be seen on the Organization Settings tab: Log in to the registry as a superuser and navigate to the Manage Organizations tab on the Super User Admin Panel . Click the Options icon of the organization for which you want to create storage quota limits: Click Configure Quota and enter the initial quota, for example, 10 MB . Then click Apply and Close : Check that the quota consumed shows 0 of 10 MB on the Manage Organizations tab of the superuser panel: The consumed quota information is also available directly on the Organization page: Initial consumed quota To increase the quota to 100MB, navigate to the Manage Organizations tab on the superuser panel. Click the Options icon and select Configure Quota , setting the quota to 100 MB. Click Apply and then Close : Pull a sample image by entering the following command: USD podman pull ubuntu:18.04 Tag the sample image by entering the following command: USD podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 Push the sample image to the organization by entering the following command: USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 On the superuser panel, the quota consumed per organization is displayed: The Organization page shows the total proportion of the quota used by the image: Total Quota Consumed for first image Pull a second sample image by entering the following command: USD podman pull nginx Tag the second image by entering the following command: USD podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx Push the second image to the organization by entering the following command: USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx The Organization page shows the total proportion of the quota used by each repository in that organization: Total Quota Consumed for each repository Create reject and warning limits: From the superuser panel, navigate to the Manage Organizations tab. Click the Options icon for the organization and select Configure Quota . In the Quota Policy section, with the Action type set to Reject , set the Quota Threshold to 80 and click Add Limit : To create a warning limit, select Warning as the Action type, set the Quota Threshold to 70 and click Add Limit : Click Close on the quota popup. The limits are viewable, but not editable, on the Settings tab of the Organization page: Push an image where the reject limit is exceeded: Because the reject limit (80%) has been set to below the current repository size (~83%), the pushed image is rejected automatically. Sample image push USD podman pull ubuntu:20.04 USD podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 Sample output when quota exceeded Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). 
Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace When limits are exceeded, notifications are displayed in the UI: Quota notifications 13.7. Establishing quota with the Red Hat Quay API When an organization is first created, it does not have a quota applied. Use the /api/v1/organization/{organization}/quota endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output [] 13.7.1. Setting the quota To set a quota for an organization, POST data to the /api/v1/organization/{orgname}/quota endpoint: .Sample command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"limit_bytes": 10485760}' https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg/quota | jq Sample output "Created" 13.7.2. Viewing the quota To see the applied quota, GET data from the /api/v1/organization/{orgname}/quota endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output [ { "id": 1, "limit_bytes": 10485760, "default_config": false, "limits": [], "default_config_exists": false } ] 13.7.3. 
Modifying the quota To change the existing quota, in this instance from 10 MB to 100 MB, PUT data to the /api/v1/organization/{orgname}/quota/{quota_id} endpoint: Sample command USD curl -k -X PUT -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"limit_bytes": 104857600}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1 | jq Sample output { "id": 1, "limit_bytes": 104857600, "default_config": false, "limits": [], "default_config_exists": false } 13.7.4. Pushing images To see the storage consumed, push various images to the organization. 13.7.4.1. Pushing ubuntu:18.04 Push ubuntu:18.04 to the organization from the command line: Sample commands USD podman pull ubuntu:18.04 USD podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 13.7.4.2. Using the API to view quota usage To view the storage consumed, GET data from the /api/v1/repository endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true"a=true' | jq Sample output { "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false } ] } 13.7.4.3. Pushing another image Pull, tag, and push a second image, for example, nginx : Sample commands USD podman pull nginx USD podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx To view the quota report for the repositories in the organization, use the /api/v1/repository endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true"a=true' Sample output { "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false }, { "namespace": "testorg", "name": "nginx", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 59231659, "configured_quota": 104857600 }, "last_modified": 1651229507, "popularity": 0, "is_starred": false } ] } To view the quota information in the organization details, use the /api/v1/organization/{orgname} endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq Sample output { "name": "testorg", ... "quotas": [ { "id": 1, "limit_bytes": 104857600, "limits": [] } ], "quota_report": { "quota_bytes": 87190725, "configured_quota": 104857600 } } 13.7.5. 
Rejecting pushes using quota limits If an image push exceeds defined quota limitations, a soft or hard check occurs: For a soft check, or warning , users are notified. For a hard check, or reject , the push is terminated. 13.7.5.1. Setting reject and warning limits To set reject and warning limits, POST data to the /api/v1/organization/{orgname}/quota/{quota_id}/limit endpoint: Sample reject limit command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Reject","threshold_percent":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit Sample warning limit command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Warning","threshold_percent":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit 13.7.5.2. Viewing reject and warning limits To view the reject and warning limits, use the /api/v1/organization/{orgname}/quota endpoint: View quota limits USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output for quota limits [ { "id": 1, "limit_bytes": 104857600, "default_config": false, "limits": [ { "id": 2, "type": "Warning", "limit_percent": 50 }, { "id": 1, "type": "Reject", "limit_percent": 80 } ], "default_config_exists": false } ] 13.7.5.3. Pushing an image when the reject limit is exceeded In this example, the reject limit (80%) has been set to below the current repository size (~83%), so the push should automatically be rejected. Push a sample image to the organization from the command line: Sample image push USD podman pull ubuntu:20.04 USD podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 Sample output when quota exceeded Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). 
Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace 13.7.5.4. Notifications for limits exceeded When limits are exceeded, a notification appears: Quota notifications 13.8. Calculating the total registry size in Red Hat Quay 3.9 Use the following procedure to queue a registry total calculation. Note This feature is done on-demand, and calculating a registry total is database intensive. Use with caution. Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged in as a Red Hat Quay superuser. Procedure On the Red Hat Quay UI, click your username Super User Admin Panel . In the navigation pane, click Manage Organizations . Click Calculate , to Total Registry Size: 0.00 KB, Updated: Never , Calculation required . Then, click Ok . After a few minutes, depending on the size of your registry, refresh the page. Now, the Total Registry Size should be calculated. For example: 13.9. Permanently deleting an image tag In some cases, users might want to delete an image tag outside of the time machine window. Use the following procedure to manually delete an image tag permanently. Important The results of the following procedure cannot be undone. Use with caution. 13.9.1. Permanently deleting an image tag using the Red Hat Quay v2 UI Use the following procedure to permanently delete an image tag using the Red Hat Quay v2 UI. Prerequisites You have set FEATURE_UI_V2 to true in your config.yaml file. Procedure Ensure that the PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION parameters are set to true in your config.yaml file. For example: PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true In the navigation pane, click Repositories . Click the name of the repository, for example, quayadmin/busybox . Check the box of the image tag that will be deleted, for example, test . Click Actions Permanently Delete . Important This action is permanent and cannot be undone. 13.9.2. Permanently deleting an image tag using the Red Hat Quay legacy UI Use the following procedure to permanently delete an image tag using the Red Hat Quay legacy UI. Procedure Ensure that the PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION parameters are set to true in your config.yaml file. For example: PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true On the Red Hat Quay UI, click Repositories and the name of the repository that contains the image tag you will delete, for example, quayadmin/busybox . In the navigation pane, click Tags . Check the box of the name of the tag you want to delete, for example, test . Click the Actions drop down menu and select Delete Tags Delete Tag . Click Tag History in the navigation pane. On the name of the tag that was just deleted, for example, test , click Delete test under the Permanently Delete category. For example: Permanently delete image tag Important This action is permanent and cannot be undone. | [
"FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 1 RESET_CHILD_MANIFEST_EXPIRATION: true",
"FEATURE_GARBAGE_COLLECTION: true FEATURE_QUOTA_MANAGEMENT: true QUOTA_BACKFILL: false QUOTA_TOTAL_DELAY_SECONDS: 0 PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true",
"QUOTA_BACKFILL: true",
"podman pull ubuntu:18.04",
"podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag1",
"podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag2",
"podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag1",
"podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag2",
"podman pull ubuntu:18.04",
"podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"podman pull nginx",
"podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04",
"Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[]",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"limit_bytes\": 10485760}' https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg/quota | jq",
"\"Created\"",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[ { \"id\": 1, \"limit_bytes\": 10485760, \"default_config\": false, \"limits\": [], \"default_config_exists\": false } ]",
"curl -k -X PUT -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"limit_bytes\": 104857600}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1 | jq",
"{ \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [], \"default_config_exists\": false }",
"podman pull ubuntu:18.04 podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true"a=true' | jq",
"{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false } ] }",
"podman pull nginx podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true"a=true'",
"{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false }, { \"namespace\": \"testorg\", \"name\": \"nginx\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 59231659, \"configured_quota\": 104857600 }, \"last_modified\": 1651229507, \"popularity\": 0, \"is_starred\": false } ] }",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq",
"{ \"name\": \"testorg\", \"quotas\": [ { \"id\": 1, \"limit_bytes\": 104857600, \"limits\": [] } ], \"quota_report\": { \"quota_bytes\": 87190725, \"configured_quota\": 104857600 } }",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Reject\",\"threshold_percent\":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Warning\",\"threshold_percent\":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[ { \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [ { \"id\": 2, \"type\": \"Warning\", \"limit_percent\": 50 }, { \"id\": 1, \"type\": \"Reject\", \"limit_percent\": 80 } ], \"default_config_exists\": false } ]",
"podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04",
"Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace",
"PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true",
"PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/red-hat-quay-quota-management-and-enforcement |
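The curl and podman examples above show the organization quota being read, raised, and then enforced through Warning (50%) and Reject (80%) limits. The following is a minimal shell sketch, not taken from the product documentation, that combines the same endpoints to report how much of an organization's quota is currently consumed. QUAY_HOST, QUAY_TOKEN, and ORG are placeholders that you must set for your own registry, jq is assumed to be installed, and a quota is assumed to be configured for the organization.
#!/usr/bin/env bash
# Sketch: report quota consumption for an organization by using the same
# API endpoints shown above. Placeholders: QUAY_HOST, QUAY_TOKEN, ORG.
set -euo pipefail
QUAY_HOST="${QUAY_HOST:-example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org}"
ORG="${ORG:-testorg}"
AUTH="Authorization: Bearer ${QUAY_TOKEN:?set QUAY_TOKEN to an OAuth access token}"
# The organization endpoint returns quota_report.quota_bytes (used) and
# quota_report.configured_quota (limit), as in the sample output above.
report=$(curl -ks -H "$AUTH" -H 'Content-Type: application/json' "https://${QUAY_HOST}/api/v1/organization/${ORG}")
used=$(echo "$report" | jq -r '.quota_report.quota_bytes')
limit=$(echo "$report" | jq -r '.quota_report.configured_quota')
pct=$(( used * 100 / limit ))
echo "${ORG}: ${used} of ${limit} bytes used (${pct}%)"
# Mirror the Warning (50%) and Reject (80%) limits configured above.
if (( pct >= 80 )); then echo "pushes will be rejected (Reject threshold reached)"
elif (( pct >= 50 )); then echo "approaching quota (Warning threshold reached)"
fi
The PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION entries repeated above are config.yaml settings; they control whether deleted tags are removed permanently rather than retained, so that the storage they occupied can be reclaimed and the namespace can drop back under its quota after cleanup.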
Chapter 7. Resolved issues | Chapter 7. Resolved issues The following notable issues are resolved in Red Hat OpenShift AI. RHOAIENG-19711 - Kueue-controller-manager uses old metrics port after upgrade from 2.16.0 to 2.17.0 Previously, after upgrading, the Kueue Operator continued to use the old port (8080) instead of the new port (8443) for metrics. As a result, the OpenShift console Observe > Targets page showed that the status of the Kueue Operator was Down . This issue is now resolved. RHOAIENG-19261 - The TrustyAI installation might fail due to missing custom resource definitions (CRDs) Previously, when installing or upgrading OpenShift AI, the TrustyAI installation might have failed due to missing InferenceService and ServingRuntime CRDs. As a result, the Trusty AI controller went into the CrashLoopBackOff state. This issue is now resolved. RHOAIENG-18933 - Increased workbench image size can delay workbench startup Previously, as a result of the presence of the kubeflow-training Python SDK in the 2024.2 workbench images, the workbench image size was increased and may have caused a delay when starting the workbench. This issue is now resolved. RHOAIENG-18884 - Enabling NIM account setup is incomplete Previously, when you tried to enable the NVIDIA NIM model serving platform, the odh-model-controller deployment started before the NIM account setup was complete. As a result, the NIM account setup was incomplete and the platform was not enabled. This issue is now resolved. RHOAIENG-18675 - Workbenches component fails after upgrading Previously, when upgrading to OpenShift AI 1, the workbench component did not upgrade correctly. Specifically, BuildConfigs and resources that follow it (for example, RStudio BuildConfigs and ROCm imagestreams ) were not updated, which caused the workbench component reconciliation in the DataScienceCluster to fail. This issue is now resolved. RHOAIENG-15123 (also documented as RHOAIENG-10790 and RHOAIENG-14265 ) - Pipelines schedule might fail after upgrading Previously, when you upgraded to OpenShift AI 1, any data science pipeline scheduled runs that existed before the upgrade might fail to execute, resulting in an error message in the task pod. This issue is now resolved. RHOAIENG-16900 - Space-separated format in serving-runtime arguments can cause deployment failure Previously, when deploying models, using a space-separated format to specify additional serving runtime arguments could cause unrecognized arguments errors. This issue is now resolved. RHOAIENG-16073 - Attribute error when retrieving the job client for a cluster object Previously, when initializing a cluster with the get_cluster method, assigning client = cluster.job_client sometimes resulted in an AttributeError: 'Cluster' object has no attribute '_job_submission_client' error. This issue is now resolved. RHOAIENG-15773 - Cannot add a new model registry user Previously, when managing the permissions of a model registry, you could not add a new user, group, or project as described in Managing model registry permissions . An HTTP request failed error was displayed. This issue is now resolved. RHOAIENG-14197 - Tooltip text for CPU and Memory graphs is clipped and therefore unreadable Previously, when you hovered the cursor over the CPU and Memory graphs in the Top resource-consuming distributed workloads section on the Project metrics tab of the Distributed Workloads Metrics page, the tooltip text was clipped, and therefore unreadable. This issue is now resolved. 
RHOAIENG-11024 - Resources entries get wiped out after removing opendatahub.io/managed annotation Previously, manually removing the opendatahub.io/managed annotation from any component deployment YAML file might have caused resource entry values in the file to be erased. This issue is now resolved. RHOAIENG-8102 - Incorrect requested resources reported when cluster has multiple cluster queues Previously, when a cluster had multiple cluster queues, the resources requested by all projects were incorrectly reported as zero instead of the true value. This issue is now resolved. RHOAIENG-16484 - vLLM server engine for Gaudi accelerators fails after a period of inactivity Previously, when using the vLLM ServingRuntime with Gaudi accelerators support for KServe model-serving runtime on a cluster equipped with Gaudi hardware, the vLLM server could fail with a TimeoutError message after a period of inactivity where it was not processing continuous inference requests. This issue no longer occurs. RHOAIENG-15033 - Model registry instances do not restart or update after upgrading OpenShift AI Previously, when you upgraded OpenShift AI, existing instances of the model registry component were not updated, which caused the instance pods to use older images than the ones referenced by the operator pod. This issue is now resolved. RHOAIENG-15008 - Error when creating a bias metric from the CLI without a request name Previously, the user interface sometimes displayed an error message when you viewed bias metrics if the requestName parameter was not set. If you used the user interface to view bias metrics, but wanted to configure them through the CLI, you had to specify a requestName parameter within your payload. This issue is now resolved. RHOAIENG-14986 - Incorrect package path causes copy_demo_nbs to fail Previously, the copy_demo_nbs() function of the CodeFlare SDK failed because of an incorrect path to the SDK package. Running this function resulted in a FileNotFound error. This issue is now resolved. RHOAIENG-14552 - Workbench or notebook OAuth proxy fails with FIPS on OpenShift Container Platform 4.16 Previously, when using OpenShift 4.16 or newer in a FIPS-enabled cluster, connecting to a running workbench failed because the connection between the internal component oauth-proxy and the OpenShift ingress failed with a TLS handshake error. When opening a workbench, the browser showed an "Application is not available" screen without any additional diagnostics. This issue is now resolved. RHOAIENG-14095 - The dashboard is temporarily unavailable after installing the OpenShift AI Operator Previously, after you installed the OpenShift AI Operator, the OpenShift AI dashboard was unavailable for approximately three minutes. As a result, a Cannot read properties of undefined page sometimes appeared. This issue is now resolved. RHOAIENG-13633 - Cannot set a serving platform for a project without first deploying a model from outside of the model registry Previously, you could not set a serving platform for a project without first deploying a model from outside of the model registry. You could not deploy a model from a model registry to a project unless the project already had single-model or multi-model serving selected. The only way to select single-model or multi-model serving from the OpenShift AI UI was to first deploy a model or model server from outside the registry. This issue is now resolved. 
RHOAIENG-545 - Cannot specify a generic default node runtime image in JupyterLab pipeline editor Previously, when you edited an Elyra pipeline in the JupyterLab IDE pipeline editor, and you clicked the PIPELINE PROPERTIES tab, and scrolled to the Generic Node Defaults section and edited the Runtime Image field, your changes were not saved. This issue is now resolved. RHOAIENG-14571 - Data Science Pipelines API Server unreachable in managed IBM Cloud OpenShift OpenShift AI installation Previously, when configuring a data science pipeline server, communication errors that prevented successful interaction with the pipeline server occurred. This issue is now resolved. RHOAIENG-14195 - Ray cluster creation fails when deprecated head_memory parameter is used Previously, if you included the deprecated head_memory parameter in your Ray cluster configuration, the Ray cluster creation failed. This issue is now resolved. RHOAIENG-11895 - Unable to clone a GitHub repo in JupyterLab when configuring a custom CA bundle using |- Previously, if you configured a custom Certificate Authority (CA) bundle in the DSCInitialization (DSCI) object using |- , cloning a repo from JupyterLab failed. This issue is now resolved. RHOAIENG-1132 (previously documented as RHODS-6383 ) - An ImagePullBackOff error message is not displayed when required during the workbench creation process Previously, pods experienced issues pulling container images from the container registry. When an error occurred, the relevant pod entered into an ImagePullBackOff state. During the workbench creation process, if an ImagePullBackOff error occurred, an appropriate message was not displayed. This issue is now resolved. RHOAIENG-13327 - Importer component (dsl.importer) prevents pipelines from running Pipelines could not run when using the data science pipelines importer component, dsl.importer . This issue is now resolved. RHOAIENG-14652 - kfp-client unable to connect to the pipeline server on OpenShift Container Platform 4.16 and later In OpenShift 4.16 and later FIPS clusters, data science pipelines were accessible through the OpenShift AI Dashboard. However, connections to the pipelines API server from the KFP SDK failed due to a TLS handshake error. This issue is now resolved. RHOAIENG-10129 - Notebook and Ray cluster with matching names causes secret resolution failure Previously, if you created a notebook and a Ray cluster that had matching names in the same namespace, one controller failed to resolve its secret because the secret already had an owner. This issue is now resolved. RHOAIENG-7887 - Kueue fails to monitor RayCluster or PyTorchJob resources Previously, when you created a DataScienceCluster CR with all components enabled, the Kueue component was installed before the Ray component and the Training Operator component. As a result, the Kueue component did not monitor RayCluster or PyTorchJob resources. When a user created RayCluster or PyTorchJob resources, Kueue did not control the admission of those resources. This issue is now resolved. RHOAIENG-583 (previously documented as RHODS-8921 and RHODS-6373 ) - You cannot create a pipeline server or start a workbench when cumulative character limit is exceeded When the cumulative character limit of a data science project name and a pipeline server name exceeded 62 characters, you were unable to successfully create a pipeline server. 
Similarly, when the cumulative character limit of a data science project name and a workbench name exceeded 62 characters, workbenches failed to start. This issue is now resolved. Incorrect logo on dashboard after upgrading Previously, after upgrading from OpenShift AI 2.11 to OpenShift AI 2.12, the dashboard could incorrectly display the Open Data Hub logo instead of the Red Hat OpenShift AI logo. This issue is now resolved. RHOAIENG-11297 - Authentication failure after pipeline run Previously, during the execution of a pipeline run, a connection error could occur due to a certificate authentication failure. This certificate authentication failure could be caused by the use of a multi-line string separator for customCABundle in the default-dsci object, which was not supported by data science pipelines. This issue is now resolved. A configuration sketch for the customCABundle field follows this group of entries. RHOAIENG-11232 - Distributed workloads: Kueue alerts do not provide runbook link After a Kueue alert fires, the cluster administrator can click Observe > Alerting > Alerts and click the name of the alert to open its Alert details page. On the Alert details page, the Runbook section now provides a link to the appropriate runbook to help to diagnose and resolve the issues that triggered the alert. Previously, the runbook link was missing. RHOAIENG-10665 - Unable to query Speculating with a draft model for granite model Previously, you could not use speculative decoding on the granite-7b model and granite-7b-accelerator draft model. When querying these models, the queries failed with an internal error. This issue is now resolved. RHOAIENG-9481 - Pipeline runs menu glitches when clicking action menu Previously, when you clicked the action menu (...) next to a pipeline run on the Experiments > Experiments and runs page, the menu that appeared was not fully visible, and you had to scroll to see all of the menu items. This issue is now resolved. RHOAIENG-8553 - Workbench created with custom image shows !Deleted flag Previously, if you disabled the internal image registry on your OpenShift cluster and then created a workbench with a custom image that was imported by using the image tag, for example: quay.io/my-wb-images/my-image:tag , a !Deleted flag was shown in the Notebook image column on the Workbenches tab of the Data Science Projects page. If you stopped the workbench, you could not restart it. This issue is now resolved. RHOAIENG-6376 - Pipeline run creation fails after setting pip_index_urls in a pipeline component to a URL that contains a port number and path Previously, when you created a pipeline and set the pip_index_urls value for a component to a URL that contains a port number and path, compiling the pipeline code and then creating a pipeline run could result in an error. This issue is now resolved. RHOAIENG-4240 - Jobs fail to submit to Ray cluster in unsecured environment Previously, when running distributed data science workloads from notebooks in an unsecured OpenShift cluster, a ConnectionError: Failed to connect to Ray error message might be shown. This issue is now resolved. RHOAIENG-9670 - vLLM container intermittently crashes while processing requests Previously, if you deployed a model by using the vLLM ServingRuntime for KServe runtime on the single-model serving platform and also configured tensor-parallel-size , depending on the hardware platform you used, the kserve-container container would intermittently crash while processing requests. This issue is now resolved. 
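The RHOAIENG-11297 and RHOAIENG-11895 entries concern a custom CA bundle supplied through the customCABundle field of the default-dsci (DSCInitialization) object with a YAML block scalar (|-). The following is a minimal sketch of that shape, based only on the field names mentioned in these notes; the exact schema can differ between versions, and the certificate body is a placeholder, not a real CA.
# Sketch only: supply a multi-line custom CA bundle on the default DSCInitialization
# object using a YAML block scalar (|-). Field names follow the wording of these
# release notes; verify them against your installed CRD before applying.
oc patch dscinitialization default-dsci --type=merge -p '
spec:
  trustedCABundle:
    managementState: Managed
    customCABundle: |-
      -----BEGIN CERTIFICATE-----
      <your CA certificate content>
      -----END CERTIFICATE-----
'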
RHOAIENG-8043 - vLLM errors during generation with mixtral-8x7b Previously, some models, such as Mixtral-8x7b might have experienced sporadic errors due to a triton issue, such as FileNotFoundError:No such file or directory . This issue is now resolved. RHOAIENG-2974 - Data science cluster cannot be deleted without its associated initialization object Previously, you could not delete a DataScienceCluster (DSC) object if its associated DSCInitialization object (DSCI) did not exist. This issue has now been resolved. RHOAIENG-1205 (previously documented as RHODS-11791) - Usage data collection is enabled after upgrade Previously, the Allow collection of usage data option would activate whenever you upgraded OpenShift AI. Now, you no longer need to manually deselect the Allow collection of usage data option when you upgrade. RHOAIENG-1204 (previously documented as ODH-DASHBOARD-1771 ) - JavaScript error during Pipeline step initializing Previously, the pipeline Run details page stopped working when a run started. This issue has now been resolved. RHOAIENG-582 (previously documented as ODH-DASHBOARD-1335 ) - Rename Edit permission to Contributor On the Permissions tab for a project, the term Edit has been replaced with Contributor to more accurately describe the actions granted by this permission. For a complete list of updates, see the Errata advisory . RHOAIENG-8819 - ibm-granite/granite-3b-code-instruct model fails to deploy on single-model serving platform Previously, if you tried to deploy the ibm-granite/granite-3b-code-instruct model on the single-model serving platform by using the vLLM ServingRuntime for KServe runtime, the model deployment would fail with an error. This issue is now resolved. RHOAIENG-8218 - Cannot log in to a workbench created on an OpenShift 4.15 cluster without OpenShift Container Platform internal image registry When you create a workbench on an OpenShift cluster that does not have the OpenShift Container Platform internal image registry enabled, the workbench starts successfully, but you cannot log in to it. This is a known issue with OpenShift 4.15.x versions earlier than 4.15.15. To resolve this issue, upgrade to OpenShift 4.15.15 or later. RHOAIENG-7346 - Distributed workloads no longer run from existing pipelines after upgrade Previously, if you tried to upgrade to OpenShift AI 2.10, a distributed workload would no longer run from an existing pipeline if the cluster was created only inside the pipeline. This issue is now resolved. RHOAIENG-7209 - Error displays when setting the default pipeline root Previously, if you tried to set the default pipeline root using the data science pipelines SDK or the OpenShift AI user interface, an error would appear. This issue is now resolved. RHOAIENG-6711 - ODH-model-controller overwrites the spec.memberSelectors field in ServiceMeshMemberRoll objects Previously, if you tried to add a project or namespace to a ServiceMeshMemberRoll resource using the spec.memberSelectors field of the ServiceMeshMemberRoll resource, the ODH-model-controller would overwrite the field. This issue is now resolved. RHOAIENG-6649 - An error is displayed when viewing a model on a model server that has no external route defined Previously, if you tried to use the dashboard to deploy a model on a model server that did not have external routes enabled, a t.components is undefined error message would appear while the model creation was in progress. This issue is now resolved. 
RHOAIENG-3981 - In unsecured environment, the functionality to wait for Ray cluster to be ready gets stuck Previously, when running distributed data science workloads from notebooks in an unsecured OpenShift cluster, the functionality to wait for the Ray cluster to be ready before proceeding ( cluster.wait_ready() ) got stuck even when the Ray cluster was ready. This issue is now resolved. RHOAIENG-2312 - Importing numpy fails in code-server workbench Previously, if you tried to import numpy, your code-server workbench would fail. This issue is now resolved. RHOAIENG-1197 - Cannot create pipeline due to the End date picker in the pipeline run creation page defaulting to NaN values when using Firefox on Linux Previously, if you tried to create a pipeline with a scheduled recurring run using Firefox on Linux, enabling the End Date parameter would result in Not a Number (NaN) values for both the date and time. This issue is now resolved. RHOAIENG-1196 (previously documented as ODH-DASHBOARD-2140 ) - Package versions displayed in dashboard do not match installed versions Previously, the dashboard would display inaccurate version numbers for packages such as JupyterLab and Notebook. This issue is now resolved. RHOAIENG-880 - Default pipelines service account is unable to create Ray clusters Previously, you could not create Ray clusters using the default pipelines Service Account. This issue is now resolved. RHOAIENG-52 - Token authentication fails in clusters with self-signed certificates Previously, if you used self-signed certificates, and you used the Python codeflare-sdk in a notebook or in a Python script as part of a pipeline, token authentication would fail. This issue is now resolved. RHOAIENG-7312 - Model serving fails during query with token authentication in KServe Previously, if you enabled both the ModelMesh and KServe components in your DataScienceCluster object and added Authorino as an authorization provider, a race condition could occur that resulted in the odh-model-controller pods being rolled out in a state that is appropriate for ModelMesh, but not for KServe and Authorino. In this situation, if you made an inference request to a running model that was deployed using KServe, you saw a 404 - Not Found error. In addition, the logs for the odh-model-controller deployment object showed a Reconciler error message. This issue is now resolved. RHOAIENG-7181 (previously documented as RHOAIENG-6343 ) - Some components are set to Removed after installing OpenShift AI Previously, after you installed OpenShift AI, the managementState field for the codeflare , kueue , and ray components was incorrectly set to Removed instead of Managed in the DataScienceCluster custom resource. This issue is now resolved. RHOAIENG-7079 (previously documented as RHOAIENG-6317 ) - Pipeline task status and logs sometimes not shown in OpenShift AI dashboard Previously, when running pipelines by using Elyra, the OpenShift AI dashboard might not show the pipeline task status and logs, even when the related pods had not been pruned and the information was still available in the OpenShift Console. This issue is now resolved. RHOAIENG-7070 (previously documented as RHOAIENG-6709 ) - Jupyter notebook creation might fail when different environment variables specified Previously, if you started and then stopped a Jupyter notebook, and edited its environment variables in an OpenShift AI workbench, the notebook failed to restart. This issue is now resolved. 
RHOAIENG-6853 - Cannot set pod toleration in Elyra pipeline pods Previously, if you set a pod toleration for an Elyra pipeline pod, the toleration did not take effect. This issue is now resolved. RHOAIENG-5314 - Data science pipeline server fails to deploy in fresh cluster due to network policies Previously, if you created a data science pipeline server on a fresh cluster, the user interface remained in a loading state and the pipeline server did not start. This issue is now resolved. RHOAIENG-4252 - Data science pipeline server deletion process fails to remove ScheduledWorkFlow resource Previously, the pipeline server deletion process did not remove the ScheduledWorkFlow resource. As a result, new DataSciencePipelinesApplications (DSPAs) did not recognize the redundant ScheduledWorkFlow resource. This issue is now resolved. RHOAIENG-3411 (previously documented as RHOAIENG-3378 ) - Internal Image Registry is an undeclared hard dependency for Jupyter notebooks spawn process Previously, before you could start OpenShift AI notebooks and workbenches, you must have already enabled the internal, integrated container image registry in OpenShift. Attempts to start notebooks or workbenches without first enabling the image registry failed with an "InvalidImageName" error. You can now create and use workbenches in OpenShift AI without enabling the internal OpenShift image registry. If you update a cluster to enable or disable the internal image registry, you must recreate existing workbenches for the registry changes to take effect. RHOAIENG-2541 - KServe controller pod experiences OOM because of too many secrets in the cluster Previously, if your OpenShift cluster had a large number of secrets, the KServe controller pod could continually crash due to an out-of-memory (OOM) error. This issue is now resolved. RHOAIENG-1452 - The Red Hat OpenShift AI Add-on gets stuck Previously, the Red Hat OpenShift AI Add-on uninstall did not delete OpenShift AI components when the install was triggered via OCM APIs. This issue is now resolved. RHOAIENG-307 - Removing the DataScienceCluster deletes all OpenShift Serverless CRs Previously, if you deleted the DataScienceCluster custom resource (CR), all OpenShift Serverless CRs (including knative-serving, deployments, gateways, and pods) were also deleted. This issue is now resolved. RHOAIENG-6709 - Jupyter notebook creation might fail when different environment variables specified Previously, if you started and then stopped a Jupyter notebook, and edited its environment variables in an OpenShift AI workbench, the notebook failed to restart. This issue is now resolved. RHOAIENG-6701 - Users without cluster administrator privileges cannot access the job submission endpoint of the Ray dashboard Previously, users of the distributed workloads feature who did not have cluster administrator privileges for OpenShift might not have been able to access or use the job submission endpoint of the Ray dashboard. This issue is now resolved. RHOAIENG-6578 - Request without token to a protected inference point not working by default Previously, if you added Authorino as an authorization provider for the single-model serving platform and enabled token authorization for models that you deployed, it was still possible to query the models without specifying the tokens. This issue is now resolved. 
RHOAIENG-6343 - Some components are set to Removed after installing OpenShift AI Previously, after you installed OpenShift AI, the managementState field for the codeflare , kueue , and ray components was incorrectly set to Removed instead of Managed in the DataScienceCluster custom resource. This issue is now resolved. RHOAIENG-5067 - Model server metrics page does not load for a model server based on the ModelMesh component Previously, data science project names that contained capital letters or spaces could cause issues on the model server metrics page for model servers based on the ModelMesh component. The metrics page might not have received data correctly, resulting in a 400 Bad Request error and preventing the page from loading. This issue is now resolved. RHOAIENG-4966 - Self-signed certificates in a custom CA bundle might be missing from the odh-trusted-ca-bundle configuration map Previously, if you added a custom certificate authority (CA) bundle to use self-signed certificates, sometimes the custom certificates were missing from the odh-trusted-ca-bundle ConfigMap, or the non-reserved namespaces did not contain the odh-trusted-ca-bundle ConfigMap when the ConfigMap was set to managed . This issue is now resolved. RHOAIENG-4938 (previously documented as RHOAIENG-4327 ) - Workbenches do not use the self-signed certificates from centrally configured bundle automatically There are two bundle options to include self-signed certificates in OpenShift AI, ca-bundle.crt and odh-ca-bundle.crt . Previously, workbenches did not automatically use the self-signed certificates from the centrally configured bundle and you had to define environment variables that pointed to your certificate path. This issue is now resolved. RHOAIENG-4572 - Unable to run data science pipelines after install and upgrade in certain circumstances Previously, you were unable to run data science pipelines after installing or upgrading OpenShift AI in the following circumstances: You installed OpenShift AI and you had a valid CA certificate. Within the default-dsci object, you changed the managementState field for the trustedCABundle field to Removed post-installation. You upgraded OpenShift AI from version 2.6 to version 2.8 and you had a valid CA certificate. You upgraded OpenShift AI from version 2.7 to version 2.8 and you had a valid CA certificate. This issue is now resolved. RHOAIENG-4524 - BuildConfig definitions for RStudio images contain occurrences of incorrect branch Previously, the BuildConfig definitions for the RStudio and CUDA - RStudio workbench images pointed to the wrong branch in OpenShift AI. This issue is now resolved. RHOAIENG-3963 - Unnecessary managed resource warning Previously, when you edited and saved the OdhDashboardConfig custom resource for the redhat-ods-applications project, the system incorrectly displayed a Managed resource warning message. This issue is now resolved. RHOAIENG-2542 - Inference service pod does not always get an Istio sidecar Previously, when you deployed a model using the single-model serving platform (which uses KServe), the istio-proxy container could be missing in the resulting pod, even if the inference service had the sidecar.istio.io/inject=true annotation. This issue is now resolved. RHOAIENG-1666 - The Import Pipeline button is prematurely accessible Previously, when you imported a pipeline to a workbench that belonged to a data science project, the Import Pipeline button was accessible before the pipeline server was fully available. This issue is now resolved. 
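Several entries above (for example, RHOAIENG-4966 and RHOAIENG-4572) refer to the trustedCABundle settings on the default-dsci object and to the generated odh-trusted-ca-bundle ConfigMap. The following checks are a sketch for verifying that state after an install or upgrade; the field path and the ConfigMap key name are taken from the wording of these notes and should be confirmed against your cluster, and <your-project> is a placeholder for a non-reserved namespace.
# Sketch: confirm the trustedCABundle management state on the default DSCInitialization.
oc get dscinitialization default-dsci -o jsonpath='{.spec.trustedCABundle.managementState}{"\n"}'
# Sketch: confirm that a non-reserved project received the generated ConfigMap
# and that it carries the odh-ca-bundle.crt key mentioned above.
oc get configmap odh-trusted-ca-bundle -n <your-project> -o jsonpath='{.data.odh-ca-bundle\.crt}' | head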
RHOAIENG-673 (previously documented as RHODS-12946) - Cannot install from PyPI mirror in disconnected environment or when using private certificates In disconnected environments, Red Hat OpenShift AI cannot connect to the public-facing PyPI repositories, so you must specify a repository inside your network. Previously, if you were using private TLS certificates and a data science pipeline was configured to install Python packages, the pipeline run would fail. This issue is now resolved. RHOAIENG-3355 - OVMS on KServe does not use accelerators correctly Previously, when you deployed a model using the single-model serving platform and selected the OpenVINO Model Server serving runtime, if you requested an accelerator to be attached to your model server, the accelerator hardware was detected but was not used by the model when responding to queries. This issue is now resolved. RHOAIENG-2869 - Cannot edit existing model framework and model path in a multi-model project Previously, when you tried to edit a model in a multi-model project using the Deploy model dialog, the Model framework and Path values did not update. This issue is now resolved. RHOAIENG-2724 - Model deployment fails because fields automatically reset in dialog Previously, when you deployed a model or edited a deployed model, the Model servers and Model framework fields in the "Deploy model" dialog might have reset to the default state. The Deploy button might have remained enabled even though these mandatory fields no longer contained valid values. This issue is now resolved. RHOAIENG-2099 - Data science pipeline server fails to deploy in fresh cluster Previously, when you created a data science pipeline server on a fresh cluster, the user interface remained in a loading state and the pipeline server did not start. This issue is now resolved. RHOAIENG-1199 (previously documented as ODH-DASHBOARD-1928 ) - Custom serving runtime creation error message is unhelpful Previously, when you tried to create or edit a custom model-serving runtime and an error occurred, the error message did not indicate the cause of the error. The error messages have been improved. RHOAIENG-556 - ServingRuntime for KServe model is created regardless of error Previously, when you tried to deploy a KServe model and an error occurred, the InferenceService custom resource (CR) was still created and the model was shown in the Data Science Projects page, but the status would always remain unknown. The KServe deploy process has been updated so that the ServingRuntime is not created if an error occurs. RHOAIENG-548 (previously documented as ODH-DASHBOARD-1776 ) - Error messages when user does not have project administrator permission Previously, if you did not have administrator permission for a project, you could not access some features, and the error messages did not explain why. For example, when you created a model server in an environment where you only had access to a single namespace, an Error creating model server error message appeared. However, the model server is still successfully created. This issue is now resolved. RHOAIENG-66 - Ray dashboard route deployed by CodeFlare SDK exposes self-signed certs instead of cluster cert Previously, when you deployed a Ray cluster by using the CodeFlare SDK with the openshift_oauth=True option, the resulting route for the Ray cluster was secured by using the passthrough method and as a result, the self-signed certificate used by the OAuth proxy was exposed. This issue is now resolved. 
RHOAIENG-12 - Cannot access Ray dashboard from some browsers In some browsers, users of the distributed workloads feature might not have been able to access the Ray dashboard because the browser automatically changed the prefix of the dashboard URL from http to https . This issue is now resolved. RHODS-6216 - The ModelMesh oauth-proxy container is intermittently unstable Previously, ModelMesh pods did not deploy correctly due to a failure of the ModelMesh oauth-proxy container. This issue occurred intermittently and only if authentication was enabled in the ModelMesh runtime environment. This issue is now resolved. RHOAIENG-535 - Metrics graph showing HTTP requests for deployed models is incorrect if there are no HTTP requests Previously, if a deployed model did not receive at least one HTTP request for each of the two data types (success and failed), the graphs that show HTTP request performance metrics (for all models on the model server or for the specific model) rendered incorrectly, with a straight line that indicated a steadily increasing number of failed requests. This issue is now resolved. RHOAIENG-1467 - Serverless net-istio controller pod might hit OOM Previously, the Knative net-istio-controller pod (which is a dependency for KServe) might continuously crash due to an out-of-memory (OOM) error. This issue is now resolved. RHOAIENG-1899 (previously documented as RHODS-6539 ) - The Anaconda Professional Edition cannot be validated and enabled Previously, you could not enable the Anaconda Professional Edition because the dashboard's key validation for it was inoperable. This issue is now resolved. RHOAIENG-2269 - (Single-model) Dashboard fails to display the correct number of model replicas Previously, on a single-model serving platform, the Models and model servers section of a data science project did not show the correct number of model replicas. This issue is now resolved. RHOAIENG-2270 - (Single-model) Users cannot update model deployment settings Previously, you couldn't edit the deployment settings (for example, the number of replicas) of a model you deployed with a single-model serving platform. This issue is now resolved. RHODS-8865 - A pipeline server fails to start unless you specify an Amazon Web Services (AWS) Simple Storage Service (S3) bucket resource Previously, when you created a data connection for a data science project, the AWS_S3_BUCKET field was not designated as a mandatory field. However, if you attempted to configure a pipeline server with a data connection where the AWS_S3_BUCKET field was not populated, the pipeline server failed to start successfully. This issue is now resolved. The Configure pipeline server dialog has been updated to include the Bucket field as a mandatory field. RHODS-12899 - OpenVINO runtime missing annotation for NVIDIA GPUs Previously, if a user selected the OpenVINO model server (supports GPUs) runtime and selected an NVIDIA GPU accelerator in the model server user interface, the system could display an unnecessary warning that the selected accelerator was not compatible with the selected runtime. The warning is no longer displayed. RHOAIENG-84 - Cannot use self-signed certificates with KServe Previously, the single-model serving platform did not support self-signed certificates. This issue is now resolved. To use self-signed certificates with KServe, follow the steps described in Working with certificates . 
RHOAIENG-164 - Number of model server replicas for Kserve is not applied correctly from the dashboard Previously, when you set a number of model server replicas different from the default (1), the model (server) was still deployed with 1 replica. This issue is now resolved. RHOAIENG-288 - Recommended image version label for workbench is shown for two versions Most of the workbench images that are available in OpenShift AI are provided in multiple versions. The only recommended version is the latest version. In Red Hat OpenShift AI 2.4 and 2.5, the Recommended tag was erroneously shown for multiple versions of an image. This issue is now resolved. RHOAIENG-293 - Deprecated ModelMesh monitoring stack not deleted after upgrading from 2.4 to 2.5 In Red Hat OpenShift AI 2.5, the former ModelMesh monitoring stack was no longer deployed because it was replaced by user workload monitoring. However, the former monitoring stack was not deleted during an upgrade to OpenShift AI 2.5. Some components remained and used cluster resources. This issue is now resolved. RHOAIENG-343 - Manual configuration of OpenShift Service Mesh and OpenShift Serverless does not work for KServe If you installed OpenShift Serverless and OpenShift Service Mesh and then installed Red Hat OpenShift AI with KServe enabled, KServe was not deployed. This issue is now resolved. RHOAIENG-517 - User with edit permissions cannot see created models A user with edit permissions could not see any created models, unless they were the project owner or had admin permissions for the project. This issue is now resolved. RHOAIENG-804 - Cannot deploy Large Language Models with KServe on FIPS-enabled clusters Previously, Red Hat OpenShift AI was not yet fully designed for FIPS. You could not deploy Large Language Models (LLMs) with KServe on FIPS-enabled clusters. This issue is now resolved. RHOAIENG-908 - Cannot use ModelMesh if KServe was previously enabled and then removed Previously, when both ModelMesh and KServe were enabled in the DataScienceCluster object, and you subsequently removed KServe, you could no longer deploy new models with ModelMesh. You could continue to use models that were previously deployed with ModelMesh. This issue is now resolved. RHOAIENG-2184 - Cannot create Ray clusters or distributed workloads Previously, users could not create Ray clusters or distributed workloads in namespaces where they have admin or edit permissions. This issue is now resolved. ODH-DASHBOARD-1991 - ovms-gpu-ootb is missing recommended accelerator annotation Previously, when you added a model server to your project, the Serving runtime list did not show the Recommended serving runtime label for the NVIDIA GPU. This issue is now resolved. RHOAIENG-807 - Accelerator profile toleration removed when restarting a workbench Previously, if you created a workbench that used an accelerator profile that in turn included a toleration, restarting the workbench removed the toleration information, which meant that the restart could not complete. A freshly created GPU-enabled workbench might start the first time, but never successfully restarted afterwards because the generated pod remained forever pending. This issue is now resolved. 
DATA-SCIENCE-PIPELINES-OPERATOR-294 - Scheduled pipeline run that uses data-passing might fail to pass data between steps, or fail the step entirely A scheduled pipeline run that uses an S3 object store to store the pipeline artifacts might fail with an error such as the following: This issue occurred because the S3 object store endpoint was not successfully passed to the pods for the scheduled pipeline run. This issue is now resolved. RHODS-4769 - GPUs on nodes with unsupported taints cannot be allocated to notebook servers GPUs on nodes marked with any taint other than the supported nvidia.com/gpu taint could not be selected when creating a notebook server. This issue is now resolved. RHODS-6346 - Unclear error message displays when using invalid characters to create a data science project When creating a data science project's data connection, workbench, or storage connection using invalid special characters, the following error message was displayed: The error message failed to clearly indicate the problem. The error message now indicates that invalid characters were entered. RHODS-6950 - Unable to scale down workbench GPUs when all GPUs in the cluster are being used In earlier releases, it was not possible to scale down workbench GPUs if all GPUs in the cluster were being used. This issue applied to GPUs being used by one workbench, and GPUs being used by multiple workbenches. You can now scale down the GPUs by selecting None from the Accelerators list. RHODS-8939 - Default shared memory for a Jupyter notebook created in a release causes a runtime error Starting with release 1.31, this issue is resolved, and the shared memory for any new notebook is set to the size of the node. For a Jupyter notebook created in a release earlier than 1.31, the default shared memory for a Jupyter notebook is set to 64 MB and you cannot change this default value in the notebook configuration. To fix this issue, you must recreate the notebook or follow the process described in the Knowledgebase article How to change the shared memory for a Jupyter notebook in Red Hat OpenShift AI . RHODS-9030 - Uninstall process for OpenShift AI might become stuck when removing kfdefs resources The steps for uninstalling the OpenShift AI managed service are described in Uninstalling OpenShift AI . However, even when you followed this guide, you might have seen that the uninstall process did not finish successfully. Instead, the process stayed on the step of deleting kfdefs resources that were used by the Kubeflow Operator. As shown in the following example, kfdefs resources might exist in the redhat-ods-applications , redhat-ods-monitoring , and rhods-notebooks namespaces: Failed removal of the kfdefs resources might have also prevented later installation of a newer version of OpenShift AI. This issue no longer occurs. RHODS-9764 - Data connection details get reset when editing a workbench When you edited a workbench that had an existing data connection and then selected the Create new data connection option, the edit page might revert to the Use existing data connection option before you had finished specifying the new connection details. This issue is now resolved. RHODS-9583 - Data Science dashboard did not detect an existing OpenShift Pipelines installation When the OpenShift Pipelines Operator was installed as a global operator on your cluster, the OpenShift AI dashboard did not detect it. The OpenShift Pipelines Operator is now detected successfully. 
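The RHODS-8939 entry above notes that the shared memory for newer notebooks is sized to the node, while notebooks created before release 1.31 keep a 64 MB default. A quick way to confirm what a running notebook actually has is to check the size of the shared memory mount from a terminal inside the notebook; this is a general Linux check rather than an OpenShift AI-specific command.
# From a terminal inside the running notebook: report the size of the shared
# memory mount. A 64M size indicates the old default described in RHODS-8939.
df -h /dev/shm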
ODH-DASHBOARD-1639 - Wrong TLS value in dashboard route Previously, when a route was created for the OpenShift AI dashboard on OpenShift, the tls.termination field had an invalid default value of Reencrypt . This issue is now resolved. The new value is reencrypt . ODH-DASHBOARD-1638 - Name placeholder in Triggered Runs tab shows Scheduled run name Previously, when you clicked Pipelines > Runs and then selected the Triggered tab to configure a triggered run, the example value shown in the Name field was Scheduled run name . This issue is now resolved. ODH-DASHBOARD-1547 - "We can't find that page" message displayed in dashboard when pipeline operator installed in background Previously, when you used the Data Science Pipelines page of the dashboard to install the OpenShift Pipelines Operator, when the Operator installation was complete, the page refreshed to show a We can't find that page message. This issue is now resolved. When the Operator installation is complete, the dashboard redirects you to the Pipelines page, where you can create a pipeline server. ODH-DASHBOARD-1545 - Dashboard keeps scrolling to bottom of project when Models tab is expanded Previously, on the Data Science Projects page of the dashboard, if you clicked the Deployed models tab to expand it and then tried to perform other actions on the page, the page automatically scrolled back to the Deployed models section. This affected your ability to perform other actions. This issue is now resolved. NOTEBOOKS-156 - Elyra included an example runtime called Test Previously, Elyra included an example runtime configuration called Test . If you selected this configuration when running a data science pipeline, you could see errors. The Test configuration has now been removed. RHODS-9622 - Duplicating a scheduled pipeline run does not copy the existing period and pipeline input parameter values Previously, when you duplicated a scheduled pipeline run that had a periodic trigger, the duplication process did not copy the configured execution frequency for the recurring run or the specified pipeline input parameters. This issue is now resolved. RHODS-8932 - Incorrect cron format was displayed by default when scheduling a recurring pipeline run When you scheduled a recurring pipeline run by configuring a cron job, the OpenShift AI interface displayed an incorrect format by default. It now displays the correct format. RHODS-9374 - Pipelines with non-unique names did not appear in the data science project user interface If you launched a notebook from a Jupyter application that supported Elyra, or if you used a workbench, when you submitted a pipeline to be run, pipelines with non-unique names did not appear in the Pipelines section of the relevant data science project page or the Pipelines heading of the data science pipelines page. This issue has now been resolved. RHODS-9329 - Deploying a custom model-serving runtime could result in an error message Previously, if you used the OpenShift AI dashboard to deploy a custom model-serving runtime, the deployment process could fail with an Error retrieving Serving Runtime message. This issue is now resolved. RHODS-9064 - After upgrade, the Data Science Pipelines tab was not enabled on the OpenShift AI dashboard When you upgraded from OpenShift AI 1.26 to OpenShift AI 1.28, the Data Science Pipelines tab was not enabled in the OpenShift AI dashboard. This issue is resolved in OpenShift AI 1.29. 
RHODS-9443 - Exporting an Elyra pipeline exposed S3 storage credentials in plain text In OpenShift AI 1.28.0, when you exported an Elyra pipeline from JupyterLab in Python DSL format or YAML format, the generated output contained S3 storage credentials in plain text. This issue has been resolved in OpenShift AI 1.28.1. However, after you upgrade to OpenShift AI 1.28.1, if your deployment contains a data science project with a pipeline server and a data connection, you must perform the following additional actions for the fix to take effect: Refresh your browser page. Stop any running workbenches in your deployment and restart them. Furthermore, to confirm that your Elyra runtime configuration contains the fix, perform the following actions: In the left sidebar of JupyterLab, click Runtimes ( ). Hover the cursor over the runtime configuration that you want to view and click the Edit button ( ). The Data Science Pipelines runtime configuration page opens. Confirm that KUBERNETES_SECRET is defined as the value in the Cloud Object Storage Authentication Type field. Close the runtime configuration without changing it. RHODS-8460 - When editing the details of a shared project, the user interface remained in a loading state without reporting an error When a user with permission to edit a project attempted to edit its details, the user interface remained in a loading state and did not display an appropriate error message. Users with permission to edit projects cannot edit any fields in the project, such as its description. Those users can edit only components belonging to a project, such as its workbenches, data connections, and storage. The user interface now displays an appropriate error message and does not try to update the project description. RHODS-8482 - Data science pipeline graphs did not display node edges for running pipelines If you ran pipelines that did not contain Tekton-formatted Parameters or when expressions in their YAML code, the OpenShift AI user interface did not display connecting edges to and from graph nodes. For example, if you used a pipeline containing the runAfter property or Workspaces , the user interface displayed the graph for the executed pipeline without edge connections. The OpenShift AI user interface now displays connecting edges to and from graph nodes. RHODS-8923 - Newly created data connections were not detected when you attempted to create a pipeline server If you created a data connection from within a Data Science project, and then attempted to create a pipeline server, the Configure a pipeline server dialog did not detect the data connection that you created. This issue is now resolved. RHODS-8461 - When sharing a project with another user, the OpenShift AI user interface text was misleading When you attempted to share a Data Science project with another user, the user interface text misleadingly implied that users could edit all of its details, such as its description. However, users can edit only components belonging to a project, such as its workbenches, data connections, and storage. This issue is now resolved and the user interface text no longer misleadingly implies that users can edit all of its details. RHODS-8462 - Users with "Edit" permission could not create a Model Server Users with "Edit" permissions can now create a Model Server without token authorization. Users must have "Admin" permissions to create a Model Server with token authorization. 
RHODS-8796 - OpenVINO Model Server runtime did not have the required flag to force GPU usage OpenShift AI includes the OpenVINO Model Server (OVMS) model-serving runtime by default. When you configured a new model server and chose this runtime, the Configure model server dialog enabled you to specify a number of GPUs to use with the model server. However, when you finished configuring the model server and deployed models from it, the model server did not actually use any GPUs. This issue is now resolved and the model server uses the GPUs. RHODS-8861 - Changing the host project when creating a pipeline run resulted in an inaccurate list of available pipelines If you changed the host project while creating a pipeline run, the interface failed to make the pipelines of the new host project available. Instead, the interface showed pipelines that belong to the project you initially selected on the Data Science Pipelines > Runs page. This issue is now resolved. You no longer select a pipeline from the Create run page. The pipeline selection is automatically updated when you click the Create run button, based on the current project and its pipeline. RHODS-8249 - Environment variables uploaded as ConfigMap were stored in Secret instead Previously, in the OpenShift AI interface, when you added environment variables to a workbench by uploading a ConfigMap configuration, the variables were stored in a Secret object instead. This issue is now resolved. RHODS-7975 - Workbenches could have multiple data connections Previously, if you changed the data connection for a workbench, the existing data connection was not released. As a result, a workbench could stay connected to multiple data sources. This issue is now resolved. RHODS-7948 - Uploading a secret file containing environment variables resulted in double-encoded values Previously, when creating a workbench in a data science project, if you uploaded a YAML-based secret file containing environment variables, the environment variable values were not decoded. Then, in the resulting OpenShift secret created by this process, the encoded values were encoded again. This issue is now resolved. RHODS-6429 - An error was displayed when creating a workbench with the Intel OpenVINO or Anaconda Professional Edition images Previously, when you created a workbench with the Intel OpenVINO or Anaconda Professional Edition images, an error appeared during the creation process. However, the workbench was still successfully created. This issue is now resolved. RHODS-6372 - Idle notebook culler did not take active terminals into account Previously, if a notebook image had a running terminal, but no active, running kernels, the idle notebook culler detected the notebook as inactive and stopped the terminal. This issue is now resolved. RHODS-5700 - Data connections could not be created or connected to when creating a workbench When creating a workbench, users were unable to create a new data connection, or connect to existing data connections. RHODS-6281 - OpenShift AI administrators could not access Settings page if an admin group was deleted from cluster Previously, if a Red Hat OpenShift AI administrator group was deleted from the cluster, OpenShift AI administrator users could no longer access the Settings page on the OpenShift AI dashboard. In particular, the following behavior was seen: When an OpenShift AI administrator user tried to access the Settings User management page, a "Page Not Found" error appeared. 
Cluster administrators did not lose access to the Settings page on the OpenShift AI dashboard. When a cluster administrator accessed the Settings User management page, a warning message appeared, indicating that the deleted OpenShift AI administrator group no longer existed in OpenShift. The deleted administrator group was then removed from OdhDashboardConfig , and administrator access was restored. This issue is now resolved. RHODS-1968 - Deleted users stayed logged in until dashboard was refreshed Previously, when a user's permissions for the Red Hat OpenShift AI dashboard were revoked, the user would notice the change only after a refresh of the dashboard page. This issue is now resolved. When a user's permissions are revoked, the OpenShift AI dashboard locks the user out within 30 seconds, without the need for a refresh. RHODS-6384 - A workbench data connection was incorrectly updated when creating a duplicated data connection When creating a data connection that contained the same name as an existing data connection, the data connection creation failed, but the associated workbench still restarted and connected to the wrong data connection. This issue has been resolved. Workbenches now connect to the correct data connection. RHODS-6370 - Workbenches failed to receive the latest toleration Previously, to acquire the latest toleration, users had to attempt to edit the relevant workbench, make no changes, and save the workbench again. Users can now apply the latest toleration change by stopping and then restarting their data science project's workbench. RHODS-6779 - Models failed to be served after upgrading from OpenShift AI 1.20 to OpenShift AI 1.21 When upgrading from OpenShift AI 1.20 to OpenShift AI 1.21, the modelmesh-serving pod attempted to pull a non-existent image, causing an image pull error. As a result, models were unable to be served using the model serving feature in OpenShift AI. The odh-openvino-servingruntime-container-v1.21.0-15 image now deploys successfully. RHODS-5945 - Anaconda Professional Edition could not be enabled in OpenShift AI Anaconda Professional Edition could not be enabled for use in OpenShift AI. Instead, an InvalidImageName error was displayed in the associated pod's Events page. Anaconda Professional Edition can now be successfully enabled. RHODS-5822 - Admin users were not warned when usage exceeded 90% and 100% for PVCs created by data science projects. Warnings indicating when a PVC exceeded 90% and 100% of its capacity failed to display to admin users for PVCs created by data science projects. Admin users can now view warnings about when a PVC exceeds 90% and 100% of its capacity from the dashboard. RHODS-5889 - Error message was not displayed if a data science notebook was stuck in "pending" status If a notebook pod could not be created, the OpenShift AI interface did not show an error message. An error message is now displayed if a data science notebook cannot be spawned. RHODS-5886 - Returning to the Hub Control Panel dashboard from the data science workbench failed If you attempted to return to the dashboard from your workbench Jupyter notebook by clicking on File Log Out , you were redirected to the dashboard and remained on a "Logging out" page. Likewise, if you attempted to return to the dashboard by clicking on File Hub Control Panel , you were incorrectly redirected to the Start a notebook server page. Returning to the Hub Control Panel dashboard from the data science workbench now works as expected. 
RHODS-6101 - Administrators were unable to stop all notebook servers OpenShift AI administrators could not stop all notebook servers simultaneously. Administrators can now stop all notebook servers using the Stop all servers button and stop a single notebook by selecting Stop server from the action menu beside the relevant user. RHODS-5891 - Workbench event log was not clearly visible When creating a workbench, users could not easily locate the event log window in the OpenShift AI interface. The Starting label under the Status column is now underlined when you hover over it, indicating you can click on it to view the notebook status and the event log. RHODS-6296 - ISV icons did not render when using a browser other than Google Chrome When using a browser other than Google Chrome, not all ISV icons under Explore and Resources pages were rendered. ISV icons now display properly on all supported browsers. RHODS-3182 - Incorrect number of available GPUs was displayed in Jupyter When a user attempts to create a notebook instance in Jupyter, the maximum number of GPUs available for scheduling was not updated as GPUs are assigned. Jupyter now displays the correct number of GPUs available. RHODS-5890 - When multiple persistent volumes were mounted to the same directory, workbenches failed to start When mounting more than one persistent volume (PV) to the same mount folder in the same workbench, creation of the notebook pod failed and no errors were displayed to indicate there was an issue. RHODS-5768 - Data science projects were not visible to users in Red Hat OpenShift AI Removing the [DSP] suffix at the end of a project's Display Name property caused the associated data science project to no longer be visible. It is no longer possible for users to remove this suffix. RHODS-5701 - Data connection configuration details were overwritten When a data connection was added to a workbench, the configuration details for that data connection were saved in environment variables. When a second data connection was added, the configuration details are saved using the same environment variables, which meant the configuration for the first data connection was overwritten. At the moment, users can add a maximum of one data connection to each workbench. RHODS-5252 - The notebook Administration page did not provide administrator access to a user's notebook server The notebook Administration page, accessed from the OpenShift AI dashboard, did not provide the means for an administrator to access a user's notebook server. Administrators were restricted to only starting or stopping a user's notebook server. RHODS-2438 - PyTorch and TensorFlow images were unavailable when upgrading When upgrading from OpenShift AI 1.3 to a later version, PyTorch and TensorFlow images were unavailable to users for approximately 30 minutes. As a result, users were unable to start PyTorch and TensorFlow notebooks in Jupyter during the upgrade process. This issue has now been resolved. RHODS-5354 - Environment variable names were not validated when starting a notebook server Environment variable names were not validated on the Start a notebook server page. If an invalid environment variable was added, users were unable to successfully start a notebook. The environmental variable name is now checked in real-time. If an invalid environment variable name is entered, an error message displays indicating valid environment variable names must consist of alphabetic characters, digits, _ , - , or . , and must not start with a digit. 
RHODS-4617 - The Number of GPUs drop-down was only visible if there were GPUs available Previously, the Number of GPUs drop-down was only visible on the Start a notebook server page if GPU nodes were available. The Number of GPUs drop-down now also correctly displays if an autoscaling machine pool is defined in the cluster, even if no GPU nodes are currently available, possibly resulting in the provisioning of a new GPU node on the cluster. RHODS-5420 - Cluster admin did not get administrator access if it was the only user present in the cluster Previously, when the cluster admin was the only user present in the cluster, it did not get Red Hat OpenShift administrator access automatically. Administrator access is now correctly applied to the cluster admin user. RHODS-4321 - Incorrect package version displayed during notebook selection The Start a notebook server page displayed an incorrect version number (11.4 instead of 11.7) for the CUDA notebook image. The version of CUDA installed is no longer specified on this page. RHODS-5001 - Admin users could add invalid tolerations to notebook pods An admin user could add invalid tolerations on the Cluster settings page without triggering an error. If a invalid toleration was added, users were unable to successfully start notebooks. The toleration key is now checked in real-time. If an invalid toleration name is entered, an error message displays indicating valid toleration names consist of alphanumeric characters, - , _ , or . , and must start and end with an alphanumeric character. RHODS-5100 - Group role bindings were not applied to cluster administrators Previously, if you had assigned cluster admin privileges to a group rather than a specific user, the dashboard failed to recognize administrative privileges for users in the administrative group. Group role bindings are now correctly applied to cluster administrators as expected. RHODS-4947 - Old Minimal Python notebook image persisted after upgrade After upgrading from OpenShift AI 1.14 to 1.15, the older version of the Minimal Python notebook persisted, including all associated package versions. The older version of the Minimal Python notebook no longer persists after upgrade. RHODS-4935 - Excessive "missing x-forwarded-access-token header" error messages displayed in dashboard log The rhods-dashboard pod's log contained an excessive number of "missing x-forwarded-access-token header" error messages due to a readiness probe hitting the /status endpoint. This issue has now been resolved. RHODS-2653 - Error occurred while fetching the generated images in the sample Pachyderm notebook An error occurred when a user attempted to fetch an image using the sample Pachyderm notebook in Jupyter. The error stated that the image could not be found. Pachyderm has corrected this issue. RHODS-4584 - Jupyter failed to start a notebook server using the OpenVINO notebook image Jupyter's Start a notebook server page failed to start a notebook server using the OpenVINO notebook image. Intel has provided an update to the OpenVINO operator to correct this issue. RHODS-4923 - A non-standard check box displayed after disabling usage data collection After disabling usage data collection on the Cluster settings page, when a user accessed another area of the OpenShift AI dashboard, and then returned to the Cluster settings page, the Allow collection of usage data check box had a non-standard style applied, and therefore did not look the same as other check boxes when selected or cleared. 
RHODS-4938 - Incorrect headings were displayed in the Notebook Images page The Notebook Images page, accessed from the Settings page on the OpenShift AI dashboard, displayed incorrect headings in the user interface. The Notebook image settings heading displayed as BYON image settings , and the Import Notebook images heading displayed as Import BYON images . The correct headings are now displayed as expected. RHODS-4818 - Jupyter was unable to display images when the NVIDIA GPU add-on was installed The Start a notebook server page did not display notebook images after installing the NVIDIA GPU add-on. Images are now correctly displayed, and can be started from the Start a notebook server page. RHODS-4797 - PVC usage limit alerts were not sent when usage exceeded 90% and 100% Alerts indicating when a PVC exceeded 90% and 100% of its capacity failed to be triggered and sent. These alerts are now triggered and sent as expected. RHODS-4366 - Cluster settings were reset on operator restart When the OpenShift AI operator pod was restarted, cluster settings were sometimes reset to their default values, removing any custom configuration. The OpenShift AI operator was restarted when a new version of OpenShift AI was released, and when the node that ran the operator failed. This issue occurred because the operator deployed ConfigMaps incorrectly. Operator deployment instructions have been updated so that this no longer occurs. RHODS-4318 - The OpenVINO notebook image failed to build successfully The OpenVINO notebook image failed to build successfully and displayed an error message. This issue has now been resolved. RHODS-3743 - Starburst Galaxy quick start did not provide download link in the instruction steps The Starburst Galaxy quick start, located on the Resources page on the dashboard, required the user to open the explore-data.ipynb notebook , but failed to provide a link within the instruction steps. Instead, the link was provided in the quick start's introduction. RHODS-1974 - Changing alert notification emails required pod restart Changes to the list of notification email addresses in the Red Hat OpenShift AI Add-On were not applied until after the rhods-operator pod and the prometheus-* pod were restarted. RHODS-2738 - Red Hat OpenShift API Management 1.15.2 add-on installation did not successfully complete For OpenShift AI installations that are integrated with the Red Hat OpenShift API Management 1.15.2 add-on, the Red Hat OpenShift API Management installation process did not successfully obtain the SMTP credentials secret. Subsequently, the installation did not complete. RHODS-3237 - GPU tutorial did not appear on dashboard The "GPU computing" tutorial, located at Gtc2018-numba , did not appear on the Resources page on the dashboard. RHODS-3069 - GPU selection persisted when GPU nodes were unavailable When a user provisioned a notebook server with GPU support, and the utilized GPU nodes were subsequently removed from the cluster, the user could not create a notebook server. This occurred because the most recently used setting for the number of attached GPUs was used by default. RHODS-3181 - Pachyderm now compatible with OpenShift Dedicated 4.10 clusters Pachyderm was not initially compatible with OpenShift Dedicated 4.10, and so was not available in OpenShift AI running on an OpenShift Dedicated 4.10 cluster. Pachyderm is now available on and compatible with OpenShift Dedicated 4.10. 
RHODS-2160 - Uninstall process failed to complete when both OpenShift AI and OpenShift API Management were installed When OpenShift AI and OpenShift API Management are installed together on the same cluster, they use the same Virtual Private Cluster (VPC). The uninstall process for these Add-ons attempts to delete the VPC. Previously, when both Add-ons are installed, the uninstall process for one service was blocked because the other service still had resources in the VPC. The cleanup process has been updated so that this conflict does not occur. RHODS-2747 - Images were incorrectly updated after upgrading OpenShift AI After the process to upgrade OpenShift AI completed, Jupyter failed to update its notebook images. This was due to an issue with the image caching mechanism. Images are now correctly updating after an upgrade. RHODS-2425 - Incorrect TensorFlow and TensorBoard versions displayed during notebook selection The Start a notebook server page displayed incorrect version numbers (2.4.0) for TensorFlow and TensorBoard in the TensorFlow notebook image. These versions have been corrected to TensorFlow 2.7.0 and TensorBoard 2.6.0. RHODS-24339 - Quick start links did not display for enabled applications For some applications, the Open quick start link failed to display on the application tile on the Enabled page. As a result, users did not have direct access to the quick start tour for the relevant application. RHODS-2215 - Incorrect Python versions displayed during notebook selection The Start a notebook server page displayed incorrect versions of Python for the TensorFlow and PyTorch notebook images. Additionally, the third integer of package version numbers is now no longer displayed. RHODS-1977 - Ten minute wait after notebook server start fails If the Jupyter leader pod failed while the notebook server was being started, the user could not access their notebook server until the pod restarted, which took approximately ten minutes. This process has been improved so that the user is redirected to their server when a new leader pod is elected. If this process times out, users see a 504 Gateway Timeout error, and can refresh to access their server. | [
"Bad value for --endpoint-url \"cp\": scheme is missing. Must be of the form http://<hostname>/ or https://<hostname>/",
"the object provided is unrecognized (must be of type Secret): couldn't get version/kind; json parse error: unexpected end of JSON input ({\"apiVersion\":\"v1\",\"kind\":\"Sec ...)",
"oc get kfdefs.kfdef.apps.kubeflow.org -A NAMESPACE NAME AGE redhat-ods-applications rhods-anaconda 3h6m redhat-ods-applications rhods-dashboard 3h6m redhat-ods-applications rhods-data-science-pipelines-operator 3h6m redhat-ods-applications rhods-model-mesh 3h6m redhat-ods-applications rhods-nbc 3h6m redhat-ods-applications rhods-osd-config 3h6m redhat-ods-monitoring modelmesh-monitoring 3h6m redhat-ods-monitoring monitoring 3h6m rhods-notebooks rhods-notebooks 3h6m rhods-notebooks rhods-osd-config 3h5m"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/release_notes/resolved-issues_relnotes |
Chapter 3. Deployment | Chapter 3. Deployment As a storage administrator, you can deploy the Ceph Object Gateway using the Ceph Orchestrator with the command line interface or the service specification. You can also configure multi-site Ceph Object Gateways, and remove the Ceph Object Gateway using the Ceph Orchestrator. The cephadm command deploys the Ceph Object Gateway as a collection of daemons that manages a single-cluster deployment or a particular realm and zone in a multi-site deployment. Note With cephadm , the Ceph Object Gateway daemons are configured using the Ceph Monitor configuration database instead of the ceph.conf file or the command line options. If the configuration is not in the client.rgw section, then the Ceph Object Gateway daemons start up with default settings and bind to port 80 . This section covers the following administrative tasks: Deploying the Ceph Object Gateway using the command line interface . Deploying the Ceph Object Gateway using the service specification . Deploying a multi-site Ceph Object Gateway using the Ceph Orchestrator . Removing the Ceph Object Gateway using the Ceph Orchestrator . Using the Ceph Manager rgw module . Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Root-level access to all the nodes. Available nodes on the storage cluster. All the managers, monitors, and OSDs are deployed in the storage cluster. 3.1. Deploying the Ceph Object Gateway using the command line interface Using the Ceph Orchestrator, you can deploy the Ceph Object Gateway with the ceph orch command in the command line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All manager, monitor and OSD daemons are deployed. Procedure Log into the Cephadm shell: Example You can deploy the Ceph object gateway daemons in three different ways: Method 1 Create realm, zone group, zone, and then use the placement specification with the host name: Create a realm: Syntax Example Create a zone group: Syntax Example Create a zone: Syntax Example Commit the changes: Syntax Example Run the ceph orch apply command: Syntax Example Method 2 Use an arbitrary service name to deploy two Ceph Object Gateway daemons for a single cluster deployment: Syntax Example Method 3 Use an arbitrary service name on a labeled set of hosts: Syntax Note NUMBER_OF_DAEMONS controls the number of Ceph object gateways deployed on each host. To achieve the highest performance without incurring an additional cost, set this value to 2. Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 3.2. Deploying the Ceph Object Gateway using the service specification You can deploy the Ceph Object Gateway using the service specification with either the default or the custom realms, zones, and zone groups. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the bootstrapped host. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure As a root user, create a specification file: Example Configure S3 requests to wait for the duration defined in the rgw_exit_timeout_secs parameter for all outstanding requests to complete by setting rgw_graceful_stop to 'true' during Ceph Object gateway shutdown/restart. 
Syntax Note In containerized deployments, an additional extra_container_agrs configuration of --stop-timeout=120 (or the value of rgw_exit_timeout_secs configuration, if not default) is also necessary in order for it to work as expected with ceph orch stop/restart commands. Edit the radosgw.yml file to include the following details for the default realm, zone, and zone group: Syntax Note NUMBER_OF_DAEMONS controls the number of Ceph Object Gateways deployed on each host. To achieve the highest performance without incurring an additional cost, set this value to 2. Example Optional: For custom realm, zone, and zone group, create the resources and then create the radosgw.yml file: Create the custom realm, zone, and zone group: Example Create the radosgw.yml file with the following details: Example Mount the radosgw.yml file under a directory in the container: Example Note Every time you exit the shell, you have to mount the file in the container before deploying the daemon. Deploy the Ceph Object Gateway using the service specification: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 3.3. Deploying a multi-site Ceph Object Gateway using the Ceph Orchestrator Ceph Orchestrator supports multi-site configuration options for the Ceph Object Gateway. You can configure each object gateway to work in an active-active zone configuration allowing writes to a non-primary zone. The multi-site configuration is stored within a container called a realm. The realm stores zone groups, zones, and a time period. The rgw daemons handle the synchronization eliminating the need for a separate synchronization agent, thereby operating with an active-active configuration. You can also deploy multi-site zones using the command line interface (CLI). Note The following configuration assumes at least two Red Hat Ceph Storage clusters are in geographically separate locations. However, the configuration also works on the same site. Prerequisites At least two running Red Hat Ceph Storage clusters. At least two Ceph Object Gateway instances, one for each Red Hat Ceph Storage cluster. Root-level access to all the nodes. Nodes or containers are added to the storage cluster. All Ceph Manager, Monitor and OSD daemons are deployed. Procedure In the cephadm shell, configure the primary zone: Create a realm: Syntax Example If the storage cluster has a single realm, then specify the --default flag. Create a primary zone group: Syntax Example Create a primary zone: Syntax Example Optional: Delete the default zone, zone group, and the associated pools. Important Do not delete the default zone and its pools if you are using the default zone and zone group to store data. Also, removing the default zone group deletes the system user. To access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands. Example Create a system user: Syntax Example Make a note of the access_key and secret_key . Add the access key and system key to the primary zone: Syntax Example Commit the changes: Syntax Example Outside the cephadm shell, fetch the FSID of the storage cluster and the processes: Example Start the Ceph Object Gateway daemon: Syntax Example In the Cephadm shell, configure the secondary zone. Pull the primary realm configuration from the host: Syntax Example Pull the primary period configuration from the host: Syntax Example Configure a secondary zone: Syntax Example Optional: Delete the default zone. 
Important Do not delete the default zone and its pools if you are using the default zone and zone group to store data. To access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands. Example Update the Ceph configuration database: Syntax Example Commit the changes: Syntax Example Outside the Cephadm shell, fetch the FSID of the storage cluster and the processes: Example Start the Ceph Object Gateway daemon: Syntax Example Optional: Deploy multi-site Ceph Object Gateways using the placement specification: Syntax Example Verification Check the synchronization status to verify the deployment: Example 3.4. Removing the Ceph Object Gateway using the Ceph Orchestrator You can remove the Ceph object gateway daemons using the ceph orch rm command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. At least one Ceph object gateway daemon deployed on the hosts. Procedure Log into the Cephadm shell: Example List the service: Example Remove the service: Syntax Example Verification List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the Ceph object gateway using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the Ceph object gateway using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. 3.5. Using the Ceph Manager rgw module As a storage administrator, you can deploy Ceph Object Gateway, single site and multi-site, using the rgw module. It helps with bootstrapping and configuring Ceph Object realm, zonegroup, and the different related entities. You can use the available tokens for the newly created or existing realms. This token is a base64 string that encapsulates the realm information and its master zone endpoint authentication data. In a multi-site configuration, these tokens can be used to pull a realm to create a secondary zone on a different cluster that syncs with the master zone on the primary cluster by using the rgw zone create command. 3.5.1. Deploying Ceph Object Gateway using the rgw module Bootstrapping Ceph Object Gateway realm creates a new realm entity, a new zonegroup, and a new zone. The rgw module instructs the orchestrator to create and deploy the corresponding Ceph Object Gateway daemons. Enable the rgw module using the ceph mgr module enable rgw command. After enabling the rgw module, either pass the arguments in the command line or use the yaml specification file to bootstrap the realm. Prerequisites A running Red Hat Ceph Storage cluster with at least one OSD deployed. Procedure Log into the Cephadm shell: Example Enable the` rgw`module: Example Bootstrap the Ceph Object Gateway realm using either the command-line or the yaml specification file: Option 1: Use the command-line interface: Syntax Example Option 2: Use the yaml specification file: As a root user, create the yaml file: Syntax Example Optional: You can add the hostnames parameter to the zonegroup during realm bootstrap: Syntax Example Mount the YAML file under a directory in the container: Example Bootstrap the realm: Example Note The specification file used by the rgw module has the same format as the one used by the orchestrator. Thus, you can provide any orchestration supported Ceph Object Gateway parameters including advanced configuration features such as SSL certificates. 
List the available tokens: Example Note If you run the above command before the Ceph Object Gateway daemons get deployed, it displays a message that there are no tokens as there are no endpoints yet. Verification Verify Object Gateway deployment: Example Verify the hostnames added via realm bootstrap: Syntax Example See the hostnames section of the zonegroup for the list of host names specified in zonegroup_hostnames in the Ceph Object Gateway specification file. 3.5.2. Deploying Ceph Object Gateway multi-site using the rgw module Bootstrapping Ceph Object Gateway realm creates a new realm entity, a new zonegroup, and a new zone. It configures a new system user that can be used for multi-site sync operations. The rgw module instructs the orchestrator to create and deploy the corresponding Ceph Object Gateway daemons. Enable the rgw module using the ceph mgr module enable rgw command. After enabling the rgw module, either pass the arguments in the command line or use the yaml specification file to bootstrap the realm. Prerequisites A running Red Hat Ceph Storage cluster with at least one OSD deployed. Procedure Log into the Cephadm shell: Example Enable the` rgw`module: Example Bootstrap the Ceph Object Gateway realm using either the command-line or the yaml specification file: Option 1: Use the command-line interface: Syntax Example Option 2: Use the yaml specification file: As a root user, create the yaml file: Syntax Example Mount the YAML file under a directory in the container: Example Bootstrap the realm: Example Note The specification file used by the rgw module has the same format as the one used by the orchestrator. Thus, you can provide any orchestration supported Ceph Object Gateway parameters including advanced configuration features such as SSL certificates. List the available tokens: Example Note If you run the above command before the Ceph Object Gateway daemons get deployed, it displays a message that there are no tokens as there are no endpoints yet. Create the secondary zone using these tokens and join the existing realms: As a root user, create the yaml file: Example Mount the zone-spec.yaml file under a directory in the container: Example Enable the` rgw`module on the secondary zone: Example Create the secondary zone: Example Verification Verify Object Gateway multi-site deployment: Example | [
"cephadm shell",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=default --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default",
"radosgw-admin period update --rgw-realm= REALM_NAME --commit",
"radosgw-admin period update --rgw-realm=test_realm --commit",
"ceph orch apply rgw NAME [--realm= REALM_NAME ] [--zone= ZONE_NAME ] [--zonegroup= ZONE_GROUP_NAME ] --placement=\" NUMBER_OF_DAEMONS [ HOST_NAME_1 HOST_NAME_2 ]\"",
"ceph orch apply rgw test --realm=test_realm --zone=test_zone --zonegroup=default --placement=\"2 host01 host02\"",
"ceph orch apply rgw SERVICE_NAME",
"ceph orch apply rgw foo",
"ceph orch host label add HOST_NAME_1 LABEL_NAME ceph orch host label add HOSTNAME_2 LABEL_NAME ceph orch apply rgw SERVICE_NAME --placement=\"label: LABEL_NAME count-per-host: NUMBER_OF_DAEMONS \" --port=8000",
"ceph orch host label add host01 rgw # the 'rgw' label can be anything ceph orch host label add host02 rgw ceph orch apply rgw foo --placement=\"label:rgw count-per-host:2\" --port=8000",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"touch radosgw.yml",
"ceph config set client.rgw rgw_graceful_stop true ceph config set client.rgw rgw_exit_timeout_secs 120",
"[root@host1 ~]USD cat rgw_spec.yaml service_type: rgw service_id: foo placement: count_per_host: 1 hosts: - rgw_node spec: rgw_frontend_port: 8081 extra_container_args: - \"--stop-timeout=120\"",
"service_type: rgw service_id: REALM_NAME . ZONE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 count_per_host: NUMBER_OF_DAEMONS spec: rgw_realm: REALM_NAME rgw_zone: ZONE_NAME rgw_zonegroup: ZONE_GROUP_NAME rgw_frontend_port: FRONT_END_PORT networks: - NETWORK_CIDR # Ceph Object Gateway service binds to a specific network",
"service_type: rgw service_id: default placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: default rgw_zone: default rgw_zonegroup: default rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"radosgw-admin realm create --rgw-realm=test_realm --default radosgw-admin zonegroup create --rgw-zonegroup=test_zonegroup --default radosgw-admin zone create --rgw-zonegroup=test_zonegroup --rgw-zone=test_zone --default radosgw-admin period update --rgw-realm=test_realm --commit",
"service_type: rgw service_id: test_realm.test_zone placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: test_realm rgw_zone: test_zone rgw_zonegroup: test_zonegroup rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i /var/lib/ceph/radosgw/radosgw.yml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default",
"radosgw-admin zone create --rgw-zonegroup= PRIMARY_ZONE_GROUP_NAME --rgw-zone= PRIMARY_ZONE_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --endpoints=http://rgw1:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zonegroup delete --rgw-zonegroup=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"radosgw-admin user create --uid= USER_NAME --display-name=\" USER_NAME \" --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --system",
"radosgw-admin user create --uid=zone.user --display-name=\"Zone user\" --system",
"radosgw-admin zone modify --rgw-zone= PRIMARY_ZONE_NAME --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone=us-east-1 --access-key=NE48APYCAODEPLKBCZVQ--secret=u24GHQWRE3yxxNBnFBzjM4jn14mFIckQ4EKL6LoW",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-1.host01.ahdtsw.service systemctl enable [email protected]_realm.us-east-1.host01.ahdtsw.service",
"radosgw-admin realm pull --rgw-realm= PRIMARY_REALM --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY --default",
"radosgw-admin realm pull --rgw-realm=test_realm --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ --default",
"radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= SECONDARY_ZONE_NAME --endpoints=http:// RGW_SECONDARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --endpoints=http:// FQDN :80 [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"ceph config set SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME",
"ceph config set rgw rgw_zone us-east-2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service",
"ceph orch apply rgw NAME --realm= REALM_NAME --zone= PRIMARY_ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw east --realm=test_realm --zone=us-east-1 --placement=\"2 host01 host02\"",
"radosgw-admin sync status",
"cephadm shell",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm rgw.test_realm.test_zone_bb",
"ceph orch ps",
"ceph orch ps",
"cephadm shell",
"ceph mgr module enable rgw",
"ceph rgw realm bootstrap [--realm name REALM_NAME ] [--zonegroup-name ZONEGROUP_NAME ] [--zone-name ZONE_NAME ] [--port PORT_NUMBER ] [--placement HOSTNAME ] [--start-radosgw]",
"ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement=\"host01 host02\" --start-radosgw Realm(s) created correctly. Please, use 'ceph rgw realm tokens' to get the token.",
"rgw_realm: REALM_NAME rgw_zonegroup: ZONEGROUP_NAME rgw_zone: ZONE_NAME placement: hosts: - _HOSTNAME_1_ - _HOSTNAME_2_",
"cat rgw.yaml rgw_realm: myrealm rgw_zonegroup: myzonegroup rgw_zone: myzone placement: hosts: - host01 - host02",
"service_type: rgw placement: hosts: - _host1_ - _host2_ spec: rgw_realm: my_realm rgw_zonegroup: my_zonegroup rgw_zone: my_zone zonegroup_hostnames: - _hostname1_ - _hostname2_",
"service_type: rgw placement: hosts: - _host1_ - _host2_ spec: rgw_realm: my_realm rgw_zonegroup: my_zonegroup rgw_zone: my_zone zonegroup_hostnames: - foo - bar",
"cephadm shell --mount rgw.yaml:/var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm bootstrap -i /var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm tokens | jq [ { \"realm\": \"myrealm\", \"token\": \"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbSIsCiAgICAicmVhbG1faWQiOiAiZDA3YzAwZWYtOTA0MS00ZjZlLTg4MDQtN2Q0MDI0MDU1NmFlIiwKICAgICJlbmRwb2ludCI6ICJodHRwOi8vdm0tMDA6NDMyMSIsCiAgICAiYWNjZXNzX2tleSI6ICI5NTY1VFZSMVFWTExFRzdVNFIxRCIsCiAgICAic2VjcmV0IjogImQ3b0FJQXZrNEdYeXpyd3Q2QVZ6bEZNQmNnRG53RVdMMHFDenE3cjUiCn1=\" } ]",
"ceph orch list --daemon-type=rgw NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID rgw.myrealm.myzonegroup.ceph-saya-6-osd-host01.eburst ceph-saya-6-osd-host01 *:80 running (111m) 9m ago 111m 82.3M - 17.2.6-22.el9cp 2d5b080de0b0 2f3eaca7e88e",
"radosgw-admin zonegroup get --rgw-zonegroup _zone_group_name_",
"radosgw-admin zonegroup get --rgw-zonegroup my_zonegroup { \"id\": \"02a175e2-7f23-4882-8651-6fbb15d25046\", \"name\": \"my_zonegroup_ck\", \"api_name\": \"my_zonegroup_ck\", \"is_master\": true, \"endpoints\": [ \"http://vm-00:80\" ], \"hostnames\": [ \"foo\" \"bar\" ], \"hostnames_s3website\": [], \"master_zone\": \"f42fea84-a89e-4995-996e-61b7223fb0b0\", \"zones\": [ { \"id\": \"f42fea84-a89e-4995-996e-61b7223fb0b0\", \"name\": \"my_zone_ck\", \"endpoints\": [ \"http://vm-00:80\" ], \"log_meta\": false, \"log_data\": false, \"bucket_index_max_shards\": 11, \"read_only\": false, \"tier_type\": \"\", \"sync_from_all\": true, \"sync_from\": [], \"redirect_zone\": \"\", \"supported_features\": [ \"compress-encrypted\", \"resharding\" ] } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"439e9c37-4ddc-43a3-99e9-ea1f3825bb51\", \"sync_policy\": { \"groups\": [] }, \"enabled_features\": [ \"resharding\" ] }",
"cephadm shell",
"ceph mgr module enable rgw",
"ceph rgw realm bootstrap [--realm name REALM_NAME ] [--zonegroup-name ZONEGROUP_NAME ] [--zone-name ZONE_NAME ] [--port PORT_NUMBER ] [--placement HOSTNAME ] [--start-radosgw]",
"ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement=\"host01 host02\" --start-radosgw Realm(s) created correctly. Please, use 'ceph rgw realm tokens' to get the token.",
"rgw_realm: REALM_NAME rgw_zonegroup: ZONEGROUP_NAME rgw_zone: ZONE_NAME placement: hosts: - HOSTNAME_1 - HOSTNAME_2 spec: rgw_frontend_port: PORT_NUMBER zone_endpoints: http:// RGW_HOSTNAME_1 : RGW_PORT_NUMBER_1 , http:// RGW_HOSTNAME_2 : RGW_PORT_NUMBER_2",
"cat rgw.yaml rgw_realm: myrealm rgw_zonegroup: myzonegroup rgw_zone: myzone placement: hosts: - host01 - host02 spec: rgw_frontend_port: 5500 zone_endpoints: http://<rgw_host1>:<rgw_port1>, http://<rgw_host2>:<rgw_port2>",
"cephadm shell --mount rgw.yaml:/var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm bootstrap -i /var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm tokens | jq [ { \"realm\": \"myrealm\", \"token\": \"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbSIsCiAgICAicmVhbG1faWQiOiAiZDA3YzAwZWYtOTA0MS00ZjZlLTg4MDQtN2Q0MDI0MDU1NmFlIiwKICAgICJlbmRwb2ludCI6ICJodHRwOi8vdm0tMDA6NDMyMSIsCiAgICAiYWNjZXNzX2tleSI6ICI5NTY1VFZSMVFWTExFRzdVNFIxRCIsCiAgICAic2VjcmV0IjogImQ3b0FJQXZrNEdYeXpyd3Q2QVZ6bEZNQmNnRG53RVdMMHFDenE3cjUiCn1=\" } ]",
"cat zone-spec.yaml rgw_zone: my-secondary-zone rgw_realm_token: <token> placement: hosts: - ceph-node-1 - ceph-node-2 spec: rgw_frontend_port: 5500",
"cephadm shell --mount zone-spec.yaml:/var/lib/ceph/radosgw/zone-spec.yaml",
"ceph mgr module enable rgw",
"ceph rgw zone create -i /var/lib/ceph/radosgw/zone-spec.yaml",
"radosgw-admin realm list { \"default_info\": \"d07c00ef-9041-4f6e-8804-7d40240556ae\", \"realms\": [ \"myrealm\" ] }"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/object_gateway_guide/deployment |
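Taken together, the single-cluster commands above (Section 3.1, Method 1) form a short sequence; the following recap strings them into order using the same illustrative names from the chapter examples (test_realm, test_zone, host01, host02), so substitute values from your own environment before running it. It is a condensed sketch of the documented steps, not an additional procedure.
# Run inside the Cephadm shell (cephadm shell)
# Create the realm, zone group, and zone, then commit the period
radosgw-admin realm create --rgw-realm=test_realm --default
radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default
radosgw-admin period update --rgw-realm=test_realm --commit
# Deploy two Ceph Object Gateway daemons on the named hosts
ceph orch apply rgw test --realm=test_realm --zone=test_zone --zonegroup=default --placement="2 host01 host02"
# Verify the service and its daemons
ceph orch ls
ceph orch ps --daemon_type=rgw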
function::real_mount | function::real_mount Name function::real_mount - get the 'struct mount' pointer Synopsis Arguments vfsmnt Pointer to 'struct vfsmount' Description Returns the 'struct mount' pointer value for a 'struct vfsmount' pointer. | [
"real_mount:long(vfsmnt:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-real-mount |
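As a rough illustration of how real_mount can be used, the following SystemTap one-liner prints the struct mount address derived from a vfsmount pointer. The probe point kernel.function("mntput").call and its $mnt target variable are assumptions about the running kernel (and require kernel debuginfo to resolve), so treat this as a sketch to adapt rather than a supported recipe.
# Print the struct mount pointer for the first vfsmount seen, then exit
stap -e 'probe kernel.function("mntput").call { printf("vfsmount=%p -> struct mount=0x%x\n", $mnt, real_mount($mnt)); exit() }'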
Chapter 4. Using Infoblox as DHCP and DNS Providers | Chapter 4. Using Infoblox as DHCP and DNS Providers You can use Capsule Server to connect to your Infoblox application to create and manage DHCP and DNS records, and to reserve IP addresses. The supported Infoblox version is NIOS 8.0 or higher and Satellite 6.11 or higher. 4.1. Infoblox Limitations All DHCP and DNS records can be managed only in a single Network or DNS view. After you install the Infoblox modules on Capsule and set up the view using the satellite-installer command, you cannot edit the view. Capsule Server communicates with a single Infoblox node using the standard HTTPS web API. If you want to configure clustering and High Availability, make the configurations in Infoblox. Hosting PXE-related files using Infoblox's TFTP functionality is not supported. You must use Capsule as a TFTP server for PXE provisioning. For more information, see Chapter 3, Configuring Networking . Satellite IPAM feature cannot be integrated with Infoblox. 4.2. Infoblox Prerequisites You must have Infoblox account credentials to manage DHCP and DNS entries in Satellite. Ensure that you have Infoblox administration roles with the names: DHCP Admin and DNS Admin . The administration roles must have permissions or belong to an admin group that permits the accounts to perform tasks through the Infoblox API. 4.3. Installing the Infoblox CA Certificate on Capsule Server You must install Infoblox HTTPS CA certificate on the base system for all Capsules that you want to integrate with Infoblox applications. You can download the certificate from the Infoblox web UI, or you can use the following OpenSSL commands to download the certificate: The infoblox.example.com entry must match the host name for the Infoblox application in the X509 certificate. To test the CA certificate, use a CURL query: Example positive response: Use the following Red Hat Knowledgebase article to install the certificate: How to install a CA certificate on Red Hat Enterprise Linux 6 / 7 . 4.4. Installing the DHCP Infoblox module Use this procedure to install the DHCP Infoblox module on Capsule. Note that you cannot manage records in separate views. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Section 4.5, "Installing the DNS Infoblox Module" . DHCP Infoblox Record Type Considerations Use only the --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress option to configure the DHCP and DNS modules. Configuring both DHCP and DNS Infoblox modules with the host record type setting causes DNS conflicts and is not supported. If you install the Infoblox module on Capsule Server with the --foreman-proxy-plugin-dhcp-infoblox-record-type option set to host , you must unset both DNS Capsule and Reverse DNS Capsule options because Infoblox does the DNS management itself. You cannot use the host option without creating conflicts and, for example, being unable to rename hosts in Satellite. Procedure On Capsule, enter the following command: In the Satellite web UI, navigate to Infrastructure > Capsules and select the Capsule with the Infoblox DHCP module and click Refresh . Ensure that the dhcp features are listed. For all domains managed through Infoblox, ensure that the DNS Capsule is set for that domain. To verify, in the Satellite web UI, navigate to Infrastructure > Domains , and inspect the settings of each domain. For all subnets managed through Infoblox, ensure that DHCP Capsule and Reverse DNS Capsule is set. 
To verify, in the Satellite web UI, navigate to Infrastructure > Subnets , and inspect the settings of each subnet. 4.5. Installing the DNS Infoblox Module Use this procedure to install the DNS Infoblox module on Capsule. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Section 4.4, "Installing the DHCP Infoblox module" . DNS records are managed in a single DNS view that you select during installation; the default DNS view is used unless you specify a different one. Procedure On Capsule, enter the following command to configure the Infoblox module: Optionally, you can change the value of the --foreman-proxy-plugin-dns-infoblox-dns-view option to specify a DNS Infoblox view other than the default view. In the Satellite web UI, navigate to Infrastructure > Capsules and select the Capsule with the Infoblox DNS module and click Refresh . Ensure that the dns features are listed. | [
"update-ca-trust enable openssl s_client -showcerts -connect infoblox.example.com :443 </dev/null | openssl x509 -text >/etc/pki/ca-trust/source/anchors/infoblox.crt update-ca-trust extract",
"curl -u admin:password https:// infoblox.example.com /wapi/v2.0/network",
"[ { \"_ref\": \"network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w: infoblox.example.com /24/default\", \"network\": \"192.168.202.0/24\", \"network_view\": \"default\" } ]",
"satellite-installer --enable-foreman-proxy-plugin-dhcp-infoblox --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed false --foreman-proxy-dhcp-provider infoblox --foreman-proxy-dhcp-server infoblox.example.com --foreman-proxy-plugin-dhcp-infoblox-dns-view default --foreman-proxy-plugin-dhcp-infoblox-network-view default --foreman-proxy-plugin-dhcp-infoblox-password infoblox --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress --foreman-proxy-plugin-dhcp-infoblox-username admin",
"satellite-installer --enable-foreman-proxy-plugin-dns-infoblox --foreman-proxy-dns true --foreman-proxy-dns-provider infoblox --foreman-proxy-plugin-dns-infoblox-dns-server infoblox.example.com --foreman-proxy-plugin-dns-infoblox-dns-view default --foreman-proxy-plugin-dns-infoblox-password infoblox --foreman-proxy-plugin-dns-infoblox-username admin"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/Using_Infoblox_as_DHCP_and_DNS_Providers_provisioning |
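In addition to the connectivity test with curl shown earlier, it can help to confirm that the account you pass to satellite-installer can read the views the plug-ins will use. The following WAPI queries are a hedged sketch: the object names (view for DNS views, networkview for network views) and the v2.0 path follow the example URL used in this chapter, but confirm them against your NIOS release and substitute your own credentials and host name.
# List DNS views visible to the account (the DNS module writes to a single view)
curl -u admin:password https://infoblox.example.com/wapi/v2.0/view
# List network views (the DHCP module is bound to a single network view)
curl -u admin:password https://infoblox.example.com/wapi/v2.0/networkview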
probe::scheduler.balance | probe::scheduler.balance Name probe::scheduler.balance - A cpu attempting to find more work. Synopsis scheduler.balance Values name name of the probe point Context The cpu looking for more work. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scheduler-balance |
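As an informal example of how this probe point might be used, the following SystemTap one-liner counts how often each CPU enters the load-balancing path over a ten-second window. It is a sketch that assumes SystemTap and matching kernel debuginfo are installed; scheduler.* probes resolve to kernel internals that can vary between kernel versions.
# Count scheduler.balance hits per CPU for 10 seconds, then print the totals
stap -e 'global balances
probe scheduler.balance { balances[cpu()] <<< 1 }
probe timer.s(10) { exit() }
probe end { foreach (c in balances) printf("cpu%d: %d\n", c, @count(balances[c])) }'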
5.2. OProfile | 5.2. OProfile OProfile is a low overhead, system-wide performance monitoring tool provided by the oprofile package. It uses the performance monitoring hardware on the processor to retrieve information about the kernel and executables on the system, such as when memory is referenced, the number of second-level cache requests, and the number of hardware interrupts received. OProfile is also able to profile applications that run in a Java Virtual Machine (JVM). The following is a selection of the tools provided by OProfile . Note that the legacy opcontrol tool and the new operf tool are mutually exclusive. ophelp Displays available events for the system's processor along with a brief description of each. operf Intended to replace opcontrol . The operf tool uses the Linux Performance Events subsystem, allowing you to target your profiling more precisely, as a single process or system-wide, and allowing OProfile to co-exist better with other tools using the performance monitoring hardware on your system. Unlike opcontrol , no initial setup is required, and it can be used without the root privileges unless the --system-wide option is in use. opimport Converts sample database files from a foreign binary format to the native format for the system. Only use this option when analyzing a sample database from a different architecture. opannotate Creates an annotated source for an executable if the application was compiled with debugging symbols. opreport Retrieves profile data. opcontrol This tool is used to start and stop the OProfile daemon ( oprofiled ) and configure a profile session. oprofiled Runs as a daemon to periodically write sample data to disk. Legacy mode ( opcontrol , oprofiled , and post-processing tools) remains available, but it is no longer the recommended profiling method. For a detailed description of the legacy mode, see the Configuring OProfile Using Legacy Mode chapter in the System Administrator's Guide . 5.2.1. Using OProfile operf is the recommended tool for collecting profiling data. The tool does not require any initial configuration, and all options are passed to it on the command line. Unlike the legacy opcontrol tool, operf can run without root privileges. See the Using operf chapter in the System Administrator's Guide for detailed instructions on how to use the operf tool. Example 5.1. Using operf to Profile a Java Program In the following example, the operf tool is used to collect profiling data from a Java (JIT) program, and the opreport tool is then used to output per-symbol data. Install the demonstration Java program used in this example. It is a part of the java-1.8.0-openjdk-demo package, which is included in the Optional channel. See Enabling Supplementary and Optional Repositories for instructions on how to use the Optional channel. When the Optional channel is enabled, install the package: Install the oprofile-jit package for OProfile to be able to collect profiling data from Java programs: Create a directory for OProfile data: Change into the directory with the demonstration program: Start the profiling: Change into the home directory and analyze the collected data: A sample output may look like the following: | [
"~]# yum install java-1.8.0-openjdk-demo",
"~]# yum install oprofile-jit",
"~]USD mkdir ~/oprofile_data",
"~]USD cd /usr/lib/jvm/java-1.8.0-openjdk/demo/applets/MoleculeViewer/",
"~]USD operf -d ~/oprofile_data appletviewer -J\"-agentpath:/usr/lib64/oprofile/libjvmti_oprofile.so\" example2.html",
"~]USD cd",
"~]USD opreport --symbols --threshold 0.5",
"opreport --symbols --threshold 0.5 Using /home/rkratky/oprofile_data/samples/ for samples directory. WARNING! Some of the events were throttled. Throttling occurs when the initial sample rate is too high, causing an excessive number of interrupts. Decrease the sampling frequency. Check the directory /home/rkratky/oprofile_data/samples/current/stats/throttled for the throttled event names. warning: /dm_crypt could not be found. warning: /e1000e could not be found. warning: /kvm could not be found. CPU: Intel Ivy Bridge microarchitecture, speed 3600 MHz (estimated) Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000 samples % image name symbol name 14270 57.1257 libjvm.so /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.51-1.b16.el7_1.x86_64/jre/lib/amd64/server/libjvm.so 3537 14.1593 23719.jo Interpreter 690 2.7622 libc-2.17.so fgetc 581 2.3259 libX11.so.6.3.0 /usr/lib64/libX11.so.6.3.0 364 1.4572 libpthread-2.17.so pthread_getspecific 130 0.5204 libfreetype.so.6.10.0 /usr/lib64/libfreetype.so.6.10.0 128 0.5124 libc-2.17.so __memset_sse2"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/oprofile |
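The worked example above profiles JIT-compiled Java code; the same operf workflow applies to a plain native binary, and opannotate can map samples back to source lines when the binary carries debugging symbols. The following is a minimal sketch; myapp and myapp.c are placeholder names for your own program, not files shipped with any package.
# Build with debugging symbols so opannotate can attribute samples to source lines
gcc -g -O2 -o myapp myapp.c
# Profile the binary; sample data is written to ./oprofile_data by default
operf ./myapp
# Summarize per-symbol counts, then view the annotated source
opreport --symbols --threshold 0.5
opannotate --source ./myapp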
Chapter 3. Performing a cluster update | Chapter 3. Performing a cluster update 3.1. Updating a cluster using the CLI You can perform minor version and patch updates on an OpenShift Container Platform cluster by using the OpenShift CLI ( oc ). 3.1.1. Prerequisites Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . Have a recent etcd backup in case your update fails and you must restore your cluster to a previous state. Have a recent Container Storage Interface (CSI) volume snapshot in case you need to restore persistent volumes due to a pod failure. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next minor version during a cluster update. See Updating installed Operators for more information on how to check compatibility and, if necessary, update the installed Operators. Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . Ensure that you address all Upgradeable=False conditions so the cluster allows an update to the next minor version. An alert displays at the top of the Cluster Settings page when you have one or more cluster Operators that cannot be updated. You can still update to the available patch update for the minor release you are currently on. Review the list of APIs that were removed in Kubernetes 1.28, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see Preparing to update to OpenShift Container Platform 4.16 . If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. Important When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support. Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. Additional resources Support policy for unmanaged Operators 3.1.2. Pausing a MachineHealthCheck resource During the update process, nodes in the cluster might become temporarily unavailable.
In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites Install the OpenShift CLI ( oc ). Procedure To list all the available MachineHealthCheck resources that you want to pause, run the following command: USD oc get machinehealthcheck -n openshift-machine-api To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to the MachineHealthCheck resource. Run the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused="" The annotated MachineHealthCheck resource resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: "" spec: selector: matchLabels: role: worker unhealthyConditions: - type: "Ready" status: "Unknown" timeout: "300s" - type: "Ready" status: "False" timeout: "300s" maxUnhealthy: "40%" status: currentHealthy: 5 expectedMachines: 5 Important Resume the machine health checks after updating the cluster. To resume the check, remove the pause annotation from the MachineHealthCheck resource by running the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused- 3.1.3. About updating single node OpenShift Container Platform You can update, or upgrade, a single-node OpenShift Container Platform cluster by using either the console or CLI. However, note the following limitations: The prerequisite to pause the MachineHealthCheck resources is not required because there is no other node to perform the health check. Restoring a single-node OpenShift Container Platform cluster using an etcd backup is not officially supported. However, it is good practice to perform the etcd backup in case your update fails. If your control plane is healthy, you might be able to restore your cluster to a previous state by using the backup. Updating a single-node OpenShift Container Platform cluster requires downtime and can include an automatic reboot. The amount of downtime depends on the update payload, as described in the following scenarios: If the update payload contains an operating system update, which requires a reboot, the downtime is significant and impacts cluster management and user workloads. If the update contains machine configuration changes that do not require a reboot, the downtime is less, and the impact on the cluster management and user workloads is lessened. In this case, the node draining step is skipped with single-node OpenShift Container Platform because there is no other node in the cluster to reschedule the workloads to. If the update payload does not contain an operating system update or machine configuration changes, a short API outage occurs and resolves quickly. Important There are conditions, such as bugs in an updated package, that can cause the single node to not restart after a reboot. In this case, the update does not roll back automatically. Additional resources For information on which machine configuration changes require a reboot, see the note in About the Machine Config Operator . 3.1.4. Updating a cluster by using the CLI You can use the OpenShift CLI ( oc ) to review and request cluster updates. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal.
Prerequisites Install the OpenShift CLI ( oc ) that matches the version for your updated version. Log in to the cluster as user with cluster-admin privileges. Pause all MachineHealthCheck resources. Procedure View the available updates and note the version number of the update that you want to apply: USD oc adm upgrade Example output Cluster version is 4.13.10 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.13 (available channels: candidate-4.13, candidate-4.14, fast-4.13, stable-4.13) Recommended updates: VERSION IMAGE 4.13.14 quay.io/openshift-release-dev/ocp-release@sha256:406fcc160c097f61080412afcfa7fd65284ac8741ac7ad5b480e304aba73674b 4.13.13 quay.io/openshift-release-dev/ocp-release@sha256:d62495768e335c79a215ba56771ff5ae97e3cbb2bf49ed8fb3f6cefabcdc0f17 4.13.12 quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55 4.13.11 quay.io/openshift-release-dev/ocp-release@sha256:e1c2377fdae1d063aaddc753b99acf25972b6997ab9a0b7e80cfef627b9ef3dd Note If there are no recommended updates, updates that have known issues might still be available. See Updating along a conditional update path for more information. For details and information on how to perform a Control Plane Only update, please refer to the Preparing to perform a Control Plane Only update page, listed in the Additional resources section. Based on your organization requirements, set the appropriate update channel. For example, you can set your channel to stable-4.13 or fast-4.13 . For more information about channels, refer to Understanding update channels and releases listed in the Additional resources section. USD oc adm upgrade channel <channel> For example, to set the channel to stable-4.16 : USD oc adm upgrade channel stable-4.16 Important For production clusters, you must subscribe to a stable-* , eus-* , or fast-* channel. Note When you are ready to move to the minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the minor version is available in the path. Apply an update: To update to the latest version: USD oc adm upgrade --to-latest=true 1 To update to a specific version: USD oc adm upgrade --to=<version> 1 1 1 <version> is the update version that you obtained from the output of the oc adm upgrade command. Important When using oc adm upgrade --help , there is a listed option for --force . This is heavily discouraged , as using the --force option bypasses cluster-side guards, including release verification and precondition checks. Using --force does not guarantee a successful update. Bypassing guards put the cluster at risk. Review the status of the Cluster Version Operator: USD oc adm upgrade After the update completes, you can confirm that the cluster version has updated to the new version: USD oc adm upgrade Example output Cluster version is <version> Upstream is unset, so the cluster will use an appropriate default. 
Channel: stable-<version> (available channels: candidate-<version>, eus-<version>, fast-<version>, stable-<version>) No updates available. You may force an update to a specific release image, but doing so might not be supported and might result in downtime or data loss. If you are updating your cluster to the minor version, such as version X.y to X.(y+1), it is recommended to confirm that your nodes are updated before deploying workloads that rely on a new feature: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.29.4 ip-10-0-170-223.ec2.internal Ready master 82m v1.29.4 ip-10-0-179-95.ec2.internal Ready worker 70m v1.29.4 ip-10-0-182-134.ec2.internal Ready worker 70m v1.29.4 ip-10-0-211-16.ec2.internal Ready master 82m v1.29.4 ip-10-0-250-100.ec2.internal Ready worker 69m v1.29.4 3.1.5. Retrieving information about a cluster update using oc adm upgrade status (Technology Preview) When updating your cluster, it is useful to understand how your update is progressing. While the oc adm upgrade command returns limited information about the status of your update, this release introduces the oc adm upgrade status command as a Technology Preview feature. This command decouples status information from the oc adm upgrade command and provides specific information regarding a cluster update, including the status of the control plane and worker node updates. The oc adm upgrade status command is read-only and will never alter any state in your cluster. Important The oc adm upgrade status command is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The oc adm upgrade status command can be used for clusters from version 4.12 up to the latest supported release. Important While your cluster does not need to be a Technology Preview-enabled cluster, you must enable the OC_ENABLE_CMD_UPGRADE_STATUS Technology Preview environment variable, otherwise the OpenShift CLI ( oc ) will not recognize the command and you will not be able to use the feature. Procedure Set the OC_ENABLE_CMD_UPGRADE_STATUS environmental variable to true by running the following command: USD export OC_ENABLE_CMD_UPGRADE_STATUS=true Run the oc adm upgrade status command: USD oc adm upgrade status Example 3.1. Example output for an update progressing successfully = Control Plane = Assessment: Progressing Target Version: 4.14.1 (from 4.14.0) Completion: 97% Duration: 54m Operator Status: 32 Healthy, 1 Unavailable Control Plane Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-53-40.us-east-2.compute.internal Progressing Draining 4.14.0 +10m ip-10-0-30-217.us-east-2.compute.internal Outdated Pending 4.14.0 ? ip-10-0-92-180.us-east-2.compute.internal Outdated Pending 4.14.0 ? 
= Worker Upgrade = = Worker Pool = Worker Pool: worker Assessment: Progressing Completion: 0% Worker Status: 3 Total, 2 Available, 1 Progressing, 3 Outdated, 1 Draining, 0 Excluded, 0 Degraded Worker Pool Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-4-159.us-east-2.compute.internal Progressing Draining 4.14.0 +10m ip-10-0-20-162.us-east-2.compute.internal Outdated Pending 4.14.0 ? ip-10-0-99-40.us-east-2.compute.internal Outdated Pending 4.14.0 ? = Worker Pool = Worker Pool: infra Assessment: Progressing Completion: 0% Worker Status: 1 Total, 0 Available, 1 Progressing, 1 Outdated, 1 Draining, 0 Excluded, 0 Degraded Worker Pool Node NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-4-159-infra.us-east-2.compute.internal Progressing Draining 4.14.0 +10m = Update Health = SINCE LEVEL IMPACT MESSAGE 14m4s Info None Update is proceeding well With this information, you can make informed decisions on how to proceed with your update. Additional resources Performing a Control Plane Only update Updating along a conditional update path Understanding update channels and releases 3.1.6. Updating along a conditional update path You can update along a recommended conditional update path using the web console or the OpenShift CLI ( oc ). When a conditional update is not recommended for your cluster, you can update along a conditional update path using the OpenShift CLI ( oc ) 4.10 or later. Procedure To view the description of the update when it is not recommended because a risk might apply, run the following command: USD oc adm upgrade --include-not-recommended If the cluster administrator evaluates the potential known risks and decides it is acceptable for the current cluster, then the administrator can waive the safety guards and proceed the update by running the following command: USD oc adm upgrade --allow-not-recommended --to <version> <.> <.> <version> is the update version that you obtained from the output of the command, which is supported but also has known issues or risks. Additional resources Understanding update channels and releases 3.1.7. Changing the update server by using the CLI Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates. The default value for upstream is https://api.openshift.com/api/upgrades_info/v1/graph . Procedure Change the upstream parameter value in the cluster version: USD oc patch clusterversion/version --patch '{"spec":{"upstream":"<update-server-url>"}}' --type=merge The <update-server-url> variable specifies the URL for the update server. Example output clusterversion.config.openshift.io/version patched 3.2. Updating a cluster using the web console You can perform minor version and patch updates on an OpenShift Container Platform cluster by using the web console. Note Use the web console or oc adm upgrade channel <channel> to change the update channel. You can follow the steps in Updating a cluster using the CLI to complete the update after you change to a 4.16 channel. 3.2.1. Before updating the OpenShift Container Platform cluster Before updating, consider the following: You have recently backed up etcd. In PodDisruptionBudget , if minAvailable is set to 1 , the nodes are drained to apply pending machine configs that might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. 
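For illustration, a PodDisruptionBudget similar to the following sketch (the namespace and app label are hypothetical) allows no voluntary disruptions while only one matching pod is running, which is the situation that can block a node drain during an update:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
  namespace: example-app
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example-app
If your workloads define budgets like this one, confirm that enough replicas are running, or temporarily relax the budget, before you start the update.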
You might need to update the cloud provider resources for the new release if your cluster uses manually maintained credentials. You must review administrator acknowledgement requests, take any recommended actions, and provide the acknowledgement when you are ready. You can perform a partial update by updating the worker or custom pool nodes to accommodate the time it takes to update. You can pause and resume within the progress bar of each pool. Important When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a version is not supported. If your update is failing to complete, contact Red Hat support. Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. 3.2.2. Changing the update server by using the web console Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Navigate to Administration Cluster Settings , click version . Click the YAML tab and then edit the upstream parameter value: Example output ... spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1 ... 1 The <update-server-url> variable specifies the URL for the update server. The default upstream is https://api.openshift.com/api/upgrades_info/v1/graph . Click Save . Additional resources Understanding update channels and releases 3.2.3. Pausing a MachineHealthCheck resource by using the web console During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Compute MachineHealthChecks . To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to each MachineHealthCheck resource. For example, to add the annotation to the machine-api-termination-handler resource, complete the following steps: Click the Options menu to the machine-api-termination-handler and click Edit annotations . In the Edit annotations dialog, click Add more . In the Key and Value fields, add cluster.x-k8s.io/paused and "" values, respectively, and click Save . 3.2.4. Updating a cluster by using the web console If updates are available, you can update your cluster from the web console. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Have access to the web console as a user with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Pause all MachineHealthCheck resources. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. 
Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. Procedure From the web console, click Administration Cluster Settings and review the contents of the Details tab. For production clusters, ensure that the Channel is set to the correct channel for the version that you want to update to, such as stable-4.16 . Important For production clusters, you must subscribe to a stable-* , eus-* or fast-* channel. Note When you are ready to move to the minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the minor version is available in the path. If the Update status is not Updates available , you cannot update your cluster. Select channel indicates the cluster version that your cluster is running or is updating to. Select a version to update to, and click Save . The Input channel Update status changes to Update to <product-version> in progress , and you can review the progress of the cluster update by watching the progress bars for the Operators and nodes. Note If you are updating your cluster to the minor version, for example from version 4.10 to 4.11, confirm that your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the Cluster Settings page. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel. If updates are available, continue to perform updates in the current channel until you can no longer update. If no updates are available, change the Channel to the stable-* , eus-* or fast-* channel for the minor version, and update to the version that you want in that channel. You might need to perform several intermediate updates until you reach the version that you want. Additional resources Updating installed Operators 3.2.5. Viewing conditional updates in the web console You can view and assess the risks associated with particular updates with conditional updates. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Pause all MachineHealthCheck resources. 
You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing an advanced update strategy, such as a canary rollout, an EUS update, or a control-plane update. Procedure From the web console, click Administration Cluster settings page and review the contents of the Details tab. You can enable the Include versions with known issues feature in the Select new version dropdown of the Update cluster modal to populate the dropdown list with conditional updates. Note If a version with known issues is selected, more information is provided with potential risks that are associated with the version. Review the notification detailing the potential risks to updating. Additional resources Updating installed Operators Update recommendations and Conditional Updates 3.2.6. Performing a canary rollout update In some specific use cases, you might want a more controlled update process where you do not want specific nodes updated concurrently with the rest of the cluster. These use cases include, but are not limited to: You have mission-critical applications that you do not want unavailable during the update. You can slowly test the applications on your nodes in small batches after the update. You have a small maintenance window that does not allow the time for all nodes to be updated, or you have multiple maintenance windows. The rolling update process is not a typical update workflow. With larger clusters, it can be a time-consuming process that requires you execute multiple commands. This complexity can result in errors that can affect the entire cluster. It is recommended that you carefully consider whether your organization wants to use a rolling update and carefully plan the implementation of the process before you start. The rolling update process described in this topic involves: Creating one or more custom machine config pools (MCPs). Labeling each node that you do not want to update immediately to move those nodes to the custom MCPs. Pausing those custom MCPs, which prevents updates to those nodes. Performing the cluster update. Unpausing one custom MCP, which triggers the update on those nodes. Testing the applications on those nodes to make sure the applications work as expected on those newly-updated nodes. Optionally removing the custom labels from the remaining nodes in small batches and testing the applications on those nodes. Note Pausing an MCP should be done with careful consideration and for short periods of time only. If you want to use the canary rollout update process, see Performing a canary rollout update . 3.2.7. About updating single node OpenShift Container Platform You can update, or upgrade, a single-node OpenShift Container Platform cluster by using either the console or CLI. However, note the following limitations: The prerequisite to pause the MachineHealthCheck resources is not required because there is no other node to perform the health check. 
Restoring a single-node OpenShift Container Platform cluster using an etcd backup is not officially supported. However, it is good practice to perform the etcd backup in case your update fails. If your control plane is healthy, you might be able to restore your cluster to a state by using the backup. Updating a single-node OpenShift Container Platform cluster requires downtime and can include an automatic reboot. The amount of downtime depends on the update payload, as described in the following scenarios: If the update payload contains an operating system update, which requires a reboot, the downtime is significant and impacts cluster management and user workloads. If the update contains machine configuration changes that do not require a reboot, the downtime is less, and the impact on the cluster management and user workloads is lessened. In this case, the node draining step is skipped with single-node OpenShift Container Platform because there is no other node in the cluster to reschedule the workloads to. If the update payload does not contain an operating system update or machine configuration changes, a short API outage occurs and resolves quickly. Important There are conditions, such as bugs in an updated package, that can cause the single node to not restart after a reboot. In this case, the update does not rollback automatically. Additional resources About the Machine Config Operator . 3.3. Performing a Control Plane Only update Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform <4.y> to <4.y+1>, and then to <4.y+2>. You cannot update from OpenShift Container Platform <4.y> to <4.y+2> directly. However, administrators who want to update between two even-numbered minor versions can do so incurring only a single reboot of non-control plane hosts. Important This update was previously known as an EUS-to-EUS update and is now referred to as a Control Plane Only update. These updates are only viable between even-numbered minor versions of OpenShift Container Platform. There are several caveats to consider when attempting a Control Plane Only update. Control Plane Only updates are only offered after updates between all versions involved have been made available in stable channels. If you encounter issues during or after updating to the odd-numbered minor version but before updating to the even-numbered version, then remediation of those issues may require that non-control plane hosts complete the update to the odd-numbered version before moving forward. You can do a partial update by updating the worker or custom pool nodes to accommodate the time it takes for maintenance. Until the machine config pools are unpaused and the update is complete, some features and bugs fixes in <4.y+1> and <4.y+2> of OpenShift Container Platform are not available. All the clusters might update using EUS channels for a conventional update without pools paused, but only clusters with non control-plane MachineConfigPools objects can do Control Plane Only updates with pools paused. 3.3.1. Performing a Control Plane Only update The following procedure pauses all non-master machine config pools and performs updates from OpenShift Container Platform <4.y> to <4.y+1> to <4.y+2>, then unpauses the previously paused machine config pools. Following this procedure reduces the total update duration and the number of times worker nodes are restarted. 
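If your cluster has several custom worker pools, a small shell sketch such as the following (illustrative only; review the list of pools before you run it) pauses every machine config pool except master in one pass:
# Pause every MCP except master; rerun with "paused":false to unpause after the update.
for pool in $(oc get mcp -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep -v '^master$'); do
  oc patch mcp/"$pool" --type merge --patch '{"spec":{"paused":true}}'
done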
Prerequisites Review the release notes for OpenShift Container Platform <4.y+1> and <4.y+2> Review the release notes and product lifecycles for any layered products and Operator Lifecycle Manager (OLM) Operators. Some may require updates either before or during a Control Plane Only update. Ensure that you are familiar with version-specific prerequisites, such as the removal of deprecated APIs, that are required prior to updating from OpenShift Container Platform <4.y+1> to <4.y+2>. 3.3.1.1. Performing a Control Plane Only update using the web console Prerequisites Verify that machine config pools are unpaused. Have access to the web console as a user with admin privileges. Procedure Using the Administrator perspective on the web console, update any Operator Lifecycle Manager (OLM) Operators to the versions that are compatible with your intended updated version. You can find more information on how to perform this action in "Updating installed Operators"; see "Additional resources". Verify that all machine config pools display a status of Up to date and that no machine config pool displays a status of UPDATING . To view the status of all machine config pools, click Compute MachineConfigPools and review the contents of the Update status column. Note If your machine config pools have an Updating status, please wait for this status to change to Up to date . This process could take several minutes. Set your channel to eus-<4.y+2> . To set your channel, click Administration Cluster Settings Channel . You can edit your channel by clicking on the current hyperlinked channel. Pause all worker machine pools except for the master pool. You can perform this action on the MachineConfigPools tab under the Compute page. Select the vertical ellipses to the machine config pool you'd like to pause and click Pause updates . Update to version <4.y+1> and complete up to the Save step. You can find more information on how to perform these actions in "Updating a cluster by using the web console"; see "Additional resources". Ensure that the <4.y+1> updates are complete by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. If necessary, update your OLM Operators by using the Administrator perspective on the web console. You can find more information on how to perform these actions in "Updating installed Operators"; see "Additional resources". Update to version <4.y+2> and complete up to the Save step. You can find more information on how to perform these actions in "Updating a cluster by using the web console"; see "Additional resources". Ensure that the <4.y+2> update is complete by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. Unpause all previously paused machine config pools. You can perform this action on the MachineConfigPools tab under the Compute page. Select the vertical ellipses to the machine config pool you'd like to unpause and click Unpause updates . Important If pools are paused, the cluster is not permitted to upgrade to any future minor versions, and some maintenance tasks are inhibited. This puts the cluster at risk for future degradation. Verify that your previously paused pools are updated and that your cluster has completed the update to version <4.y+2>. You can verify that your pools have updated on the MachineConfigPools tab under the Compute page by confirming that the Update status has a value of Up to date . 
Important When you update a cluster that contains Red Hat Enterprise Linux (RHEL) compute machines, those machines temporarily become unavailable during the update process. You must run the upgrade playbook against each RHEL machine as it enters the NotReady state for the cluster to finish updating. For more information, see "Updating a cluster that includes RHEL compute machines" in the additional resources section. You can verify that your cluster has completed the update by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. Additional resources Updating installed Operators Updating a cluster by using the web console Updating a cluster that includes RHEL compute machines 3.3.1.2. Performing a Control Plane Only update using the CLI Prerequisites Verify that machine config pools are unpaused. Update the OpenShift CLI ( oc ) to the target version before each update. Important It is highly discouraged to skip this prerequisite. If the OpenShift CLI ( oc ) is not updated to the target version before your update, unexpected issues may occur. Procedure Using the Administrator perspective on the web console, update any Operator Lifecycle Manager (OLM) Operators to the versions that are compatible with your intended updated version. You can find more information on how to perform this action in "Updating installed Operators"; see "Additional resources". Verify that all machine config pools display a status of UPDATED and that no machine config pool displays a status of UPDATING . To view the status of all machine config pools, run the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False Your current version is <4.y>, and your intended version to update is <4.y+2>. Change to the eus-<4.y+2> channel by running the following command: USD oc adm upgrade channel eus-<4.y+2> Note If you receive an error message indicating that eus-<4.y+2> is not one of the available channels, this indicates that Red Hat is still rolling out EUS version updates. This rollout process generally takes 45-90 days starting at the GA date. Pause all worker machine pools except for the master pool by running the following command: USD oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}' Note You cannot pause the master pool. Update to the latest version by running the following command: USD oc adm upgrade --to-latest Example output Updating to latest version <4.y+1.z> Review the cluster version to ensure that the updates are complete by running the following command: USD oc adm upgrade Example output Cluster version is <4.y+1.z> ... Update to version <4.y+2> by running the following command: USD oc adm upgrade --to-latest Retrieve the cluster version to ensure that the <4.y+2> updates are complete by running the following command: USD oc adm upgrade Example output Cluster version is <4.y+2.z> ... To update your worker nodes to <4.y+2>, unpause all previously paused machine config pools by running the following command: USD oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}' Important If pools are not unpaused, the cluster is not permitted to update to any future minor versions, and some maintenance tasks are inhibited. This puts the cluster at risk for future degradation. 
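One quick way to confirm that no pools were left paused is to print the paused field for every pool; this is a sketch, and the column names are arbitrary:
USD oc get mcp -o custom-columns=NAME:.metadata.name,PAUSED:.spec.paused
Any pool that still reports true in the PAUSED column must be unpaused before its nodes can finish updating.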
Verify that your previously paused pools are updated and that the update to version <4.y+2> is complete by running the following command: USD oc get mcp Important When you update a cluster that contains Red Hat Enterprise Linux (RHEL) compute machines, those machines temporarily become unavailable during the update process. You must run the upgrade playbook against each RHEL machine as it enters the NotReady state for the cluster to finish updating. For more information, see "Updating a cluster that includes RHEL compute machines" in the additional resources section. Example output NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False Additional resources Updating installed Operators Updating a cluster that includes RHEL compute machines 3.3.1.3. Performing a Control Plane Only update for layered products and Operators installed through Operator Lifecycle Manager In addition to the Control Plane Only update steps mentioned for the web console and CLI, there are additional steps to consider when performing Control Plane Only updates for clusters with the following: Layered products Operators installed through Operator Lifecycle Manager (OLM) What is a layered product? Layered products refer to products that are made of multiple underlying products that are intended to be used together and cannot be broken into individual subscriptions. For examples of layered OpenShift Container Platform products, see Layered Offering On OpenShift . As you perform a Control Plane Only update for the clusters of layered products and those of Operators that have been installed through OLM, you must complete the following: You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Confirm the cluster version compatibility between the current and intended Operator versions. You can verify which versions your OLM Operators are compatible with by using the Red Hat OpenShift Container Platform Operator Update Information Checker . As an example, here are the steps to perform a Control Plane Only update from <4.y> to <4.y+2> for OpenShift Data Foundation (ODF). This can be done through the CLI or web console. For information about how to update clusters through your desired interface, see Performing a Control Plane Only update using the web console and Performing a Control Plane Only update using the CLI in "Additional resources". Example workflow Pause the worker machine pools. Update OpenShift <4.y> OpenShift <4.y+1>. Update ODF <4.y> ODF <4.y+1>. Update OpenShift <4.y+1> OpenShift <4.y+2>. Update to ODF <4.y+2>. Unpause the worker machine pools. Note The update to ODF <4.y+2> can happen before or after worker machine pools have been unpaused. Additional resources Updating installed Operators Performing a Control Plane Only update using the web console Performing a Control Plane Only update using the CLI Preventing workload updates during a Control Plane Only update 3.4. 
Performing a canary rollout update A canary update is an update strategy where worker node updates are performed in discrete, sequential stages instead of updating all worker nodes at the same time. This strategy can be useful in the following scenarios: You want a more controlled rollout of worker node updates to ensure that mission-critical applications stay available during the whole update, even if the update process causes your applications to fail. You want to update a small subset of worker nodes, evaluate cluster and workload health over a period of time, and then update the remaining nodes. You want to fit worker node updates, which often require a host reboot, into smaller defined maintenance windows when it is not possible to take a large maintenance window to update the entire cluster at one time. In these scenarios, you can create multiple custom machine config pools (MCPs) to prevent certain worker nodes from updating when you update the cluster. After the rest of the cluster is updated, you can update those worker nodes in batches at appropriate times. 3.4.1. Example Canary update strategy The following example describes a canary update strategy where you have a cluster with 100 nodes with 10% excess capacity, you have maintenance windows that must not exceed 4 hours, and you know that it takes no longer than 8 minutes to drain and reboot a worker node. Note The values are an example only. The time it takes to drain a node might vary depending on factors such as workloads. Defining custom machine config pools In order to organize the worker node updates into separate stages, you can begin by defining the following MCPs: workerpool-canary with 10 nodes workerpool-A with 30 nodes workerpool-B with 30 nodes workerpool-C with 30 nodes Updating the canary worker pool During your first maintenance window, you pause the MCPs for workerpool-A , workerpool-B , and workerpool-C , and then initiate the cluster update. This updates components that run on top of OpenShift Container Platform and the 10 nodes that are part of the unpaused workerpool-canary MCP. The other three MCPs are not updated because they were paused. Determining whether to proceed with the remaining worker pool updates If for some reason you determine that your cluster or workload health was negatively affected by the workerpool-canary update, you then cordon and drain all nodes in that pool while still maintaining sufficient capacity until you have diagnosed and resolved the problem. When everything is working as expected, you evaluate the cluster and workload health before deciding to unpause, and thus update, workerpool-A , workerpool-B , and workerpool-C in succession during each additional maintenance window. Managing worker node updates using custom MCPs provides flexibility, however it can be a time-consuming process that requires you execute multiple commands. This complexity can result in errors that might affect the entire cluster. It is recommended that you carefully consider your organizational needs and carefully plan the implementation of the process before you start. Important Pausing a machine config pool prevents the Machine Config Operator from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically rotated certificates from being pushed to the associated nodes, including the automatic CA rotation of the kube-apiserver-to-kubelet-signer CA certificate. 
If the MCP is paused when the kube-apiserver-to-kubelet-signer CA certificate expires and the MCO attempts to automatically renew the certificate, the MCO cannot push the newly rotated certificates to those nodes. This causes failure in multiple oc commands, including oc debug , oc logs , oc exec , and oc attach . You receive alerts in the Alerting UI of the OpenShift Container Platform web console if an MCP is paused when the certificates are rotated. Pausing an MCP should be done with careful consideration about the kube-apiserver-to-kubelet-signer CA certificate expiration and for short periods of time only. Note It is not recommended to update the MCPs to different OpenShift Container Platform versions. For example, do not update one MCP from 4.y.10 to 4.y.11 and another to 4.y.12. This scenario has not been tested and might result in an undefined cluster state. 3.4.2. About the canary rollout update process and MCPs In OpenShift Container Platform, nodes are not considered individually. Instead, they are grouped into machine config pools (MCPs). By default, nodes in an OpenShift Container Platform cluster are grouped into two MCPs: one for the control plane nodes and one for the worker nodes. An OpenShift Container Platform update affects all MCPs concurrently. During the update, the Machine Config Operator (MCO) drains and cordons all nodes within an MCP up to the specified maxUnavailable number of nodes, if a max number is specified. By default, maxUnavailable is set to 1 . Draining and cordoning a node deschedules all pods on the node and marks the node as unschedulable. After the node is drained, the Machine Config Daemon applies a new machine configuration, which can include updating the operating system (OS). Updating the OS requires the host to reboot. Using custom machine config pools To prevent specific nodes from being updated, you can create custom MCPs. Because the MCO does not update nodes within paused MCPs, you can pause the MCPs containing nodes that you do not want to update before initiating a cluster update. Using one or more custom MCPs can give you more control over the sequence in which you update your worker nodes. For example, after you update the nodes in the first MCP, you can verify the application compatibility and then update the rest of the nodes gradually to the new version. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. Note To ensure the stability of the control plane, creating a custom MCP from the control plane nodes is not supported. The Machine Config Operator (MCO) ignores any custom MCP created for the control plane nodes. Considerations when using custom machine config pools Give careful consideration to the number of MCPs that you create and the number of nodes in each MCP, based on your workload deployment topology. For example, if you must fit updates into specific maintenance windows, you must know how many nodes OpenShift Container Platform can update within a given window. This number is dependent on your unique cluster and workload characteristics. You must also consider how much extra capacity is available in your cluster to determine the number of custom MCPs and the amount of nodes within each MCP. 
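As one illustration of tuning update concurrency, if you decide that a custom worker pool can tolerate two nodes updating at the same time, you might raise its maxUnavailable value; the pool name here is hypothetical, and you should not change this value for the control plane pool:
USD oc patch mcp/workerpool-A --type merge --patch '{"spec":{"maxUnavailable": 2}}'
The Machine Config Operator then drains and updates up to two nodes in that pool concurrently, which shortens the maintenance window at the cost of more capacity being unavailable at once.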
In a case where your applications fail to work as expected on newly updated nodes, you can cordon and drain those nodes in the pool, which moves the application pods to other nodes. However, you must determine whether the available nodes in the remaining MCPs can provide sufficient quality-of-service (QoS) for your applications. Note You can use this update process with all documented OpenShift Container Platform update processes. However, the process does not work with Red Hat Enterprise Linux (RHEL) machines, which are updated using Ansible playbooks. 3.4.3. About performing a canary rollout update The following steps outline the high-level workflow of the canary rollout update process: Create custom machine config pools (MCP) based on the worker pool. Note You can change the maxUnavailable setting in an MCP to specify the percentage or the number of machines that can be updating at any given time. The default is 1 . Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. Add a node selector to the custom MCPs. For each node that you do not want to update simultaneously with the rest of the cluster, add a matching label to the nodes. This label associates the node to the MCP. Important Do not remove the default worker label from the nodes. The nodes must have a role label to function properly in the cluster. Pause the MCPs you do not want to update as part of the update process. Perform the cluster update. The update process updates the MCPs that are not paused, including the control plane nodes. Test your applications on the updated nodes to ensure they are working as expected. Unpause one of the remaining MCPs, wait for the nodes in that pool to finish updating, and test the applications on those nodes. Repeat this process until all worker nodes are updated. Optional: Remove the custom label from updated nodes and delete the custom MCPs. 3.4.4. Creating machine config pools to perform a canary rollout update To perform a canary rollout update, you must first create one or more custom machine config pools (MCP). Procedure List the worker nodes in your cluster by running the following command: USD oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{"\n"}{end}' nodes Example output ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm For each node that you want to delay, add a custom label to the node by running the following command: USD oc label node <node_name> node-role.kubernetes.io/<custom_label>= For example: USD oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary= Example output node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled Create the new MCP: Create an MCP YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] 2 } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: "" 3 1 Specify a name for the MCP. 2 Specify the worker and custom MCP name. 3 Specify the custom label you added to the nodes that you want in this pool. 
Create the MachineConfigPool object by running the following command: USD oc create -f <file_name> Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created View the list of MCPs in the cluster and their current state by running the following command: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m The new machine config pool, workerpool-canary , is created and the number of nodes to which you added the custom label are shown in the machine counts. The worker MCP machine counts are reduced by the same number. It can take several minutes to update the machine counts. In this example, one node was moved from the worker MCP to the workerpool-canary MCP. 3.4.5. Managing machine configuration inheritance for a worker pool canary You can configure a machine config pool (MCP) canary to inherit any MachineConfig assigned to an existing MCP. This configuration is useful when you want to use an MCP canary to test as you update nodes one at a time for an existing MCP. Prerequisites You have created one or more MCPs. Procedure Create a secondary MCP as described in the following two steps: Save the following configuration file as machineConfigPool.yaml . Example machineConfigPool YAML apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf] } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf: "" # ... Create the new machine config pool by running the following command: USD oc create -f machineConfigPool.yaml Example output machineconfigpool.machineconfiguration.openshift.io/worker-perf created Add some machines to the secondary MCP. The following example labels the worker nodes worker-a , worker-b , and worker-c to the MCP worker-perf : USD oc label node worker-a node-role.kubernetes.io/worker-perf='' USD oc label node worker-b node-role.kubernetes.io/worker-perf='' USD oc label node worker-c node-role.kubernetes.io/worker-perf='' Create a new MachineConfig for the MCP worker-perf as described in the following two steps: Save the following MachineConfig example as a file called new-machineconfig.yaml : Example MachineConfig YAML apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker-perf name: 06-kdump-enable-worker-perf spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M # ... Apply the MachineConfig by running the following command: USD oc create -f new-machineconfig.yaml Create the new canary MCP and add machines from the MCP you created in the previous steps. The following example creates an MCP called worker-perf-canary , and adds machines from the worker-perf MCP that you previously created.
Label the canary worker node worker-a by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf-canary='' Remove the canary worker node worker-a from the original MCP by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf- Save the following file as machineConfigPool-Canary.yaml . Example machineConfigPool-Canary.yaml file apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf-canary spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf,worker-perf-canary] 1 } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf-canary: "" 1 Optional value. This example includes worker-perf-canary as an additional value. You can use a value in this way to configure members of an additional MachineConfig . Create the new worker-perf-canary by running the following command: USD oc create -f machineConfigPool-Canary.yaml Example output machineconfigpool.machineconfiguration.openshift.io/worker-perf-canary created Check if the MachineConfig is inherited in worker-perf-canary . Verify that no MCP is degraded by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2bf1379b39e22bae858ea1a3ff54b2ac True False False 3 3 3 0 5d16h worker rendered-worker-b9576d51e030413cfab12eb5b9841f34 True False False 0 0 0 0 5d16h worker-perf rendered-worker-perf-b98a1f62485fa702c4329d17d9364f6a True False False 2 2 2 0 56m worker-perf-canary rendered-worker-perf-canary-b98a1f62485fa702c4329d17d9364f6a True False False 1 1 1 0 44m Verify that the machines are inherited from worker-perf into worker-perf-canary by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ... worker-a Ready worker,worker-perf-canary 5d15h v1.27.13+e709aa5 worker-b Ready worker,worker-perf 5d15h v1.27.13+e709aa5 worker-c Ready worker,worker-perf 5d15h v1.27.13+e709aa5 Verify that the kdump service is enabled on worker-a by running the following command: USD systemctl status kdump.service Example output kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; preset: disabled) Active: active (exited) since Tue 2024-09-03 12:44:43 UTC; 10s ago Process: 4151139 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS) Main PID: 4151139 (code=exited, status=0/SUCCESS) Verify that the MCP has updated the crashkernel by running the following command: USD cat /proc/cmdline The output should include the updated crashkernel value, for example: Example output crashkernel=512M Optional: If you are satisfied with the upgrade, you can return worker-a to worker-perf . Return worker-a to worker-perf by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf='' Remove worker-a from the canary MCP by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf-canary- 3.4.6. Pausing the machine config pools After you create your custom machine config pools (MCPs), you then pause those MCPs. Pausing an MCP prevents the Machine Config Operator (MCO) from updating the nodes associated with that MCP.
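After you pause a pool by using the following procedure, you can confirm that the setting was applied by reading the paused field back. A minimal check, using the workerpool-canary pool created earlier as an example:
USD oc get mcp workerpool-canary -o jsonpath='{.spec.paused}'
The command prints true for a paused pool.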
Procedure Patch the MCP that you want paused by running the following command: USD oc patch mcp/<mcp_name> --patch '{"spec":{"paused":true}}' --type=merge For example: USD oc patch mcp/workerpool-canary --patch '{"spec":{"paused":true}}' --type=merge Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched 3.4.7. Performing the cluster update After the machine config pools (MCP) enter a ready state, you can perform the cluster update. See one of the following update methods, as appropriate for your cluster: Updating a cluster using the web console Updating a cluster using the CLI After the cluster update is complete, you can begin to unpause the MCPs one at a time. 3.4.8. Unpausing the machine config pools After the OpenShift Container Platform update is complete, unpause your custom machine config pools (MCP) one at a time. Unpausing an MCP allows the Machine Config Operator (MCO) to update the nodes associated with that MCP. Procedure Patch the MCP that you want to unpause: USD oc patch mcp/<mcp_name> --patch '{"spec":{"paused":false}}' --type=merge For example: USD oc patch mcp/workerpool-canary --patch '{"spec":{"paused":false}}' --type=merge Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched Optional: Check the progress of the update by using one of the following options: Check the progress from the web console by clicking Administration Cluster settings . Check the progress by running the following command: USD oc get machineconfigpools Test your applications on the updated nodes to ensure that they are working as expected. Repeat this process for any other paused MCPs, one at a time. Note In case of a failure, such as your applications not working on the updated nodes, you can cordon and drain the nodes in the pool, which moves the application pods to other nodes to help maintain the quality-of-service for the applications. This first MCP should be no larger than the excess capacity. 3.4.9. Moving a node to the original machine config pool After you update and verify applications on nodes in a custom machine config pool (MCP), move the nodes back to their original MCP by removing the custom label that you added to the nodes. Important A node must have a role to be properly functioning in the cluster. Procedure For each node in a custom MCP, remove the custom label from the node by running the following command: USD oc label node <node_name> node-role.kubernetes.io/<custom_label>- For example: USD oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary- Example output node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled The Machine Config Operator moves the nodes back to the original MCP and reconciles the node to the MCP configuration. To ensure that node has been removed from the custom MCP, view the list of MCPs in the cluster and their current state by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m When the node is removed from the custom MCP and moved back to the original MCP, it can take several minutes to update the machine counts. 
In this example, one node was moved from the removed workerpool-canary MCP to the worker MCP. Optional: Delete the custom MCP by running the following command: USD oc delete mcp <mcp_name> 3.5. Updating a cluster that includes RHEL compute machines You can perform minor version and patch updates on an OpenShift Container Platform cluster. If your cluster contains Red Hat Enterprise Linux (RHEL) machines, you must take additional steps to update those machines. Important The use of RHEL compute machines on OpenShift Container Platform clusters has been deprecated and will be removed in a future release. 3.5.1. Prerequisites Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . Have a recent etcd backup in case your update fails and you must restore your cluster to a state. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. Additional resources Support policy for unmanaged Operators 3.5.2. Updating a cluster by using the web console If updates are available, you can update your cluster from the web console. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Have access to the web console as a user with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Pause all MachineHealthCheck resources. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. Procedure From the web console, click Administration Cluster Settings and review the contents of the Details tab. For production clusters, ensure that the Channel is set to the correct channel for the version that you want to update to, such as stable-4.16 . Important For production clusters, you must subscribe to a stable-* , eus-* or fast-* channel. 
Note When you are ready to move to the minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the minor version is available in the path. If the Update status is not Updates available , you cannot update your cluster. Select channel indicates the cluster version that your cluster is running or is updating to. Select a version to update to, and click Save . The Input channel Update status changes to Update to <product-version> in progress , and you can review the progress of the cluster update by watching the progress bars for the Operators and nodes. Note If you are updating your cluster to the minor version, for example from version 4.10 to 4.11, confirm that your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the Cluster Settings page. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel. If updates are available, continue to perform updates in the current channel until you can no longer update. If no updates are available, change the Channel to the stable-* , eus-* or fast-* channel for the minor version, and update to the version that you want in that channel. You might need to perform several intermediate updates until you reach the version that you want. Important When you update a cluster that contains Red Hat Enterprise Linux (RHEL) worker machines, those workers temporarily become unavailable during the update process. You must run the update playbook against each RHEL machine as it enters the NotReady state for the cluster to finish updating. Additional resources Updating installed Operators 3.5.3. Optional: Adding hooks to perform Ansible tasks on RHEL machines You can use hooks to run Ansible tasks on the RHEL compute machines during the OpenShift Container Platform update. 3.5.3.1. About Ansible hooks for updates When you update OpenShift Container Platform, you can run custom tasks on your Red Hat Enterprise Linux (RHEL) nodes during specific operations by using hooks . Hooks allow you to provide files that define tasks to run before or after specific update tasks. You can use hooks to validate or modify custom infrastructure when you update the RHEL compute nodes in you OpenShift Container Platform cluster. Because when a hook fails, the operation fails, you must design hooks that are idempotent, or can run multiple times and provide the same results. Hooks have the following important limitations: - Hooks do not have a defined or versioned interface. They can use internal openshift-ansible variables, but it is possible that the variables will be modified or removed in future OpenShift Container Platform releases. - Hooks do not have error handling, so an error in a hook halts the update process. If you get an error, you must address the problem and then start the update again. 3.5.3.2. 
Configuring the Ansible inventory file to use hooks You define the hooks to use when you update the Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, in the hosts inventory file under the all:vars section. Prerequisites You have access to the machine that you used to add the RHEL compute machines to the cluster. You must have access to the hosts Ansible inventory file that defines your RHEL machines. Procedure After you design the hook, create a YAML file that defines the Ansible tasks for it. This file must be a set of tasks and cannot be a playbook, as shown in the following example: --- # Trivial example forcing an operator to acknowledge the start of an upgrade # file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: "Compute machine upgrade of {{ inventory_hostname }} is about to start" - name: require the user agree to start an upgrade pause: prompt: "Press Enter to start the compute machine update" Modify the hosts Ansible inventory file to specify the hook files. The hook files are specified as parameter values in the [all:vars] section, as shown: Example hook definitions in an inventory file To avoid ambiguity in the paths to the hook, use absolute paths instead of relative paths in their definitions. 3.5.3.3. Available hooks for RHEL compute machines You can use the following hooks when you update the Red Hat Enterprise Linux (RHEL) compute machines in your OpenShift Container Platform cluster. Hook name Description openshift_node_pre_cordon_hook Runs before each node is cordoned. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_pre_upgrade_hook Runs after each node is cordoned but before it is updated. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_pre_uncordon_hook Runs after each node is updated but before it is uncordoned. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_post_upgrade_hook Runs after each node is uncordoned. It is the last node update action. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . 3.5.4. Updating RHEL compute machines in your cluster After you update your cluster, you must update the Red Hat Enterprise Linux (RHEL) compute machines in your cluster. Important Red Hat Enterprise Linux (RHEL) versions 8.6 and later are supported for RHEL compute machines. You can also update your compute machines to another minor version of OpenShift Container Platform if you are using RHEL as the operating system. You do not need to exclude any RPM packages from RHEL when performing a minor version update. Important You cannot update RHEL 7 compute machines to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. Prerequisites You updated your cluster. Important Because the RHEL machines require assets that are generated by the cluster to complete the update process, you must update the cluster before you update the RHEL worker machines in it. You have access to the local machine that you used to add the RHEL compute machines to your cluster.
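For reference, the hook parameters described in the previous section are defined in the same hosts Ansible inventory file that the next prerequisite refers to. The following is a minimal sketch only; the hook file paths are hypothetical and must point to the task files that you created:
[all:vars]
openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml
openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml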
You must have access to the hosts Ansible inventory file that defines your RHEL machines and the upgrade playbook. For updates to a minor version, the RPM repository is using the same version of OpenShift Container Platform that is running on your cluster. Procedure Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note By default, the base OS RHEL with "Minimal" installation option enables firewalld service. Having the firewalld service enabled on your host prevents you from accessing OpenShift Container Platform logs on the worker. Do not enable firewalld later if you wish to continue accessing OpenShift Container Platform logs on the worker. Enable the repositories that are required for OpenShift Container Platform 4.16: On the machine that you run the Ansible playbooks, update the required repositories: # subscription-manager repos --disable=rhocp-4.15-for-rhel-8-x86_64-rpms \ --enable=rhocp-4.16-for-rhel-8-x86_64-rpms Important As of OpenShift Container Platform 4.11, the Ansible playbooks are provided only for RHEL 8. If a RHEL 7 system was used as a host for the OpenShift Container Platform 4.10 Ansible playbooks, you must either update the Ansible host to RHEL 8, or create a new Ansible host on a RHEL 8 system and copy over the inventories from the old Ansible host. On the machine that you run the Ansible playbooks, update the Ansible package: # yum swap ansible ansible-core On the machine that you run the Ansible playbooks, update the required packages, including openshift-ansible : # yum update openshift-ansible openshift-clients On each RHEL compute node, update the required repositories: # subscription-manager repos --disable=rhocp-4.15-for-rhel-8-x86_64-rpms \ --enable=rhocp-4.16-for-rhel-8-x86_64-rpms Update a RHEL worker machine: Review your Ansible inventory file at /<path>/inventory/hosts and update its contents so that the RHEL 8 machines are listed in the [workers] section, as shown in the following example: Change to the openshift-ansible directory: USD cd /usr/share/ansible/openshift-ansible Run the upgrade playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. Note The upgrade playbook only updates the OpenShift Container Platform packages. It does not update the operating system packages. After you update all of the workers, confirm that all of your cluster nodes have updated to the new version: # oc get node Example output NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.29.4 mycluster-control-plane-1 Ready master 145m v1.29.4 mycluster-control-plane-2 Ready master 145m v1.29.4 mycluster-rhel8-0 Ready worker 98m v1.29.4 mycluster-rhel8-1 Ready worker 98m v1.29.4 mycluster-rhel8-2 Ready worker 98m v1.29.4 mycluster-rhel8-3 Ready worker 98m v1.29.4 Optional: Update the operating system packages that were not updated by the upgrade playbook. To update packages that are not on 4.16, use the following command: # yum update Note You do not need to exclude RPM packages if you are using the same RPM repository that you used when you installed 4.16. 3.6. Updating a cluster in a disconnected environment 3.6.1. About cluster updates in a disconnected environment A disconnected environment is one in which your cluster nodes cannot access the internet. For this reason, you must populate a registry with the installation images. 
If your registry host cannot access both the internet and the cluster, you can mirror the images to a file system that is disconnected from that environment and then bring that host or removable media across that gap. If the local container registry and the cluster are connected to the mirror registry's host, you can directly push the release images to the local registry. A single container image registry is sufficient to host mirrored images for several clusters in the disconnected network. 3.6.1.1. Mirroring OpenShift Container Platform images To update your cluster in a disconnected environment, your cluster environment must have access to a mirror registry that has the necessary images and resources for your targeted update. The following page has instructions for mirroring images onto a repository in your disconnected cluster: Mirroring OpenShift Container Platform images 3.6.1.2. Performing a cluster update in a disconnected environment You can use one of the following procedures to update a disconnected OpenShift Container Platform cluster: Updating a cluster in a disconnected environment using the OpenShift Update Service Updating a cluster in a disconnected environment without the OpenShift Update Service 3.6.1.3. Uninstalling the OpenShift Update Service from a cluster You can use the following procedure to uninstall a local copy of the OpenShift Update Service (OSUS) from your cluster: Uninstalling the OpenShift Update Service from a cluster 3.6.2. Mirroring OpenShift Container Platform images You must mirror container images onto a mirror registry before you can update a cluster in a disconnected environment. You can also use this procedure in connected environments to ensure your clusters run only approved container images that have satisfied your organizational controls for external content. Note Your mirror registry must be running at all times while the cluster is running. The following steps outline the high-level workflow on how to mirror images to a mirror registry: Install the OpenShift CLI ( oc ) on all devices being used to retrieve and push release images. Download the registry pull secret and add it to your cluster. If you use the oc-mirror OpenShift CLI ( oc ) plugin : Install the oc-mirror plugin on all devices being used to retrieve and push release images. Create an image set configuration file for the plugin to use when determining which release images to mirror. You can edit this configuration file later to change which release images that the plugin mirrors. Mirror your targeted release images directly to a mirror registry, or to removable media and then to a mirror registry. Configure your cluster to use the resources generated by the oc-mirror plugin. Repeat these steps as needed to update your mirror registry. If you use the oc adm release mirror command : Set environment variables that correspond to your environment and the release images you want to mirror. Mirror your targeted release images directly to a mirror registry, or to removable media and then to a mirror registry. Repeat these steps as needed to update your mirror registry. Compared to using the oc adm release mirror command, the oc-mirror plugin has the following advantages: It can mirror content other than container images. After mirroring images for the first time, it is easier to update images in the registry. 
The oc-mirror plugin provides an automated way to mirror the release payload from Quay, and also builds the latest graph data image for the OpenShift Update Service running in the disconnected environment. 3.6.2.1. Mirroring resources using the oc-mirror plugin You can use the oc-mirror OpenShift CLI ( oc ) plugin to mirror images to a mirror registry in your fully or partially disconnected environments. You must run oc-mirror from a system with internet connectivity to download the required images from the official Red Hat registries. See Mirroring images for a disconnected installation using the oc-mirror plugin for additional details. 3.6.2.2. Mirroring images using the oc adm release mirror command You can use the oc adm release mirror command to mirror images to your mirror registry. 3.6.2.2.1. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as Red Hat Quay. Note If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plugin. If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support. If you do not have an existing solution for a container image registry, the mirror registry for Red Hat OpenShift is included in OpenShift Container Platform subscriptions. The mirror registry for Red Hat OpenShift is a small-scale container registry that you can use to mirror OpenShift Container Platform container images in disconnected installations and updates. 3.6.2.2.2. Preparing your mirror host Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location. 3.6.2.2.2.1. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . If you are updating a cluster in a disconnected environment, install the oc version that you plan to update to. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. 
Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> Additional resources Installing and using CLI plugins 3.6.2.2.2.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that enables you to mirror images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Optional: If using the oc-mirror plugin, save the file as either ~/.docker/config.json or USDXDG_RUNTIME_DIR/containers/auth.json : If the .docker or USDXDG_RUNTIME_DIR/containers directories do not exist, create one by entering the following command: USD mkdir -p <directory_name> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers . Copy the pull secret to the appropriate directory by entering the following command: USD cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers , and <auth_file> is either config.json or auth.json . Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. 
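As an alternative to manually editing the JSON file in the next step, you can add the mirror registry entry with jq. This is only a sketch and assumes jq is installed; the registry and credential values are the same placeholders used elsewhere in this procedure, and the output file name is arbitrary:
$ jq --arg registry "<mirror_registry>" --arg auth "<credentials>" \
    '.auths[$registry] = {"auth": $auth}' \
    <path>/<pull_secret_file_in_json> > <path>/<updated_pull_secret_file>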
Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 Specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 Specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.6.2.2.3. Mirroring images to a mirror registry Important To avoid excessive memory usage by the OpenShift Update Service application, you must mirror release images to a separate repository as described in the following procedure. Prerequisites You configured a mirror registry to use in your disconnected environment and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Use the Red Hat OpenShift Container Platform Update Graph visualizer and update planner to plan an update from one version to another. The OpenShift Update Graph provides channel graphs and a way to confirm that there is an update path between your current and intended cluster versions. Set the required environment variables: Export the release version: USD export OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to which you want to update, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . If you are using the OpenShift Update Service, export an additional local repository name to contain the release images: USD LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>' For <local_release_images_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4-release-images . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. 
Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Mirror the version images to the mirror registry. If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Mirror the images and configuration manifests to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Note This command also generates and saves the mirrored release image signature config map onto the removable media. Take the media to the disconnected environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Use oc command-line interface (CLI) to log in to the cluster that you are updating. Apply the mirrored release image signature config map to the connected cluster: USD oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1 1 For <image_signature_file> , specify the path and name of the file, for example, signature-sha256-81154f5c03294534.yaml . If you are using the OpenShift Update Service, mirror the release image to a separate repository: USD oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} If the local container registry and the cluster are connected to the mirror host, take the following actions: Directly push the release images to the local registry and apply the config map to the cluster by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature Note If you include the --apply-release-image-signature option, do not create the config map for image signature verification. If you are using the OpenShift Update Service, mirror the release image to a separate repository: USD oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} 3.6.3. Updating a cluster in a disconnected environment using the OpenShift Update Service To get an update experience similar to connected clusters, you can use the following procedures to install and configure the OpenShift Update Service (OSUS) in a disconnected environment. 
The following steps outline the high-level workflow on how to update a cluster in a disconnected environment using OSUS: Configure access to a secured registry. Update the global cluster pull secret to access your mirror registry. Install the OSUS Operator. Create a graph data container image for the OpenShift Update Service. Install the OSUS application and configure your clusters to use the OpenShift Update Service in your environment. Perform a supported update procedure from the documentation as you would with a connected cluster. 3.6.3.1. Using the OpenShift Update Service in a disconnected environment The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform clusters. Red Hat publicly hosts the OpenShift Update Service, and clusters in a connected environment can connect to the service through public APIs to retrieve update recommendations. However, clusters in a disconnected environment cannot access these public APIs to retrieve update information. To have a similar update experience in a disconnected environment, you can install and configure the OpenShift Update Service so that it is available within the disconnected environment. A single OSUS instance is capable of serving recommendations to thousands of clusters. OSUS can be scaled horizontally to cater to more clusters by changing the replica value. So for most disconnected use cases, one OSUS instance is enough. For example, Red Hat hosts just one OSUS instance for the entire fleet of connected clusters. If you want to keep update recommendations separate in different environments, you can run one OSUS instance for each environment. For example, in a case where you have separate test and stage environments, you might not want a cluster in a stage environment to receive update recommendations to version A if that version has not been tested in the test environment yet. The following sections describe how to install an OSUS instance and configure it to provide update recommendations to a cluster. Additional resources About the OpenShift Update Service Understanding update channels and releases 3.6.3.2. Prerequisites You must have the oc command-line interface (CLI) tool installed. You must provision a container image registry in your environment with the container images for your update, as described in Mirroring OpenShift Container Platform images . 3.6.3.3. Configuring access to a secured registry for the OpenShift Update Service If the release images are contained in a registry whose HTTPS X.509 certificate is signed by a custom certificate authority, complete the steps in Configuring additional trust stores for image registry access along with following changes for the update service. The OpenShift Update Service Operator needs the config map key name updateservice-registry in the registry CA cert. Image registry CA config map example for the update service apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 The OpenShift Update Service Operator requires the config map key name updateservice-registry in the registry CA cert. 2 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . 3.6.3.4. 
Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images than the registry used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 3.6.3.5. Installing the OpenShift Update Service Operator To install the OpenShift Update Service, you must first install the OpenShift Update Service Operator by using the OpenShift Container Platform web console or CLI. Note For clusters that are installed in disconnected environments, also known as disconnected clusters, Operator Lifecycle Manager by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. For more information, see Using Operator Lifecycle Manager on restricted networks . 3.6.3.5.1. Installing the OpenShift Update Service Operator by using the web console You can use the web console to install the OpenShift Update Service Operator. Procedure In the web console, click Operators OperatorHub . Note Enter Update Service into the Filter by keyword... field to find the Operator faster. Choose OpenShift Update Service from the list of available Operators, and click Install . Select an Update channel . Select a Version . Select A specific namespace on the cluster under Installation Mode . Select a namespace for Installed Namespace or accept the recommended namespace openshift-update-service . Select an Update approval strategy: The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a cluster administrator to approve the Operator update. Click Install . Go to Operators Installed Operators and verify that the OpenShift Update Service Operator is installed. Ensure that OpenShift Update Service is listed in the correct namespace with a Status of Succeeded . 3.6.3.5.2. Installing the OpenShift Update Service Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the OpenShift Update Service Operator. 
Procedure Create a namespace for the OpenShift Update Service Operator: Create a Namespace object YAML file, for example, update-service-namespace.yaml , for the OpenShift Update Service Operator: apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 1 1 Set the openshift.io/cluster-monitoring label to enable Operator-recommended cluster monitoring on this namespace. Create the namespace: USD oc create -f <filename>.yaml For example: USD oc create -f update-service-namespace.yaml Install the OpenShift Update Service Operator by creating the following objects: Create an OperatorGroup object YAML file, for example, update-service-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group namespace: openshift-update-service spec: targetNamespaces: - openshift-update-service Create an OperatorGroup object: USD oc -n openshift-update-service create -f <filename>.yaml For example: USD oc -n openshift-update-service create -f update-service-operator-group.yaml Create a Subscription object YAML file, for example, update-service-subscription.yaml : Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription namespace: openshift-update-service spec: channel: v1 installPlanApproval: "Automatic" source: "redhat-operators" 1 sourceNamespace: "openshift-marketplace" name: "cincinnati-operator" 1 Specify the name of the catalog source that provides the Operator. For clusters that do not use a custom Operator Lifecycle Manager (OLM), specify redhat-operators . If your OpenShift Container Platform cluster is installed in a disconnected environment, specify the name of the CatalogSource object created when you configured Operator Lifecycle Manager (OLM). Create the Subscription object: USD oc create -f <filename>.yaml For example: USD oc -n openshift-update-service create -f update-service-subscription.yaml The OpenShift Update Service Operator is installed to the openshift-update-service namespace and targets the openshift-update-service namespace. Verify the Operator installation: USD oc -n openshift-update-service get clusterserviceversions Example output NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded ... If the OpenShift Update Service Operator is listed, the installation was successful. The version number might be different than shown. Additional resources Installing Operators in your namespace . 3.6.3.6. Creating the OpenShift Update Service graph data container image The OpenShift Update Service requires a graph data container image, from which the OpenShift Update Service retrieves information about channel membership and blocked update edges. Graph data is typically fetched directly from the update graph data repository. In environments where an internet connection is unavailable, loading this information from an init container is another way to make the graph data available to the OpenShift Update Service. The role of the init container is to provide a local copy of the graph data, and during pod initialization, the init container copies the data to a volume that is accessible by the service. Note The oc-mirror OpenShift CLI ( oc ) plugin creates this graph data container image in addition to mirroring release images. 
If you used the oc-mirror plugin to mirror your release images, you can skip this procedure. Procedure Create a Dockerfile, for example, ./Dockerfile , containing the following: FROM registry.access.redhat.com/ubi9/ubi:latest RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD ["/bin/bash", "-c" ,"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data"] Use the docker file created in the above step to build a graph data container image, for example, registry.example.com/openshift/graph-data:latest : USD podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest Push the graph data container image created in the step to a repository that is accessible to the OpenShift Update Service, for example, registry.example.com/openshift/graph-data:latest : USD podman push registry.example.com/openshift/graph-data:latest Note To push a graph data image to a registry in a disconnected environment, copy the graph data container image created in the step to a repository that is accessible to the OpenShift Update Service. Run oc image mirror --help for available options. 3.6.3.7. Creating an OpenShift Update Service application You can create an OpenShift Update Service application by using the OpenShift Container Platform web console or CLI. 3.6.3.7.1. Creating an OpenShift Update Service application by using the web console You can use the OpenShift Container Platform web console to create an OpenShift Update Service application by using the OpenShift Update Service Operator. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. The current release and update target releases have been mirrored to a registry in the disconnected environment. Procedure In the web console, click Operators Installed Operators . Choose OpenShift Update Service from the list of installed Operators. Click the Update Service tab. Click Create UpdateService . Enter a name in the Name field, for example, service . Enter the local pullspec in the Graph Data Image field to the graph data container image created in "Creating the OpenShift Update Service graph data container image", for example, registry.example.com/openshift/graph-data:latest . In the Releases field, enter the registry and repository created to contain the release images in "Mirroring the OpenShift Container Platform image repository", for example, registry.example.com/ocp4/openshift4-release-images . Enter 2 in the Replicas field. Click Create to create the OpenShift Update Service application. Verify the OpenShift Update Service application: From the UpdateServices list in the Update Service tab, click the Update Service application just created. Click the Resources tab. Verify each application resource has a status of Created . 3.6.3.7.2. Creating an OpenShift Update Service application by using the CLI You can use the OpenShift CLI ( oc ) to create an OpenShift Update Service application. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. 
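A quick way to confirm that the graph data container image from the previous prerequisite is reachable is to inspect it with skopeo. This is a sketch only; the image reference matches the example used in the earlier procedure, skopeo is assumed to be installed, and you can add --tls-verify=false if the registry certificate is not trusted by the host:
$ skopeo inspect docker://registry.example.com/openshift/graph-data:latest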
The current release and update target releases have been mirrored to a registry in the disconnected environment. Procedure Configure the OpenShift Update Service target namespace, for example, openshift-update-service : USD NAMESPACE=openshift-update-service The namespace must match the targetNamespaces value from the operator group. Configure the name of the OpenShift Update Service application, for example, service : USD NAME=service Configure the registry and repository for the release images as configured in "Mirroring the OpenShift Container Platform image repository", for example, registry.example.com/ocp4/openshift4-release-images : USD RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images Set the local pullspec for the graph data image to the graph data container image created in "Creating the OpenShift Update Service graph data container image", for example, registry.example.com/openshift/graph-data:latest : USD GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest Create an OpenShift Update Service application object: USD oc -n "USD{NAMESPACE}" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF Verify the OpenShift Update Service application: Use the following command to obtain a policy engine route: USD while sleep 1; do POLICY_ENGINE_GRAPH_URI="USD(oc -n "USD{NAMESPACE}" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{"\n"}' updateservice "USD{NAME}")"; SCHEME="USD{POLICY_ENGINE_GRAPH_URI%%:*}"; if test "USD{SCHEME}" = http -o "USD{SCHEME}" = https; then break; fi; done You might need to poll until the command succeeds. Retrieve a graph from the policy engine. Be sure to specify a valid version for channel . For example, if running in OpenShift Container Platform 4.16, use stable-4.16 : USD while sleep 10; do HTTP_CODE="USD(curl --header Accept:application/json --output /dev/stderr --write-out "%{http_code}" "USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6")"; if test "USD{HTTP_CODE}" -eq 200; then break; fi; echo "USD{HTTP_CODE}"; done This polls until the graph request succeeds; however, the resulting graph might be empty depending on which release images you have mirrored. Note The policy engine route name must not be more than 63 characters based on RFC-1123. If you see ReconcileCompleted status as false with the reason CreateRouteFailed caused by host must conform to DNS 1123 naming convention and must be no more than 63 characters , try creating the Update Service with a shorter name. 3.6.3.8. Configuring the Cluster Version Operator (CVO) After the OpenShift Update Service Operator has been installed and the OpenShift Update Service application has been created, the Cluster Version Operator (CVO) can be updated to pull graph data from the OpenShift Update Service installed in your environment. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. The current release and update target releases have been mirrored to a registry in the disconnected environment. The OpenShift Update Service application has been created. 
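After you complete the following procedure, one way to confirm that the Cluster Version Operator is pointing at your local update service is to read back the upstream field that the patch sets. This is a sketch only:
$ oc get clusterversion version -o jsonpath='{.spec.upstream}{"\n"}'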
Procedure Set the OpenShift Update Service target namespace, for example, openshift-update-service : USD NAMESPACE=openshift-update-service Set the name of the OpenShift Update Service application, for example, service : USD NAME=service Obtain the policy engine route: USD POLICY_ENGINE_GRAPH_URI="USD(oc -n "USD{NAMESPACE}" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{"\n"}' updateservice "USD{NAME}")" Set the patch for the pull graph data: USD PATCH="{\"spec\":{\"upstream\":\"USD{POLICY_ENGINE_GRAPH_URI}\"}}" Patch the CVO to use the OpenShift Update Service in your environment: USD oc patch clusterversion version -p USDPATCH --type merge Note See Configuring the cluster-wide proxy to configure the CA to trust the update server. 3.6.3.9. Next steps Before updating your cluster, confirm that the following conditions are met: The Cluster Version Operator (CVO) is configured to use your installed OpenShift Update Service application. The release image signature config map for the new release is applied to your cluster. Note The Cluster Version Operator (CVO) uses release image signatures to ensure that release images have not been modified, by verifying that the release image signatures match the expected result. The current release and update target release images are mirrored to a registry in the disconnected environment. A recent graph data container image has been mirrored to your registry. A recent version of the OpenShift Update Service Operator is installed. Note If you have not recently installed or updated the OpenShift Update Service Operator, there might be a more recent version available. See Using Operator Lifecycle Manager on restricted networks for more information about how to update your OLM catalog in a disconnected environment. After you configure your cluster to use the installed OpenShift Update Service and local mirror registry, you can use any of the following update methods: Updating a cluster using the web console Updating a cluster using the CLI Performing a Control Plane Only update Performing a canary rollout update Updating a cluster that includes RHEL compute machines 3.6.4. Updating a cluster in a disconnected environment without the OpenShift Update Service Use the following procedures to update a cluster in a disconnected environment without access to the OpenShift Update Service. 3.6.4.1. Prerequisites You must have the oc command-line interface (CLI) tool installed. You must provision a local container image registry with the container images for your update, as described in Mirroring OpenShift Container Platform images . You must have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . You must have a recent etcd backup in case your update fails and you must restore your cluster to a previous state. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next minor version during a cluster update. See Updating installed Operators for more information on how to check compatibility and, if necessary, update the installed Operators. You must ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. A quick check for paused MCPs is sketched below.
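A minimal sketch for that check; the custom-columns output prints true in the PAUSED column for any paused pool:
$ oc get machineconfigpools -o custom-columns=NAME:.metadata.name,PAUSED:.spec.paused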
If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. 3.6.4.2. Pausing a MachineHealthCheck resource During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites Install the OpenShift CLI ( oc ). Procedure To list all the available MachineHealthCheck resources that you want to pause, run the following command: USD oc get machinehealthcheck -n openshift-machine-api To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to the MachineHealthCheck resource. Run the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused="" The annotated MachineHealthCheck resource resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: "" spec: selector: matchLabels: role: worker unhealthyConditions: - type: "Ready" status: "Unknown" timeout: "300s" - type: "Ready" status: "False" timeout: "300s" maxUnhealthy: "40%" status: currentHealthy: 5 expectedMachines: 5 Important Resume the machine health checks after updating the cluster. To resume the check, remove the pause annotation from the MachineHealthCheck resource by running the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused- 3.6.4.3. Retrieving a release image digest In order to update a cluster in a disconnected environment using the oc adm upgrade command with the --to-image option, you must reference the sha256 digest that corresponds to your targeted release image. Procedure Run the following command on a device that is connected to the internet: USD oc adm release info -o 'jsonpath={.digest}{"\n"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE} For {OCP_RELEASE_VERSION} , specify the version of OpenShift Container Platform to which you want to update, such as 4.10.16 . For {ARCHITECTURE} , specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Example output sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d Copy the sha256 digest for use when updating your cluster.
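If you prefer to keep the digest in a shell variable for the update command in the next section, the following sketch shows one way to capture it. It assumes the same OCP_RELEASE_VERSION and ARCHITECTURE values as the command above:
$ DIGEST="$(oc adm release info -o 'jsonpath={.digest}' quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE_VERSION}-${ARCHITECTURE})"
$ echo "${DIGEST}"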
3.6.4.4. Updating the disconnected cluster Update the disconnected cluster to the OpenShift Container Platform version that you downloaded the release images for. Note If you have a local OpenShift Update Service, you can update by using the connected web console or CLI instructions instead of this procedure. Prerequisites You mirrored the images for the new release to your registry. You applied the release image signature ConfigMap for the new release to your cluster. Note The release image signature config map allows the Cluster Version Operator (CVO) to ensure the integrity of release images by verifying that the actual image signatures match the expected signatures. You obtained the sha256 digest for your targeted release image. You installed the OpenShift CLI ( oc ). You paused all MachineHealthCheck resources. Procedure Update the cluster: USD oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest> Where: <defined_registry> Specifies the name of the mirror registry you mirrored your images to. <defined_repository> Specifies the name of the image repository you want to use on the mirror registry. <digest> Specifies the sha256 digest for the targeted release image, for example, sha256:81154f5c03294534e1eaf0319bef7a601134f891689ccede5d705ef659aa8c92 . Note See "Mirroring OpenShift Container Platform images" to review how your mirror registry and repository names are defined. If you used an ImageContentSourcePolicy or ImageDigestMirrorSet , you can use the canonical registry and repository names instead of the names you defined. The canonical registry name is quay.io and the canonical repository name is openshift-release-dev/ocp-release . You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy , ImageDigestMirrorSet , or ImageTagMirrorSet object. You cannot add a pull secret to a project. Additional resources Mirroring OpenShift Container Platform images 3.6.4.5. Understanding image registry repository mirroring Setting up container registry repository mirroring enables you to perform the following tasks: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. Repository mirroring in OpenShift Container Platform includes the following attributes: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. 
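To make this mechanism concrete, the following is a simplified sketch of the kind of entry that is written to /etc/containers/registries.conf when a mirror is configured; the registry names are examples and the exact contents on your nodes will differ:
[[registry]]
  prefix = ""
  location = "registry.access.redhat.com/ubi9/ubi-minimal"

  [[registry.mirror]]
    location = "example.io/example/ubi-minimal"
    pull-from-mirror = "digest-only"
With an entry like this, a pull of the source image is attempted from the listed mirror first and falls back to the source location if the mirror is unavailable.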
Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a data center that is in a disconnected environment. After OpenShift Container Platform installation: If you did not configure mirroring during OpenShift Container Platform installation, you can do so postinstallation by using any of the following custom resource (CR) objects: ImageDigestMirrorSet (IDMS). This object allows you to pull images from a mirrored registry by using digest specifications. The IDMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageTagMirrorSet (ITMS). This object allows you to pull images from a mirrored registry by using image tags. The ITMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageContentSourcePolicy (ICSP). This object allows you to pull images from a mirrored registry by using digest specifications. The ICSP CR always falls back to the source registry if the mirrors do not work. Important Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. For more information, see "Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring" in the following section. Each of these custom resource objects identify the following information: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. For new clusters, you can use IDMS, ITMS, and ICSP CRs objects as desired. However, using IDMS and ITMS is recommended. If you upgraded a cluster, any existing ICSP objects remain stable, and both IDMS and ICSP objects are supported. Workloads using ICSP objects continue to function as expected. However, if you want to take advantage of the fallback policies introduced in the IDMS CRs, you can migrate current workloads to IDMS objects by using the oc adm migrate icsp command as shown in the Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring section that follows. Migrating to IDMS objects does not require a cluster reboot. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. 3.6.4.5.1. Configuring image registry repository mirroring You can create postinstallation mirror configuration custom resources (CR) to redirect image pull requests from a source image registry to a mirrored image registry. Prerequisites Access to the cluster as a user with the cluster-admin role. 
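If you have existing ImageContentSourcePolicy YAML files that you want to convert before working through the following procedure, the oc adm migrate icsp command mentioned above can generate equivalent ImageDigestMirrorSet files. This is a sketch with hypothetical file and directory names; review the generated YAML before creating the objects:
$ oc adm migrate icsp <icsp_file>.yaml --dest-dir <output_directory>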
Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source repository to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy --all \ docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi9/ubi-minimal image from registry.access.redhat.com . After you create the mirrored registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Create a postinstallation mirror configuration CR, by using one of the following examples: Create an ImageDigestMirrorSet or ImageTagMirrorSet CR, as needed, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource 1 Indicates the API to use with this CR. This must be config.openshift.io/v1 . 2 Indicates the kind of object according to the pull type: ImageDigestMirrorSet : Pulls a digest reference image. ImageTagMirrorSet : Pulls a tag reference image. 3 Indicates the type of image pull method, either: imageDigestMirrors : Use for an ImageDigestMirrorSet CR. imageTagMirrors : Use for an ImageTagMirrorSet CR. 4 Indicates the name of the mirrored image registry and repository. 5 Optional: Indicates a secondary mirror repository for each target repository. If one mirror is down, the target repository can use the secondary mirror. 6 Indicates the registry and repository source, which is the repository that is referred to in an image pull specification. 7 Optional: Indicates the fallback policy if the image pull fails: AllowContactingSource : Allows continued attempts to pull the image from the source repository. This is the default. NeverContactSource : Prevents continued attempts to pull the image from the source repository. 8 Optional: Indicates a namespace inside a registry, which allows you to use any image in that namespace. If you use a registry domain as a source, the object is applied to all repositories from the registry. 9 Optional: Indicates a registry, which allows you to use any image in that registry. 
If you specify a registry name, the object is applied to all repositories from a source registry to a mirror registry. 10 Pulls the image registry.example.com/example/myimage@sha256:... from the mirror mirror.example.net/image@sha256:... . 11 Pulls the image registry.example.com/example/image@sha256:... in the source registry namespace from the mirror mirror.example.net/image@sha256:... . 12 Pulls the image registry.example.com/myimage@sha256:... from the mirror mirror.example.net/registry-example-com/myimage@sha256:... . Create an ImageContentSourcePolicy custom resource, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 Specifies the name of the mirror image registry and repository. 2 Specifies the online registry and repository containing the content that is mirrored. Create the new object: USD oc create -f registryrepomirror.yaml After the object is created, the Machine Config Operator (MCO) drains the nodes for ImageTagMirrorSet objects only. The MCO does not drain the nodes for ImageDigestMirrorSet and ImageContentSourcePolicy objects. To check that the mirrored configuration settings are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.29.4 ip-10-0-138-148.ec2.internal Ready master 11m v1.29.4 ip-10-0-139-122.ec2.internal Ready master 11m v1.29.4 ip-10-0-147-35.ec2.internal Ready worker 7m v1.29.4 ip-10-0-153-12.ec2.internal Ready worker 7m v1.29.4 ip-10-0-154-10.ec2.internal Ready master 11m v1.29.4 Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf The following output represents a registries.conf file where postinstallation mirror configuration CRs were applied. The final two entries are marked digest-only and tag-only, respectively.
Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" 1 [[registry.mirror]] location = "example.io/example/ubi-minimal" 2 pull-from-mirror = "digest-only" 3 [[registry.mirror]] location = "example.com/example/ubi-minimal" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.net/registry-example-com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example" [[registry.mirror]] location = "mirror.example.net" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example/myimage" [[registry.mirror]] location = "mirror.example.net/image" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/redhat" [[registry.mirror]] location = "mirror.example.com/redhat" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" blocked = true 4 [[registry.mirror]] location = "example.io/example/ubi-minimal-tag" pull-from-mirror = "tag-only" 5 1 Indicates the repository that is referred to in a pull spec. 2 Indicates the mirror for that repository. 3 Indicates that the image pull from the mirror is a digest reference image. 4 Indicates that the NeverContactSource parameter is set for this repository. 5 Indicates that the image pull from the mirror is a tag reference image. Pull an image to the node from the source and check if it is resolved by the mirror. sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf... Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format. 3.6.4.5.2. Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. This functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. ICSP objects are being replaced by ImageDigestMirrorSet and ImageTagMirrorSet objects to configure repository mirroring. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. The command updates the API to the current version, changes the kind value to ImageDigestMirrorSet , and changes spec.repositoryDigestMirrors to spec.imageDigestMirrors . The rest of the file is not changed. Because the migration does not change the registries.conf file, the cluster does not need to reboot. 
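As a self-contained sketch of that conversion, using a hypothetical single-entry ICSP file (the file and registry names are assumptions, not values from this document):

# Write a minimal ICSP manifest, then convert it; oc adm migrate icsp rewrites
# the apiVersion and kind and renames spec.repositoryDigestMirrors to
# spec.imageDigestMirrors, leaving the mirror entries themselves unchanged.
cat > example-icsp.yaml <<'EOF'
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: example
spec:
  repositoryDigestMirrors:
  - mirrors:
    - mirror.example.com/redhat
    source: registry.example.com/redhat
EOF

oc adm migrate icsp example-icsp.yaml --dest-dir ./idms-out

The full procedure, including creating the resulting ImageDigestMirrorSet object and removing the old ICSP objects, follows.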
For more information about ImageDigestMirrorSet or ImageTagMirrorSet objects, see "Configuring image registry repository mirroring" in this section. Prerequisites Access to the cluster as a user with the cluster-admin role. Ensure that you have ImageContentSourcePolicy objects on your cluster. Procedure Use the following command to convert one or more ImageContentSourcePolicy YAML files to an ImageDigestMirrorSet YAML file: USD oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory> where: <file_name> Specifies the name of the source ImageContentSourcePolicy YAML. You can list multiple file names. --dest-dir Optional: Specifies a directory for the output ImageDigestMirrorSet YAML. If unset, the file is written to the current directory. For example, the following command converts the icsp.yaml and icsp-2.yaml files and saves the new YAML files to the idms-files directory. USD oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files Example output wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml Create the CR object by running the following command: USD oc create -f <path_to_the_directory>/<file-name>.yaml where: <path_to_the_directory> Specifies the path to the directory, if you used the --dest-dir flag. <file_name> Specifies the name of the ImageDigestMirrorSet YAML. Remove the ICSP objects after the IDMS objects are rolled out. 3.6.4.6. Widening the scope of the mirror image catalog to reduce the frequency of cluster node reboots You can scope the mirrored image catalog at the repository level or the wider registry level. A widely scoped ImageContentSourcePolicy resource reduces the number of times the nodes need to reboot in response to changes to the resource. To widen the scope of the mirror image catalog in the ImageContentSourcePolicy resource, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Configure a mirrored image catalog for use in your disconnected cluster. Procedure Run the following command, specifying values for <local_registry> , <pull_spec> , and <pull_secret_file> : USD oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry where: <local_registry> is the local registry you have configured for your disconnected cluster, for example, local.registry:5000 . <pull_spec> is the pull specification as configured in your disconnected registry, for example, redhat/redhat-operator-index:v4.16 . <pull_secret_file> is the registry.redhat.io pull secret in .json file format. You can download the pull secret from Red Hat OpenShift Cluster Manager . The oc adm catalog mirror command creates a /redhat-operator-index-manifests directory and generates imageContentSourcePolicy.yaml , catalogSource.yaml , and mapping.txt files.
Apply the new ImageContentSourcePolicy resource to the cluster: USD oc apply -f imageContentSourcePolicy.yaml Verification Verify that oc apply successfully applied the change to ImageContentSourcePolicy : USD oc get ImageContentSourcePolicy -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.openshift.io/v1alpha1","kind":"ImageContentSourcePolicy","metadata":{"annotations":{},"name":"redhat-operator-index"},"spec":{"repositoryDigestMirrors":[{"mirrors":["local.registry:5000"],"source":"registry.redhat.io"}]}} ... After you update the ImageContentSourcePolicy resource, OpenShift Container Platform deploys the new settings to each node and the cluster starts using the mirrored repository for requests to the source repository. 3.6.4.7. Additional resources Using Operator Lifecycle Manager on restricted networks Machine Config Overview 3.6.5. Uninstalling the OpenShift Update Service from a cluster To remove a local copy of the OpenShift Update Service (OSUS) from your cluster, you must first delete the OSUS application and then uninstall the OSUS Operator. 3.6.5.1. Deleting an OpenShift Update Service application You can delete an OpenShift Update Service application by using the OpenShift Container Platform web console or CLI. 3.6.5.1.1. Deleting an OpenShift Update Service application by using the web console You can use the OpenShift Container Platform web console to delete an OpenShift Update Service application by using the OpenShift Update Service Operator. Prerequisites The OpenShift Update Service Operator has been installed. Procedure In the web console, click Operators Installed Operators . Choose OpenShift Update Service from the list of installed Operators. Click the Update Service tab. From the list of installed OpenShift Update Service applications, select the application to be deleted and then click Delete UpdateService . From the Delete UpdateService? confirmation dialog, click Delete to confirm the deletion. 3.6.5.1.2. Deleting an OpenShift Update Service application by using the CLI You can use the OpenShift CLI ( oc ) to delete an OpenShift Update Service application. Procedure Get the OpenShift Update Service application name using the namespace the OpenShift Update Service application was created in, for example, openshift-update-service : USD oc get updateservice -n openshift-update-service Example output NAME AGE service 6s Delete the OpenShift Update Service application using the NAME value from the step and the namespace the OpenShift Update Service application was created in, for example, openshift-update-service : USD oc delete updateservice service -n openshift-update-service Example output updateservice.updateservice.operator.openshift.io "service" deleted 3.6.5.2. Uninstalling the OpenShift Update Service Operator You can uninstall the OpenShift Update Service Operator by using the OpenShift Container Platform web console or CLI. 3.6.5.2.1. Uninstalling the OpenShift Update Service Operator by using the web console You can use the OpenShift Container Platform web console to uninstall the OpenShift Update Service Operator. Prerequisites All OpenShift Update Service applications have been deleted. Procedure In the web console, click Operators Installed Operators . Select OpenShift Update Service from the list of installed Operators and click Uninstall Operator . From the Uninstall Operator? 
confirmation dialog, click Uninstall to confirm the uninstallation. 3.6.5.2.2. Uninstalling the OpenShift Update Service Operator by using the CLI You can use the OpenShift CLI ( oc ) to uninstall the OpenShift Update Service Operator. Prerequisites All OpenShift Update Service applications have been deleted. Procedure Change to the project containing the OpenShift Update Service Operator, for example, openshift-update-service : USD oc project openshift-update-service Example output Now using project "openshift-update-service" on server "https://example.com:6443". Get the name of the OpenShift Update Service Operator operator group: USD oc get operatorgroup Example output NAME AGE openshift-update-service-fprx2 4m41s Delete the operator group, for example, openshift-update-service-fprx2 : USD oc delete operatorgroup openshift-update-service-fprx2 Example output operatorgroup.operators.coreos.com "openshift-update-service-fprx2" deleted Get the name of the OpenShift Update Service Operator subscription: USD oc get subscription Example output NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1 Using the Name value from the step, check the current version of the subscribed OpenShift Update Service Operator in the currentCSV field: USD oc get subscription update-service-operator -o yaml | grep " currentCSV" Example output currentCSV: update-service-operator.v0.0.1 Delete the subscription, for example, update-service-operator : USD oc delete subscription update-service-operator Example output subscription.operators.coreos.com "update-service-operator" deleted Delete the CSV for the OpenShift Update Service Operator using the currentCSV value from the step: USD oc delete clusterserviceversion update-service-operator.v0.0.1 Example output clusterserviceversion.operators.coreos.com "update-service-operator.v0.0.1" deleted 3.7. Updating hardware on nodes running on vSphere You must ensure that your nodes running in vSphere are running on the hardware version supported by OpenShift Container Platform. Currently, hardware version 15 or later is supported for vSphere virtual machines in a cluster. You can update your virtual hardware immediately or schedule an update in vCenter. Important Version 4.16 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. Before upgrading OpenShift 4.12 to OpenShift 4.13, you must update vSphere to v7.0.2 or later ; otherwise, the OpenShift 4.12 cluster is marked un-upgradeable . 3.7.1. Updating virtual hardware on vSphere To update the hardware of your virtual machines (VMs) on VMware vSphere, update your virtual machines separately to reduce the risk of downtime for your cluster. Important As of OpenShift Container Platform 4.13, VMware virtual hardware version 13 is no longer supported. You need to update to VMware version 15 or later for supporting functionality. 3.7.1.1. Updating the virtual hardware for control plane nodes on vSphere To reduce the risk of downtime, it is recommended that control plane nodes be updated serially. This ensures that the Kubernetes API remains available and etcd retains quorum. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure List the control plane nodes in your cluster. 
USD oc get nodes -l node-role.kubernetes.io/master Example output NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.29.4 control-plane-node-1 Ready master 75m v1.29.4 control-plane-node-2 Ready master 75m v1.29.4 Note the names of your control plane nodes. Mark the control plane node as unschedulable. USD oc adm cordon <control_plane_node> Shut down the virtual machine (VM) associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine Manually in the VMware documentation for more information. Power on the VM associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<control_plane_node> Mark the control plane node as schedulable again: USD oc adm uncordon <control_plane_node> Repeat this procedure for each control plane node in your cluster. 3.7.1.2. Updating the virtual hardware for compute nodes on vSphere To reduce the risk of downtime, it is recommended that compute nodes be updated serially. Note Multiple compute nodes can be updated in parallel given workloads are tolerant of having multiple nodes in a NotReady state. It is the responsibility of the administrator to ensure that the required compute nodes are available. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure List the compute nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/worker Example output NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.29.4 compute-node-1 Ready worker 30m v1.29.4 compute-node-2 Ready worker 30m v1.29.4 Note the names of your compute nodes. Mark the compute node as unschedulable: USD oc adm cordon <compute_node> Evacuate the pods from the compute node. There are several ways to do this. For example, you can evacuate all or selected pods on a node: USD oc adm drain <compute_node> [--pod-selector=<pod_selector>] See the "Understanding how to evacuate pods on nodes" section for other options to evacuate pods from a node. Shut down the virtual machine (VM) associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine Manually in the VMware documentation for more information. Power on the VM associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<compute_node> Mark the compute node as schedulable again: USD oc adm uncordon <compute_node> Repeat this procedure for each compute node in your cluster. 3.7.1.3. Updating the virtual hardware for template on vSphere Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. 
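Before moving on to the template, note that the per-node flow shared by the control plane and compute node procedures above can be scripted. A minimal sketch, assuming the vSphere shutdown, compatibility upgrade, and power-on steps are still performed manually in between, and that the drain options suit your workloads (the node name is a placeholder):

# Serial hardware update helper for one node (run the drain step only for compute nodes).
NODE=compute-node-0   # placeholder node name

oc adm cordon "${NODE}"
oc adm drain "${NODE}" --ignore-daemonsets --delete-emptydir-data
# ... shut down the VM, upgrade its hardware compatibility in vSphere, power it on ...
oc wait --for=condition=Ready "node/${NODE}" --timeout=15m
oc adm uncordon "${NODE}"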
Procedure If the RHCOS template is configured as a vSphere template, follow Convert a Template to a Virtual Machine in the VMware documentation prior to the next step. Note Once converted from a template, do not power on the virtual machine. Update the virtual machine (VM) in the VMware vSphere client. Complete the steps outlined in Upgrade the Compatibility of a Virtual Machine Manually (VMware vSphere documentation). Convert the VM in the vSphere client to a template by right-clicking on the VM and then selecting Template Convert to Template . Important The steps for converting a VM to a template might change in future vSphere documentation versions. Additional resources Understanding how to evacuate pods on nodes 3.7.2. Scheduling an update for virtual hardware on vSphere Virtual hardware updates can be scheduled to occur when a virtual machine is powered on or rebooted. You can schedule your virtual hardware updates exclusively in vCenter by following Schedule a Compatibility Upgrade for a Virtual Machine in the VMware documentation. When scheduling an update prior to performing an update of OpenShift Container Platform, the virtual hardware update occurs when the nodes are rebooted during the course of the OpenShift Container Platform update. 3.8. Migrating to a cluster with multi-architecture compute machines You can migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines by updating to a multi-architecture, manifest-listed payload. This allows you to add mixed architecture compute nodes to your cluster. For information about configuring your multi-architecture compute machines, see "Configuring multi-architecture compute machines on an OpenShift Container Platform cluster". Before migrating your single-architecture cluster to a cluster with multi-architecture compute machines, it is recommended to install the Multiarch Tuning Operator and deploy a ClusterPodPlacementConfig custom resource. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . Important Migration from a multi-architecture payload to a single-architecture payload is not supported. Once a cluster has transitioned to using a multi-architecture payload, it can no longer accept a single-architecture update payload. 3.8.1. Migrating to a cluster with multi-architecture compute machines using the CLI Prerequisites You have access to the cluster as a user with the cluster-admin role. Your OpenShift Container Platform version is at least version 4.13.0. For more information on how to update your cluster version, see Updating a cluster using the web console or Updating a cluster using the CLI . You have installed the OpenShift CLI ( oc ) that matches the version for your current cluster. Your oc client is updated to at least version 4.13.0. Your OpenShift Container Platform cluster is installed on AWS, Azure, GCP, bare metal, or IBM P/Z platforms. For more information on selecting a supported platform for your cluster installation, see Selecting a cluster installation type .
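Before starting the migration procedure that follows, you can confirm these prerequisites with a few read-only commands; a minimal sketch:

# Client version must be at least 4.13.0.
oc version --client

# Cluster version and update status.
oc get clusterversion version

# Current CPU architecture of each node, from the standard kubernetes.io/arch label.
oc get nodes -L kubernetes.io/arch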
Procedure Verify that the RetrievedUpdates condition is True in the Cluster Version Operator (CVO) by running the following command: USD oc get clusterversion/version -o=jsonpath="{.status.conditions[?(.type=='RetrievedUpdates')].status}" If the RetrievedUpdates condition is False , you can find supplemental information regarding the failure by using the following command: USD oc adm upgrade For more information about cluster version condition types, see Understanding cluster version condition types . If the condition RetrievedUpdates is False , change the channel to stable-<4.y> or fast-<4.y> with the following command: USD oc adm upgrade channel <channel> After setting the channel, verify that RetrievedUpdates is True . For more information about channels, see Understanding update channels and releases . Migrate to the multi-architecture payload with the following command: USD oc adm upgrade --to-multi-arch Verification You can monitor the migration by running the following command: USD oc adm upgrade Important Machine launches may fail as the cluster settles into the new state. To notice and recover when machines fail to launch, we recommend deploying machine health checks. For more information about machine health checks and how to deploy them, see About machine health checks . The migration must be complete and all the cluster operators must be stable before you can add compute machine sets with different architectures to your cluster. Additional resources Configuring multi-architecture compute machines on an OpenShift Container Platform cluster Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . Updating a cluster using the web console Updating a cluster using the CLI Understanding cluster version condition types Understanding update channels and releases Selecting a cluster installation type About machine health checks 3.9. Updating hosted control planes On hosted control planes for OpenShift Container Platform, updates are decoupled between the control plane and the nodes. Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node updates. 3.9.1. Updates for the hosted cluster The spec.release.image value dictates the version of the control plane. The HostedCluster object transmits the intended spec.release.image value to the HostedControlPlane.spec.releaseImage value and runs the appropriate Control Plane Operator version. The hosted control plane manages the rollout of the new version of the control plane components along with any OpenShift Container Platform components through the new version of the Cluster Version Operator (CVO). Important In hosted control planes, the NodeHealthCheck resource cannot detect the status of the CVO. A cluster administrator must manually pause the remediation triggered by NodeHealthCheck , before performing critical operations, such as updating the cluster, to prevent new remediation actions from interfering with cluster updates. To pause the remediation, enter the array of strings, for example, pause-test-cluster , as a value of the pauseRequests field in the NodeHealthCheck resource. For more information, see About the Node Health Check Operator . After the cluster update is complete, you can edit or delete the remediation. Navigate to the Compute NodeHealthCheck page, click your node health check, and then click Actions , which shows a drop-down list. 3.9.2.
Updates for node pools With node pools, you can configure the software that is running in the nodes by exposing the spec.release and spec.config values. You can start a rolling node pool update in the following ways: Changing the spec.release or spec.config values. Changing any platform-specific field, such as the AWS instance type. The result is a set of new instances with the new type. Changing the cluster configuration, if the change propagates to the node. Node pools support replace updates and in-place updates. The nodepool.spec.release value dictates the version of any particular node pool. A NodePool object completes a replace or an in-place rolling update according to the .spec.management.upgradeType value. After you create a node pool, you cannot change the update type. If you want to change the update type, you must create a node pool and delete the other one. 3.9.2.1. Replace updates for node pools A replace update creates instances in the new version while it removes old instances from the version. This update type is effective in cloud environments where this level of immutability is cost effective. Replace updates do not preserve any manual changes because the node is entirely re-provisioned. 3.9.2.2. In place updates for node pools An in-place update directly updates the operating systems of the instances. This type is suitable for environments where the infrastructure constraints are higher, such as bare metal. In-place updates can preserve manual changes, but will report errors if you make manual changes to any file system or operating system configuration that the cluster directly manages, such as kubelet certificates. 3.9.3. Configuring node pools for hosted control planes On hosted control planes, you can configure node pools by creating a MachineConfig object inside of a config map in the management cluster. Procedure To create a MachineConfig object inside of a config map in the management cluster, enter the following information: apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:... mode: 420 overwrite: true path: USD{PATH} 1 1 Sets the path on the node where the MachineConfig object is stored. After you add the object to the config map, you can apply the config map to the node pool as follows: USD oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace> apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: # ... name: nodepool-1 namespace: clusters # ... spec: config: - name: <configmap_name> 1 # ... 1 Replace <configmap_name> with the name of your config map. 3.10. Updating the boot loader on RHCOS nodes using bootupd To update the boot loader on RHCOS nodes using bootupd , you must either run the bootupctl update command on RHCOS machines manually or provide a machine config with a systemd unit. Unlike grubby or other boot loader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. To configure kernel arguments, see Adding kernel arguments to nodes . Note You can use bootupd to update the boot loader to protect against the BootHole vulnerability. 3.10.1. Updating the boot loader manually You can manually inspect the status of the system and update the boot loader by using the bootupctl command-line tool. 
Inspect the system status: # bootupctl status Example output for x86_64 Component EFI Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 Update: At latest version Example output for aarch64 Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version OpenShift Container Platform clusters initially installed on version 4.4 and older require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 3.10.2. Updating the boot loader automatically via a machine config Another way to automatically update the boot loader with bootupd is to create a systemd service unit that updates the boot loader as needed on every boot. This unit runs the bootupctl update command during the boot process and is installed on the nodes via a machine config. Note This configuration is not enabled by default because unexpected interruptions of the update operation may lead to unbootable nodes. If you enable this configuration, make sure to avoid interrupting nodes during the boot process while the boot loader update is in progress. The boot loader update operation generally completes quickly, so the risk is low. Create a Butane config file, 99-worker-bootupctl-update.bu , including the contents of the bootupctl-update.service systemd unit. Note See "Creating machine configs with Butane" for information about Butane. Example Butane config variant: openshift version: 4.16.0 metadata: name: 99-worker-bootupctl-update 1 labels: machineconfiguration.openshift.io/role: worker 2 systemd: units: - name: bootupctl-update.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 1 2 On control plane nodes, substitute master for worker in both of these locations. Use Butane to generate a MachineConfig object file, 99-worker-bootupctl-update.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-bootupctl-update.yaml
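After the machine config rolls out, you can spot-check the boot loader state on each node without logging in interactively; a minimal sketch using oc debug:

# Report bootupd status on every node; each iteration starts a short-lived debug pod.
for node in $(oc get nodes -o name); do
  echo "== ${node} =="
  oc debug "${node}" -- chroot /host bootupctl status
done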
"oc get machinehealthcheck -n openshift-machine-api",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-",
"oc adm upgrade",
"Cluster version is 4.13.10 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.13 (available channels: candidate-4.13, candidate-4.14, fast-4.13, stable-4.13) Recommended updates: VERSION IMAGE 4.13.14 quay.io/openshift-release-dev/ocp-release@sha256:406fcc160c097f61080412afcfa7fd65284ac8741ac7ad5b480e304aba73674b 4.13.13 quay.io/openshift-release-dev/ocp-release@sha256:d62495768e335c79a215ba56771ff5ae97e3cbb2bf49ed8fb3f6cefabcdc0f17 4.13.12 quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55 4.13.11 quay.io/openshift-release-dev/ocp-release@sha256:e1c2377fdae1d063aaddc753b99acf25972b6997ab9a0b7e80cfef627b9ef3dd",
"oc adm upgrade channel <channel>",
"oc adm upgrade channel stable-4.16",
"oc adm upgrade --to-latest=true 1",
"oc adm upgrade --to=<version> 1",
"oc adm upgrade",
"oc adm upgrade",
"Cluster version is <version> Upstream is unset, so the cluster will use an appropriate default. Channel: stable-<version> (available channels: candidate-<version>, eus-<version>, fast-<version>, stable-<version>) No updates available. You may force an update to a specific release image, but doing so might not be supported and might result in downtime or data loss.",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.29.4 ip-10-0-170-223.ec2.internal Ready master 82m v1.29.4 ip-10-0-179-95.ec2.internal Ready worker 70m v1.29.4 ip-10-0-182-134.ec2.internal Ready worker 70m v1.29.4 ip-10-0-211-16.ec2.internal Ready master 82m v1.29.4 ip-10-0-250-100.ec2.internal Ready worker 69m v1.29.4",
"export OC_ENABLE_CMD_UPGRADE_STATUS=true",
"oc adm upgrade status",
"= Control Plane = Assessment: Progressing Target Version: 4.14.1 (from 4.14.0) Completion: 97% Duration: 54m Operator Status: 32 Healthy, 1 Unavailable Control Plane Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-53-40.us-east-2.compute.internal Progressing Draining 4.14.0 +10m ip-10-0-30-217.us-east-2.compute.internal Outdated Pending 4.14.0 ? ip-10-0-92-180.us-east-2.compute.internal Outdated Pending 4.14.0 ? = Worker Upgrade = = Worker Pool = Worker Pool: worker Assessment: Progressing Completion: 0% Worker Status: 3 Total, 2 Available, 1 Progressing, 3 Outdated, 1 Draining, 0 Excluded, 0 Degraded Worker Pool Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-4-159.us-east-2.compute.internal Progressing Draining 4.14.0 +10m ip-10-0-20-162.us-east-2.compute.internal Outdated Pending 4.14.0 ? ip-10-0-99-40.us-east-2.compute.internal Outdated Pending 4.14.0 ? = Worker Pool = Worker Pool: infra Assessment: Progressing Completion: 0% Worker Status: 1 Total, 0 Available, 1 Progressing, 1 Outdated, 1 Draining, 0 Excluded, 0 Degraded Worker Pool Node NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-4-159-infra.us-east-2.compute.internal Progressing Draining 4.14.0 +10m = Update Health = SINCE LEVEL IMPACT MESSAGE 14m4s Info None Update is proceeding well",
"oc adm upgrade --include-not-recommended",
"oc adm upgrade --allow-not-recommended --to <version> <.>",
"oc patch clusterversion/version --patch '{\"spec\":{\"upstream\":\"<update-server-url>\"}}' --type=merge",
"clusterversion.config.openshift.io/version patched",
"spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False",
"oc adm upgrade channel eus-<4.y+2>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc adm upgrade --to-latest",
"Updating to latest version <4.y+1.z>",
"oc adm upgrade",
"Cluster version is <4.y+1.z>",
"oc adm upgrade --to-latest",
"oc adm upgrade",
"Cluster version is <4.y+2.z>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":false}}'",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False",
"oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}' nodes",
"ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm",
"oc label node <node_name> node-role.kubernetes.io/<custom_label>=",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary=",
"node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] 2 } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: \"\" 3",
"oc create -f <file_name>",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf] } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf: \"\"",
"oc create -f machineConfigPool.yaml",
"machineconfigpool.machineconfiguration.openshift.io/worker-perf created",
"oc label node worker-a node-role.kubernetes.io/worker-perf=''",
"oc label node worker-b node-role.kubernetes.io/worker-perf=''",
"oc label node worker-c node-role.kubernetes.io/worker-perf=''",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker-perf name: 06-kdump-enable-worker-perf spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"oc create -f new-machineconfig.yaml",
"oc label node worker-a node-role.kubernetes.io/worker-perf-canary=''",
"oc label node worker-a node-role.kubernetes.io/worker-perf-",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf-canary spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf,worker-perf-canary] 1 } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf-canary: \"\"",
"oc create -f machineConfigPool-Canary.yaml",
"machineconfigpool.machineconfiguration.openshift.io/worker-perf-canary created",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2bf1379b39e22bae858ea1a3ff54b2ac True False False 3 3 3 0 5d16h worker rendered-worker-b9576d51e030413cfab12eb5b9841f34 True False False 0 0 0 0 5d16h worker-perf rendered-worker-perf-b98a1f62485fa702c4329d17d9364f6a True False False 2 2 2 0 56m worker-perf-canary rendered-worker-perf-canary-b98a1f62485fa702c4329d17d9364f6a True False False 1 1 1 0 44m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-a Ready worker,worker-perf-canary 5d15h v1.27.13+e709aa5 worker-b Ready worker,worker-perf 5d15h v1.27.13+e709aa5 worker-c Ready worker,worker-perf 5d15h v1.27.13+e709aa5",
"systemctl status kdump.service",
"NAME STATUS ROLES AGE VERSION kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; preset: disabled) Active: active (exited) since Tue 2024-09-03 12:44:43 UTC; 10s ago Process: 4151139 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS) Main PID: 4151139 (code=exited, status=0/SUCCESS)",
"cat /proc/cmdline",
"crashkernel=512M",
"oc label node worker-a node-role.kubernetes.io/worker-perf=''",
"oc label node worker-a node-role.kubernetes.io/worker-perf-canary-",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc get machineconfigpools",
"oc label node <node_name> node-role.kubernetes.io/<custom_label>-",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary-",
"node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m",
"oc delete mcp <mcp_name>",
"--- Trivial example forcing an operator to acknowledge the start of an upgrade file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: \"Compute machine upgrade of {{ inventory_hostname }} is about to start\" - name: require the user agree to start an upgrade pause: prompt: \"Press Enter to start the compute machine update\"",
"[all:vars] openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml",
"systemctl disable --now firewalld.service",
"subscription-manager repos --disable=rhocp-4.15-for-rhel-8-x86_64-rpms --enable=rhocp-4.16-for-rhel-8-x86_64-rpms",
"yum swap ansible ansible-core",
"yum update openshift-ansible openshift-clients",
"subscription-manager repos --disable=rhocp-4.15-for-rhel-8-x86_64-rpms --enable=rhocp-4.16-for-rhel-8-x86_64-rpms",
"[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1",
"oc get node",
"NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.29.4 mycluster-control-plane-1 Ready master 145m v1.29.4 mycluster-control-plane-2 Ready master 145m v1.29.4 mycluster-rhel8-0 Ready worker 98m v1.29.4 mycluster-rhel8-1 Ready worker 98m v1.29.4 mycluster-rhel8-2 Ready worker 98m v1.29.4 mycluster-rhel8-3 Ready worker 98m v1.29.4",
"yum update",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"mkdir -p <directory_name>",
"cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"export OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1",
"oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature",
"oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 1",
"oc create -f <filename>.yaml",
"oc create -f update-service-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group namespace: openshift-update-service spec: targetNamespaces: - openshift-update-service",
"oc -n openshift-update-service create -f <filename>.yaml",
"oc -n openshift-update-service create -f update-service-operator-group.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription namespace: openshift-update-service spec: channel: v1 installPlanApproval: \"Automatic\" source: \"redhat-operators\" 1 sourceNamespace: \"openshift-marketplace\" name: \"cincinnati-operator\"",
"oc create -f <filename>.yaml",
"oc -n openshift-update-service create -f update-service-subscription.yaml",
"oc -n openshift-update-service get clusterserviceversions",
"NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded",
"FROM registry.access.redhat.com/ubi9/ubi:latest RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD [\"/bin/bash\", \"-c\" ,\"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data\"]",
"podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest",
"podman push registry.example.com/openshift/graph-data:latest",
"NAMESPACE=openshift-update-service",
"NAME=service",
"RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images",
"GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest",
"oc -n \"USD{NAMESPACE}\" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF",
"while sleep 1; do POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"; SCHEME=\"USD{POLICY_ENGINE_GRAPH_URI%%:*}\"; if test \"USD{SCHEME}\" = http -o \"USD{SCHEME}\" = https; then break; fi; done",
"while sleep 10; do HTTP_CODE=\"USD(curl --header Accept:application/json --output /dev/stderr --write-out \"%{http_code}\" \"USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6\")\"; if test \"USD{HTTP_CODE}\" -eq 200; then break; fi; echo \"USD{HTTP_CODE}\"; done",
"NAMESPACE=openshift-update-service",
"NAME=service",
"POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"",
"PATCH=\"{\\\"spec\\\":{\\\"upstream\\\":\\\"USD{POLICY_ENGINE_GRAPH_URI}\\\"}}\"",
"oc patch clusterversion version -p USDPATCH --type merge",
"oc get machinehealthcheck -n openshift-machine-api",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-",
"oc adm release info -o 'jsonpath={.digest}{\"\\n\"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE}",
"sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d",
"oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest>",
"skopeo copy --all docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal",
"apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.29.4 ip-10-0-138-148.ec2.internal Ready master 11m v1.29.4 ip-10-0-139-122.ec2.internal Ready master 11m v1.29.4 ip-10-0-147-35.ec2.internal Ready worker 7m v1.29.4 ip-10-0-153-12.ec2.internal Ready worker 7m v1.29.4 ip-10-0-154-10.ec2.internal Ready master 11m v1.29.4",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/redhat\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf",
"oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>",
"oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files",
"wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml",
"oc create -f <path_to_the_directory>/<file-name>.yaml",
"oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry",
"oc apply -f imageContentSourcePolicy.yaml",
"oc get ImageContentSourcePolicy -o yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.openshift.io/v1alpha1\",\"kind\":\"ImageContentSourcePolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redhat-operator-index\"},\"spec\":{\"repositoryDigestMirrors\":[{\"mirrors\":[\"local.registry:5000\"],\"source\":\"registry.redhat.io\"}]}}",
"oc get updateservice -n openshift-update-service",
"NAME AGE service 6s",
"oc delete updateservice service -n openshift-update-service",
"updateservice.updateservice.operator.openshift.io \"service\" deleted",
"oc project openshift-update-service",
"Now using project \"openshift-update-service\" on server \"https://example.com:6443\".",
"oc get operatorgroup",
"NAME AGE openshift-update-service-fprx2 4m41s",
"oc delete operatorgroup openshift-update-service-fprx2",
"operatorgroup.operators.coreos.com \"openshift-update-service-fprx2\" deleted",
"oc get subscription",
"NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1",
"oc get subscription update-service-operator -o yaml | grep \" currentCSV\"",
"currentCSV: update-service-operator.v0.0.1",
"oc delete subscription update-service-operator",
"subscription.operators.coreos.com \"update-service-operator\" deleted",
"oc delete clusterserviceversion update-service-operator.v0.0.1",
"clusterserviceversion.operators.coreos.com \"update-service-operator.v0.0.1\" deleted",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.29.4 control-plane-node-1 Ready master 75m v1.29.4 control-plane-node-2 Ready master 75m v1.29.4",
"oc adm cordon <control_plane_node>",
"oc wait --for=condition=Ready node/<control_plane_node>",
"oc adm uncordon <control_plane_node>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.29.4 compute-node-1 Ready worker 30m v1.29.4 compute-node-2 Ready worker 30m v1.29.4",
"oc adm cordon <compute_node>",
"oc adm drain <compute_node> [--pod-selector=<pod_selector>]",
"oc wait --for=condition=Ready node/<compute_node>",
"oc adm uncordon <compute_node>",
"oc get clusterversion/version -o=jsonpath=\"{.status.conditions[?(.type=='RetrievedUpdates')].status}\"",
"oc adm upgrade",
"oc adm upgrade channel <channel>",
"oc adm upgrade --to-multi-arch",
"oc adm upgrade",
"apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data: mode: 420 overwrite: true path: USD{PATH} 1",
"oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 Update: At latest version",
"Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 systemd: units: - name: bootupctl-update.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml",
"oc apply -f ./99-worker-bootupctl-update.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/updating_clusters/performing-a-cluster-update |
Chapter 9. Kernel | Chapter 9. Kernel Chelsio firmware updated to version 1.15.37.0 Chelsio firmware has been updated to version 1.15.37.0, which provides a number of bug fixes and enhancements over the previous version. The most notable bug fixes are: The iSCSI TLV is no longer incorrectly sent to the host. The firmware no longer terminates unexpectedly due to enabling or disabling the Data Center Bridging Capability Exchange (DCBX) protocol. The app priority value is now handled correctly in the firmware. (BZ#1349112) The bnxt_en driver updated to the latest upstream version The bnxt_en driver has been updated with several minor fixes and with support for BCM5731X, BCM5741X, and 57404 Network Partitioning (NPAR) devices. (BZ#1347825) The ahci driver supports Marvell 88SE9230 The ahci driver now supports the Marvell 88SE9230 controller. (BZ#1392941) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/new_features_kernel
Chapter 27. Kerberos PKINIT Authentication in IdM | Chapter 27. Kerberos PKINIT Authentication in IdM Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) is a preauthentication mechanism for Kerberos. As of Red Hat Enterprise Linux 7.4, the Identity Management (IdM) server includes a mechanism for Kerberos PKINIT authentication. The following sections give an overview of the PKINIT implementation in IdM and describe how to configure PKINIT in IdM. 27.1. Default PKINIT Status in Different IdM Versions The default PKINIT configuration on your IdM servers depends on the version of IdM in Red Hat Enterprise Linux (RHEL) and the certificate authority (CA) configuration. See Table 27.1, "Default PKINIT configuration in IdM versions" . Table 27.1. Default PKINIT configuration in IdM versions RHEL version CA configuration PKINIT configuration 7.3 and earlier Without a CA Local PKINIT: IdM only uses PKINIT for internal purposes on servers. 7.3 and earlier With an integrated CA IdM attempts to configure PKINIT by using the certificate signed by the integrated IdM CA. If the attempt fails, IdM configures local PKINIT only. 7.4 and later Without a CA No external PKINIT certificate provided to IdM Local PKINIT: IdM only uses PKINIT for internal purposes on servers. 7.4 and later Without a CA External PKINIT certificate provided to IdM IdM configures PKINIT by using the external Kerberos key distribution center (KDC) certificate and CA certificate. 7.4 and later With an integrated CA IdM configures PKINIT by using the certificate signed by the IdM CA. At domain level 0, PKINIT is disabled. The default behavior is local PKINIT: IdM only uses PKINIT for internal purposes on servers. See also Chapter 7, Displaying and Raising the Domain Level . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/pkinit |
Chapter 3. Robot account tokens | Chapter 3. Robot account tokens Robot account tokens are password-type credentials used to access a Red Hat Quay registry via normal Docker v2 endpoints; these are defined as tokens on the UI because the password itself is encrypted. Robot account tokens are persistent tokens designed for automation and continuous integration workflows. By default, Red Hat Quay's robot account tokens do not expire and do not require user interaction, which makes robot accounts ideal for non-interactive use cases. Robot account tokens are automatically generated at the time of a robot's creation and are non-user specific; that is, they are connected to the user and organization namespace where they are created. For example, a robot named project_tools+<robot_name> is associated with the project_tools namespace. Robot account tokens provide access without needing a user's personal credentials. How the robot account is configured, for example, with one of READ , WRITE , or ADMIN permissions, ultimately defines the actions that the robot account can take. Because robot account tokens are persistent and do not expire by default, they are ideal for automated workflows that require consistent access to Red Hat Quay without manual renewal. Despite this, robot account tokens can be easily regenerated by using the UI. They can also be regenerated by using the proper API endpoint via the CLI. To enhance the security of your Red Hat Quay deployment, administrators should regularly refresh robot account tokens. Additionally, with the keyless authentication with robot accounts feature, robot account tokens can be exchanged for external OIDC tokens and leveraged so that they only last one hour, enhancing the security of your registry. When a namespace is deleted, or when the robot account itself is deleted, the associated tokens are garbage collected the next time the collector is scheduled to run. The following section shows you how to use the API to regenerate a robot account token for organization robots and user robots. 3.1. Regenerating a robot account token by using the Red Hat Quay UI Use the following procedure to regenerate a robot account token by using the Red Hat Quay UI. Prerequisites You have logged into Red Hat Quay. Procedure Click the name of an Organization. In the navigation pane, click Robot accounts . Click the name of your robot account, for example, testorg3+test . Click Regenerate token in the popup box. 3.2. Regenerating a robot account token by using the Red Hat Quay API Use the following procedure to regenerate a robot account token using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file.
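Note Regenerating a token invalidates the previous value, so any automation that still uses the old token must be updated before it can authenticate again. As a quick check that a token is active, you can log in to the registry with the robot account credentials. The following command is a minimal sketch that is not part of this procedure; the registry hostname, robot account name, and token are placeholders: podman login -u='<orgname>+<robot_shortname>' -p='<token>' <quay-server.example.com> A successful login confirms that the token works as a registry password.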
Procedure Enter the following command to regenerate a robot account token for an organization using the POST /api/v1/organization/{orgname}/robots/{robot_shortname}/regenerate endpoint: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate" Example output {"name": "test-org+test", "created": "Fri, 10 May 2024 17:46:02 -0000", "last_accessed": null, "description": "", "token": "<example_secret>"} Enter the following command to regenerate a robot account token for the current user with the POST /api/v1/user/robots/{robot_shortname}/regenerate endpoint: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate" Example output {"name": "quayadmin+test", "created": "Fri, 10 May 2024 14:12:11 -0000", "last_accessed": null, "description": "", "token": "<example_secret>"} | [
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"test-org+test\", \"created\": \"Fri, 10 May 2024 17:46:02 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"quayadmin+test\", \"created\": \"Fri, 10 May 2024 14:12:11 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_api_guide/robot-account-tokens |
Chapter 2. Disaster scenarios in IdM | Chapter 2. Disaster scenarios in IdM Prepare and respond to various disaster scenarios in Identity Management (IdM) systems that affect servers, data, or entire infrastructures. Table 2.1. Disaster scenarios in IdM Disaster type Example causes How to prepare How to respond Server loss : The IdM deployment loses one or several servers. Hardware malfunction Preparing for server loss with replication Recovering a single server with replication Data loss : IdM data is unexpectedly modified on a server, and the change is propagated to other servers. A user accidentally deletes data A software bug modifies data Preparing for data loss with VM snapshots Preparing for data loss with IdM backups Recovering from data loss with VM snapshots Recovering from data loss with IdM backups Managing data loss Total infrastructure loss : All IdM servers or Certificate Authority (CA) replicas are lost with no VM snapshots or data backups available. Lack of off-site backups or redundancy prevents recovery after a failure or disaster. Preparing for data loss with VM snapshots This situation is a total loss. Warning A total loss scenario occurs when all Certificate Authority (CA) replicas or all IdM servers are lost, and no virtual machine (VM) snapshots or backups are available for recovery. Without CA replicas, the IdM environment cannot deploy additional replicas or rebuild itself, making recovery impossible. To avoid such scenarios, ensure backups are stored off-site, maintain multiple geographically redundant CA replicas, and connect each replica to at least two others. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/preparing_for_disaster_recovery_with_identity_management/disaster-scenarios-in-idm_preparing-for-disaster-recovery |
Chapter 5. Using OpenID Connect (OIDC) multitenancy | Chapter 5. Using OpenID Connect (OIDC) multitenancy This guide demonstrates how your OpenID Connect (OIDC) application can support multitenancy to serve multiple tenants from a single application. These tenants can be distinct realms or security domains within the same OIDC provider or even distinct OIDC providers. Each customer functions as a distinct tenant when serving multiple customers from the same application, such as in a SaaS environment. By enabling multitenancy support to your applications, you can support distinct authentication policies for each tenant, even authenticating against different OIDC providers, such as Keycloak and Google. To authorize a tenant by using Bearer Token Authorization, see the OpenID Connect (OIDC) Bearer token authentication guide. To authenticate and authorize a tenant by using the OIDC authorization code flow, read the OpenID Connect authorization code flow mechanism for protecting web applications guide. Also, see the OpenID Connect (OIDC) configuration properties reference guide. 5.1. Prerequisites To complete this guide, you need: Roughly 15 minutes An IDE JDK 17+ installed with JAVA_HOME configured appropriately Apache Maven 3.8.6 or later A working container runtime (Docker or Podman ) Optionally the Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) jq tool 5.2. Architecture In this example, we build a very simple application that supports two resource methods: /{tenant} This resource returns information obtained from the ID token issued by the OIDC provider about the authenticated user and the current tenant. /{tenant}/bearer This resource returns information obtained from the Access Token issued by the OIDC provider about the authenticated user and the current tenant. 5.3. Solution For a thorough understanding, we recommend you build the application by following the upcoming step-by-step instructions. Alternatively, if you prefer to start with the completed example, clone the Git repository: git clone https://github.com/quarkusio/quarkus-quickstarts.git -b 3.15 , or download an archive . The solution is located in the security-openid-connect-multi-tenancy-quickstart directory . 5.4. Creating the Maven project First, we need a new project. Create a new project with the following command: Using the Quarkus CLI: quarkus create app org.acme:security-openid-connect-multi-tenancy-quickstart \ --extension='oidc,rest-jackson' \ --no-code cd security-openid-connect-multi-tenancy-quickstart To create a Gradle project, add the --gradle or --gradle-kotlin-dsl option. For more information about how to install and use the Quarkus CLI, see the Quarkus CLI guide. Using Maven: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.1:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=security-openid-connect-multi-tenancy-quickstart \ -Dextensions='oidc,rest-jackson' \ -DnoCode cd security-openid-connect-multi-tenancy-quickstart To create a Gradle project, add the -DbuildTool=gradle or -DbuildTool=gradle-kotlin-dsl option. For Windows users: If using cmd, (don't use backward slash \ and put everything on the same line) If using Powershell, wrap -D parameters in double quotes e.g. 
"-DprojectArtifactId=security-openid-connect-multi-tenancy-quickstart" If you already have your Quarkus project configured, add the oidc extension to your project by running the following command in your project base directory: Using the Quarkus CLI: quarkus extension add oidc Using Maven: ./mvnw quarkus:add-extension -Dextensions='oidc' Using Gradle: ./gradlew addExtension --extensions='oidc' This adds the following to your build file: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-oidc") 5.5. Writing the application Start by implementing the /{tenant} endpoint. As you can see from the source code below, it is just a regular Jakarta REST resource: package org.acme.quickstart.oidc; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; @Path("/{tenant}") public class HomeResource { /** * Injection point for the ID Token issued by the OIDC provider. */ @Inject @IdToken JsonWebToken idToken; /** * Injection point for the Access Token issued by the OIDC provider. */ @Inject JsonWebToken accessToken; /** * Returns the ID Token info. * This endpoint exists only for demonstration purposes. * Do not expose this token in a real application. * * @return ID Token info */ @GET @Produces("text/html") public String getIdTokenInfo() { StringBuilder response = new StringBuilder().append("<html>") .append("<body>"); response.append("<h2>Welcome, ").append(this.idToken.getClaim("email").toString()).append("</h2>\n"); response.append("<h3>You are accessing the application within tenant <b>").append(idToken.getIssuer()).append(" boundaries</b></h3>"); return response.append("</body>").append("</html>").toString(); } /** * Returns the Access Token info. * This endpoint exists only for demonstration purposes. * Do not expose this token in a real application. 
* * @return Access Token info */ @GET @Produces("text/html") @Path("bearer") public String getAccessTokenInfo() { StringBuilder response = new StringBuilder().append("<html>") .append("<body>"); response.append("<h2>Welcome, ").append(this.accessToken.getClaim("email").toString()).append("</h2>\n"); response.append("<h3>You are accessing the application within tenant <b>").append(accessToken.getIssuer()).append(" boundaries</b></h3>"); return response.append("</body>").append("</html>").toString(); } } To resolve the tenant from incoming requests and map it to a specific quarkus-oidc tenant configuration in application.properties , create an implementation for the io.quarkus.oidc.TenantConfigResolver interface, which can dynamically resolve tenant configurations: package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import org.eclipse.microprofile.config.ConfigProvider; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.OidcTenantConfig.ApplicationType; import io.quarkus.oidc.TenantConfigResolver; import io.quarkus.oidc.runtime.OidcUtils; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String path = context.request().path(); if (path.startsWith("/tenant-a")) { String keycloakUrl = ConfigProvider.getConfig().getValue("keycloak.url", String.class); OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId("tenant-a"); config.setAuthServerUrl(keycloakUrl + "/realms/tenant-a"); config.setClientId("multi-tenant-client"); config.getCredentials().setSecret("secret"); config.setApplicationType(ApplicationType.HYBRID); return Uni.createFrom().item(config); } else { // resolve to default tenant config return Uni.createFrom().nullItem(); } } } In the preceding implementation, tenants are resolved from the request path. If no tenant can be inferred, null is returned to indicate that the default tenant configuration should be used. The tenant-a application type is hybrid ; it can accept HTTP bearer tokens if provided. Otherwise, it initiates an authorization code flow when authentication is required. 5.6. Configuring the application # Default tenant configuration %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=multi-tenant-client quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app # Tenant A configuration is created dynamically in CustomTenantConfigResolver # HTTP security configuration quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated The first configuration is the default tenant configuration that should be used when the tenant cannot be inferred from the request. Be aware that a %prod profile prefix is used with quarkus.oidc.auth-server-url to support testing a multitenant application with Dev Services For Keycloak. This configuration uses a Keycloak instance to authenticate users. The second configuration, provided by TenantConfigResolver , is used when an incoming request is mapped to the tenant-a tenant. Both configurations map to the same Keycloak server instance while using distinct realms . 
Alternatively, you can configure the tenant tenant-a directly in application.properties : # Default tenant configuration %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=multi-tenant-client quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app # Tenant A configuration quarkus.oidc.tenant-a.auth-server-url=http://localhost:8180/realms/tenant-a quarkus.oidc.tenant-a.client-id=multi-tenant-client quarkus.oidc.tenant-a.credentials.secret=secret quarkus.oidc.tenant-a.application-type=web-app # HTTP security configuration quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated In that case, also use a custom TenantResolver to resolve it: package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.TenantResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String path = context.request().path(); String[] parts = path.split("/"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } return parts[1]; } } You can define multiple tenants in your configuration file. To map them correctly when resolving a tenant from your TenantResolver implementation, ensure each has a unique alias. However, using a static tenant resolution, which involves configuring tenants in application.properties and resolving them with TenantResolver , does not work for testing endpoints with Dev Services for Keycloak because it does not know how the requests are mapped to individual tenants, and cannot dynamically provide tenant-specific quarkus.oidc.<tenant-id>.auth-server-url values. Therefore, using %prod prefixes with tenant-specific URLs within application.properties does not work in both test and development modes. Note When a current tenant represents an OIDC web-app application, the current io.vertx.ext.web.RoutingContext contains a tenant-id attribute by the time the custom tenant resolver has been called for all the requests completing the code authentication flow and the already authenticated requests, when either a tenant-specific state or session cookie already exists. Therefore, when working with multiple OIDC providers, you only need a path-specific check to resolve a tenant id if the RoutingContext does not have the tenant-id attribute set, for example: package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.TenantResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String tenantId = context.get("tenant-id"); if (tenantId != null) { return tenantId; } else { // Initial login request String path = context.request().path(); String[] parts = path.split("/"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } return parts[1]; } } } This is how Quarkus OIDC resolves static custom tenants if no custom TenantResolver is registered. A similar technique can be used with TenantConfigResolver , where a tenant-id provided in the context can return OidcTenantConfig already prepared with the request.
Note If you also use Hibernate ORM multitenancy or MongoDB with Panache multitenancy and both tenant ids are the same and must be extracted from the Vert.x RoutingContext , you can pass the tenant id from the OIDC Tenant Resolver to the Hibernate ORM Tenant Resolver or MongoDB with Panache Mongo Database Resolver as a RoutingContext attribute, for example: public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String tenantId = extractTenantId(context); context.put("tenantId", tenantId); return tenantId; } } 5.7. Starting and configuring the Keycloak server To start a Keycloak server, you can use Docker and run the following command: docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev where keycloak.version is set to 25.0.6 or higher. Access your Keycloak server at localhost:8180 . Log in as the admin user to access the Keycloak administration console. The username and password are both admin . Now, import the realms for the two tenants: Import the default-tenant-realm.json to create the default realm. Import the tenant-a-realm.json to create the realm for the tenant tenant-a . For more information, see the Keycloak documentation about how to create a new realm . 5.8. Running and using the application 5.8.1. Running in developer mode To run the microservice in dev mode, use: Using the Quarkus CLI: quarkus dev Using Maven: ./mvnw quarkus:dev Using Gradle: ./gradlew --console=plain quarkusDev 5.8.2. Running in JVM mode After exploring the application in dev mode, you can run it as a standard Java application. First, compile it: Using the Quarkus CLI: quarkus build Using Maven: ./mvnw install Using Gradle: ./gradlew build Then run it: java -jar target/quarkus-app/quarkus-run.jar 5.8.3. Running in native mode This same demo can be compiled into native code; no modifications are required. This implies that you no longer need to install a JVM on your production environment, as the runtime technology is included in the produced binary, and optimized to run with minimal resources. Compilation takes a bit longer, so this step is turned off by default; let's build again by enabling the native build: Using the Quarkus CLI: quarkus build --native Using Maven: ./mvnw install -Dnative Using Gradle: ./gradlew build -Dquarkus.native.enabled=true After a little while, you can run this binary directly: ./target/security-openid-connect-multi-tenancy-quickstart-runner 5.9. Test the application 5.9.1. Use the browser To test the application, open your browser and access the following URL: http://localhost:8080/default If everything works as expected, you are redirected to the Keycloak server to authenticate. Be aware that the requested path defines a default tenant, which we don't have mapped in the configuration file. In this case, the default configuration is used. To authenticate to the application, enter the following credentials in the Keycloak login page: Username: alice Password: alice After clicking the Login button, you are redirected back to the application. If you try now to access the application at the following URL: http://localhost:8080/tenant-a You are redirected again to the Keycloak login page. However, this time, you are going to authenticate by using a different realm. In both cases, the landing page shows the user's name and email if the user is successfully authenticated. 
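You can also call the /{tenant}/bearer resource from the command line to verify Bearer token authentication for the tenant-a tenant, which is configured with the hybrid application type. The following commands are a minimal sketch that is not part of the original quickstart; they assume that the imported tenant-a realm enables direct access (password) grants for the multi-tenant-client client, so adjust the realm, client, and credentials to match your setup. First, obtain an access token for alice from the Keycloak token endpoint: curl -s -d 'grant_type=password' -d 'client_id=multi-tenant-client' -d 'client_secret=secret' -d 'username=alice' -d 'password=alice' http://localhost:8180/realms/tenant-a/protocol/openid-connect/token | jq -r .access_token Then pass the returned token as a Bearer token to the endpoint: curl -H 'Authorization: Bearer <access_token>' http://localhost:8080/tenant-a/bearer The returned page should show the issuer of the tenant-a realm, confirming that the request was resolved to the tenant-a tenant.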
Although alice exists in both tenants, the application treats them as distinct users in separate realms. 5.10. Tenant resolution 5.10.1. Tenant resolution order OIDC tenants are resolved in the following order: io.quarkus.oidc.Tenant annotation is checked first if the proactive authentication is disabled. Dynamic tenant resolution using a custom TenantConfigResolver . Static tenant resolution using one of these options: custom TenantResolver , configured tenant paths, and defaulting to the last request path segment as a tenant id. Finally, the default OIDC tenant is selected if a tenant id has not been resolved after the preceding steps. See the following sections for more information: Resolve with annotations Dynamic tenant configuration resolution Static tenant configuration resolution Additionally, for the OIDC web-app applications, the state and session cookies also provide a hint about the tenant resolved with one of the above-mentioned options at the time when the authorization code flow started. See the Tenant resolution for OIDC web-app applications section for more information. 5.10.2. Resolve with annotations You can use the io.quarkus.oidc.Tenant annotation for resolving the tenant identifiers as an alternative to using io.quarkus.oidc.TenantResolver . Note Proactive HTTP authentication must be disabled ( quarkus.http.auth.proactive=false ) for this to work. For more information, see the Proactive authentication guide. Assuming your application supports two OIDC tenants, the hr and default tenants, all resource methods and classes carrying @Tenant("hr") are authenticated by using the OIDC provider configured by quarkus.oidc.hr.auth-server-url . In contrast, all other classes and methods are still authenticated by using the default OIDC provider. import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import io.quarkus.oidc.Tenant; import io.quarkus.security.Authenticated; @Authenticated @Path("/api/hello") public class HelloResource { @Tenant("hr") 1 @GET @Produces(MediaType.TEXT_PLAIN) public String sayHello() { return "Hello!"; } } 1 The io.quarkus.oidc.Tenant annotation must be placed on either the resource class or resource method. Tip In the example above, authentication of the sayHello endpoint is enforced with the @Authenticated annotation. Alternatively, if you use an HTTP Security policy to secure the endpoint, then, for the @Tenant annotation to be effective, you must delay this policy's permission check as shown in the following example: quarkus.http.auth.permission.authenticated.paths=/api/hello quarkus.http.auth.permission.authenticated.methods=GET quarkus.http.auth.permission.authenticated.policy=authenticated quarkus.http.auth.permission.authenticated.applies-to=JAXRS 1 1 Tell Quarkus to run the HTTP permission check after the tenant has been selected with the @Tenant annotation. 5.10.3. Dynamic tenant configuration resolution If you need a more dynamic configuration for the different tenants you want to support and don't want to end up with multiple entries in your configuration file, you can use the io.quarkus.oidc.TenantConfigResolver .
This interface allows you to dynamically create tenant configurations at runtime: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import java.util.function.Supplier; import io.smallrye.mutiny.Uni; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.TenantConfigResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantConfigResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String path = context.request().path(); String[] parts = path.split("/"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } if ("tenant-c".equals(parts[1])) { // Do 'return requestContext.runBlocking(createTenantConfig());' // if a blocking call is required to create a tenant config, return Uni.createFrom().item(createTenantConfig()); } //Resolve to default tenant configuration return null; } private Supplier<OidcTenantConfig> createTenantConfig() { final OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId("tenant-c"); config.setAuthServerUrl("http://localhost:8180/realms/tenant-c"); config.setClientId("multi-tenant-client"); OidcTenantConfig.Credentials credentials = new OidcTenantConfig.Credentials(); credentials.setSecret("my-secret"); config.setCredentials(credentials); // Any other setting supported by the quarkus-oidc extension return () -> config; } } The OidcTenantConfig returned by this method is the same one used to parse the oidc namespace configuration from the application.properties . You can populate it by using any settings supported by the quarkus-oidc extension. If the dynamic tenant resolver returns null , a Static tenant configuration resolution is attempted . 5.10.4. Static tenant configuration resolution When you set multiple tenant configurations in the application.properties file, you only need to specify how the tenant identifier gets resolved. To configure the resolution of the tenant identifier, use one of the following options: Resolve with TenantResolver Configure tenant paths Use last request path segment as tenant id Resolve tenants with a token issuer claim These tenant resolution options are tried in the order they are listed until the tenant id gets resolved. If the tenant id remains unresolved ( null ), the default (unnamed) tenant configuration is selected. 5.10.4.1. 
Resolve with TenantResolver The following application.properties example shows how you can resolve the tenant identifier of two tenants named a and b by using the TenantResolver method: # Tenant 'a' configuration quarkus.oidc.a.auth-server-url=http://localhost:8180/realms/quarkus-a quarkus.oidc.a.client-id=client-a quarkus.oidc.a.credentials.secret=client-a-secret # Tenant 'b' configuration quarkus.oidc.b.auth-server-url=http://localhost:8180/realms/quarkus-b quarkus.oidc.b.client-id=client-b quarkus.oidc.b.credentials.secret=client-b-secret You can return the tenant id of either a or b from io.quarkus.oidc.TenantResolver : import io.quarkus.oidc.TenantResolver; import io.vertx.ext.web.RoutingContext; public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String path = context.request().path(); if (path.endsWith("a")) { return "a"; } else if (path.endsWith("b")) { return "b"; } else { // default tenant return null; } } } In this example, the value of the last request path segment is a tenant id, but if required, you can implement a more complex tenant identifier resolution logic. 5.10.4.2. Configure tenant paths You can use the quarkus.oidc.tenant-paths configuration property for resolving the tenant identifier as an alternative to using io.quarkus.oidc.TenantResolver . Here is how you can select the hr tenant for the sayHello endpoint of the HelloResource resource used in the example: quarkus.oidc.hr.tenant-paths=/api/hello 1 quarkus.oidc.a.tenant-paths=/api/* 2 quarkus.oidc.b.tenant-paths=/*/hello 3 1 Same path-matching rules apply as for the quarkus.http.auth.permission.authenticated.paths=/api/hello configuration property from the example. 2 The wildcard placed at the end of the path represents any number of path segments. However the path is less specific than the /api/hello , therefore the hr tenant will be used to secure the sayHello endpoint. 3 The wildcard in the /*/hello represents exactly one path segment. Nevertheless, the wildcard is less specific than the api , therefore the hr tenant will be used. Tip Path-matching mechanism works exactly same as in the Authorization using configuration . 5.10.4.3. Use last request path segment as tenant id The default resolution for a tenant identifier is convention based, whereby the authentication request must include the tenant identifier in the last segment of the request path. The following application.properties example shows how you can configure two tenants named google and github : # Tenant 'google' configuration quarkus.oidc.google.provider=google quarkus.oidc.google.client-id=USD{google-client-id} quarkus.oidc.google.credentials.secret=USD{google-client-secret} quarkus.oidc.google.authentication.redirect-path=/signed-in # Tenant 'github' configuration quarkus.oidc.github.provider=github quarkus.oidc.github.client-id=USD{github-client-id} quarkus.oidc.github.credentials.secret=USD{github-client-secret} quarkus.oidc.github.authentication.redirect-path=/signed-in In the provided example, both tenants configure OIDC web-app applications to use an authorization code flow to authenticate users and require session cookies to be generated after authentication. After Google or GitHub authenticates the current user, the user gets returned to the /signed-in area for authenticated users, such as a secured resource path on the JAX-RS endpoint. 
Finally, to complete the default tenant resolution, set the following configuration property: quarkus.http.auth.permission.login.paths=/google,/github quarkus.http.auth.permission.login.policy=authenticated If the endpoint is running on http://localhost:8080 , you can also provide UI options for users to log in to either http://localhost:8080/google or http://localhost:8080/github , without having to add specific /google or /github JAX-RS resource paths. Tenant identifiers are also recorded in the session cookie names after the authentication is completed. Therefore, authenticated users can access the secured application area without requiring either the google or github path values to be included in the secured URL. Default resolution can also work for Bearer token authentication. Still, it might be less practical because a tenant identifier must always be set as the last path segment value. 5.10.4.4. Resolve tenants with a token issuer claim OIDC tenants which support Bearer token authentication can be resolved using the access token's issuer. The following conditions must be met for the issuer-based resolution to work: The access token must be in the JWT format and contain an issuer ( iss ) token claim. Only OIDC tenants with the application type service or hybrid are considered. These tenants must have a token issuer discovered or configured. The issuer-based resolution is enabled with the quarkus.oidc.resolve-tenants-with-issuer property. For example: quarkus.oidc.resolve-tenants-with-issuer=true 1 quarkus.oidc.tenant-a.auth-server-url=USD{tenant-a-oidc-provider} 2 quarkus.oidc.tenant-a.client-id=USD{tenant-a-client-id} quarkus.oidc.tenant-a.credentials.secret=USD{tenant-a-client-secret} quarkus.oidc.tenant-b.auth-server-url=USD{tenant-b-oidc-provider} 3 quarkus.oidc.tenant-b.discover-enabled=false quarkus.oidc.tenant-b.token.issuer=USD{tenant-b-oidc-provider}/issuer quarkus.oidc.tenant-b.jwks-path=/jwks quarkus.oidc.tenant-b.token-path=/tokens quarkus.oidc.tenant-b.client-id=USD{tenant-b-client-id} quarkus.oidc.tenant-b.credentials.secret=USD{tenant-b-client-secret} 1 Tenants tenant-a and tenant-b are resolved using a JWT access token's issuer iss claim value. 2 Tenant tenant-a discovers the issuer from the OIDC provider's well-known configuration endpoint. 3 Tenant tenant-b configures the issuer because its OIDC provider does not support the discovery. 5.10.5. Tenant resolution for OIDC web-app applications Tenant resolution for the OIDC web-app applications must be done at least 3 times during an authorization code flow, when the OIDC tenant-specific configuration affects how each of the following steps is run. Step 1: Unauthenticated user accesses an endpoint and is redirected to OIDC provider When an unauthenticated user accesses a secured path, the user is redirected to the OIDC provider to authenticate and the tenant configuration is used to build the redirect URI. All the static and dynamic tenant resolution options listed in the Static tenant configuration resolution and Dynamic tenant configuration resolution sections can be used to resolve a tenant. Step 2: The user is redirected back to the endpoint After the provider authentication, the user is redirected back to the Quarkus endpoint and the tenant configuration is used to complete the authorization code flow. All the static and dynamic tenant resolution options listed in the Static tenant configuration resolution and Dynamic tenant configuration resolution sections can be used to resolve a tenant. 
Before the tenant resolution begins, the authorization code flow state cookie is used to set the already resolved tenant configuration id as a RoutingContext tenant-id attribute: both custom dynamic TenantConfigResolver and static TenantResolver tenant resolvers can check it. Step 3: Authenticated user accesses the secured path using the session cookie The tenant configuration determines how the session cookie is verified and refreshed. Before the tenant resolution begins, the authorization code flow session cookie is used to set the already resolved tenant configuration id as a RoutingContext tenant-id attribute: both custom dynamic TenantConfigResolver and static TenantResolver tenant resolvers can check it. For example, here is how a custom TenantConfigResolver can avoid creating the already resolved tenant configuration, that may otherwise require blocking reads from the database or other remote sources: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.OidcTenantConfig.ApplicationType; import io.quarkus.oidc.TenantConfigResolver; import io.quarkus.oidc.runtime.OidcUtils; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantConfigResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String resolvedTenantId = context.get(OidcUtils.TENANT_ID_ATTRIBUTE); if (resolvedTenantId != null) { 1 return null; } String path = context.request().path(); 2 if (path.endsWith("tenant-a")) { return Uni.createFrom().item(createTenantConfig("tenant-a", "client-a", "secret-a")); } else if (path.endsWith("tenant-b")) { return Uni.createFrom().item(createTenantConfig("tenant-b", "client-b", "secret-b")); } // Default tenant id return null; } private OidcTenantConfig createTenantConfig(String tenantId, String clientId, String secret) { final OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId(tenantId); config.setAuthServerUrl("http://localhost:8180/realms/" + tenantId); config.setClientId(clientId); config.getCredentials().setSecret(secret); config.setApplicationType(ApplicationType.WEB_APP); return config; } } 1 Let Quarkus use the already resolved tenant configuration if it has been resolved earlier. 2 Check the request path to create tenant configurations. The default configuration may look like this: quarkus.oidc.auth-server-url=http://localhost:8180/realms/default quarkus.oidc.client-id=client-default quarkus.oidc.credentials.secret=secret-default quarkus.oidc.application-type=web-app The preceding example assumes that the tenant-a , tenant-b and default tenants are all used to protect the same endpoint paths. In other words, after the user has authenticated with the tenant-a configuration, this user will not be able to choose to authenticate with the tenant-b or default configuration before this user logs out and has a session cookie cleared or expired. The situation where multiple OIDC web-app tenants protect the tenant-specific paths is less typical and also requires an extra care. 
When multiple OIDC web-app tenants such as tenant-a , tenant-b and default tenants are used to control access to the tenant specific paths, the users authenticated with one OIDC provider must not be able to access the paths requiring an authentication with another provider, otherwise the results can be unpredictable, most likely causing unexpected authentication failures. For example, if the tenant-a authentication requires a Keycloak authentication and the tenant-b authentication requires an Auth0 authentication, then, if the tenant-a authenticated user attempts to access a path secured by the tenant-b configuration, then the session cookie will not be verified, since the Auth0 public verification keys can not be used to verify the tokens signed by Keycloak. An easy, recommended way to avoid multiple web-app tenants conflicting with each other is to set the tenant specific session path as shown in the following example: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.OidcTenantConfig.ApplicationType; import io.quarkus.oidc.TenantConfigResolver; import io.quarkus.oidc.runtime.OidcUtils; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantConfigResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String resolvedTenantId = context.get(OidcUtils.TENANT_ID_ATTRIBUTE); if (resolvedTenantId != null) { 1 return null; } String path = context.request().path(); 2 if (path.endsWith("tenant-a")) { return Uni.createFrom().item(createTenantConfig("tenant-a", "/tenant-a", "client-a", "secret-a")); } else if (path.endsWith("tenant-b")) { return Uni.createFrom().item(createTenantConfig("tenant-b", "/tenant-b", "client-b", "secret-b")); } // Default tenant id return null; } private OidcTenantConfig createTenantConfig(String tenantId, String cookiePath, String clientId, String secret) { final OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId(tenantId); config.setAuthServerUrl("http://localhost:8180/realms/" + tenantId); config.setClientId(clientId); config.getCredentials().setSecret(secret); config.setApplicationType(ApplicationType.WEB_APP); config.getAuthentication().setCookiePath(cookiePath); 3 return config; } } 1 Let Quarkus use the already resolved tenant configuration if it has been resolved earlier. 2 Check the request path to create tenant configurations. 3 Set the tenant-specific cookie paths which makes sure the session cookie is only visible to the tenant which created it. 
The default tenant configuration should be adjusted like this: quarkus.oidc.auth-server-url=http://localhost:8180/realms/default quarkus.oidc.client-id=client-default quarkus.oidc.credentials.secret=secret-default quarkus.oidc.authentication.cookie-path=/default quarkus.oidc.application-type=web-app Having the same session cookie path when multiple OIDC web-app tenants protect the tenant-specific paths is not recommended and should be avoided as it requires even more care from the custom resolvers, for example: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.OidcTenantConfig.ApplicationType; import io.quarkus.oidc.TenantConfigResolver; import io.quarkus.oidc.runtime.OidcUtils; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantConfigResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String path = context.request().path(); 1 if (path.endsWith("tenant-a")) { String resolvedTenantId = context.get(OidcUtils.TENANT_ID_ATTRIBUTE); if (resolvedTenantId != null) { if ("tenant-a".equals(resolvedTenantId)) { 2 return null; } else { // Require a "tenant-a" authentication context.remove(OidcUtils.TENANT_ID_ATTRIBUTE); 3 } } return Uni.createFrom().item(createTenantConfig("tenant-a", "client-a", "secret-a")); } else if (path.endsWith("tenant-b")) { String resolvedTenantId = context.get(OidcUtils.TENANT_ID_ATTRIBUTE); if (resolvedTenantId != null) { if ("tenant-b".equals(resolvedTenantId)) { 4 return null; } else { // Require a "tenant-b" authentication context.remove(OidcUtils.TENANT_ID_ATTRIBUTE); 5 } } return Uni.createFrom().item(createTenantConfig("tenant-b", "client-b", "secret-b")); } // Set default tenant id context.put(OidcUtils.TENANT_ID_ATTRIBUTE, OidcUtils.DEFAULT_TENANT_ID); 6 return null; } private OidcTenantConfig createTenantConfig(String tenantId, String clientId, String secret) { final OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId(tenantId); config.setAuthServerUrl("http://localhost:8180/realms/" + tenantId); config.setClientId(clientId); config.getCredentials().setSecret(secret); config.setApplicationType(ApplicationType.WEB_APP); return config; } } 1 Check the request path to create tenant configurations. 2 4 Let Quarkus use the already resolved tenant configuration if the already resolved tenant is expected for the current path. 3 5 Remove the tenant-id attribute if the already resolved tenant configuration is not expected for the current path. 6 Use the default tenant for all other paths. It is equivalent to removing the tenant-id attribute. 5.11. Disabling tenant configurations Custom TenantResolver and TenantConfigResolver implementations might return null if no tenant can be inferred from the current request and a fallback to the default tenant configuration is required. If you expect the custom resolvers always to resolve a tenant, you do not need to configure the default tenant resolution. To turn off the default tenant configuration, set quarkus.oidc.tenant-enabled=false . Note The default tenant configuration is automatically disabled when quarkus.oidc.auth-server-url is not configured, but either custom tenant configurations are available or TenantConfigResolver is registered. 
Be aware that tenant-specific configurations can also be disabled, for example: quarkus.oidc.tenant-a.tenant-enabled=false . 5.12. References OIDC configuration properties Keycloak Documentation OpenID Connect JSON Web Token Google OpenID Connect Quarkus Security overview | [
"quarkus create app org.acme:security-openid-connect-multi-tenancy-quickstart --extension='oidc,rest-jackson' --no-code cd security-openid-connect-multi-tenancy-quickstart",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.1:create -DprojectGroupId=org.acme -DprojectArtifactId=security-openid-connect-multi-tenancy-quickstart -Dextensions='oidc,rest-jackson' -DnoCode cd security-openid-connect-multi-tenancy-quickstart",
"quarkus extension add oidc",
"./mvnw quarkus:add-extension -Dextensions='oidc'",
"./gradlew addExtension --extensions='oidc'",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency>",
"implementation(\"io.quarkus:quarkus-oidc\")",
"package org.acme.quickstart.oidc; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; @Path(\"/{tenant}\") public class HomeResource { /** * Injection point for the ID Token issued by the OIDC provider. */ @Inject @IdToken JsonWebToken idToken; /** * Injection point for the Access Token issued by the OIDC provider. */ @Inject JsonWebToken accessToken; /** * Returns the ID Token info. * This endpoint exists only for demonstration purposes. * Do not expose this token in a real application. * * @return ID Token info */ @GET @Produces(\"text/html\") public String getIdTokenInfo() { StringBuilder response = new StringBuilder().append(\"<html>\") .append(\"<body>\"); response.append(\"<h2>Welcome, \").append(this.idToken.getClaim(\"email\").toString()).append(\"</h2>\\n\"); response.append(\"<h3>You are accessing the application within tenant <b>\").append(idToken.getIssuer()).append(\" boundaries</b></h3>\"); return response.append(\"</body>\").append(\"</html>\").toString(); } /** * Returns the Access Token info. * This endpoint exists only for demonstration purposes. * Do not expose this token in a real application. * * @return Access Token info */ @GET @Produces(\"text/html\") @Path(\"bearer\") public String getAccessTokenInfo() { StringBuilder response = new StringBuilder().append(\"<html>\") .append(\"<body>\"); response.append(\"<h2>Welcome, \").append(this.accessToken.getClaim(\"email\").toString()).append(\"</h2>\\n\"); response.append(\"<h3>You are accessing the application within tenant <b>\").append(accessToken.getIssuer()).append(\" boundaries</b></h3>\"); return response.append(\"</body>\").append(\"</html>\").toString(); } }",
"package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import org.eclipse.microprofile.config.ConfigProvider; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.OidcTenantConfig.ApplicationType; import io.quarkus.oidc.TenantConfigResolver; import io.quarkus.oidc.runtime.OidcUtils; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String path = context.request().path(); if (path.startsWith(\"/tenant-a\")) { String keycloakUrl = ConfigProvider.getConfig().getValue(\"keycloak.url\", String.class); OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId(\"tenant-a\"); config.setAuthServerUrl(keycloakUrl + \"/realms/tenant-a\"); config.setClientId(\"multi-tenant-client\"); config.getCredentials().setSecret(\"secret\"); config.setApplicationType(ApplicationType.HYBRID); return Uni.createFrom().item(config); } else { // resolve to default tenant config return Uni.createFrom().nullItem(); } } }",
"Default tenant configuration %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=multi-tenant-client quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app Tenant A configuration is created dynamically in CustomTenantConfigResolver HTTP security configuration quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated",
"Default tenant configuration %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=multi-tenant-client quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app Tenant A configuration quarkus.oidc.tenant-a.auth-server-url=http://localhost:8180/realms/tenant-a quarkus.oidc.tenant-a.client-id=multi-tenant-client quarkus.oidc.tenant-a.credentials.secret=secret quarkus.oidc.tenant-a.application-type=web-app HTTP security configuration quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated",
"package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.TenantResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String path = context.request().path(); String[] parts = path.split(\"/\"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } return parts[1]; } }",
"package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.TenantResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String tenantId = context.get(\"tenant-id\"); if (tenantId != null) { return tenantId; } else { // Initial login request String path = context.request().path(); String[] parts = path.split(\"/\"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } return parts[1]; } } }",
"public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String tenantId = extractTenantId(context); context.put(\"tenantId\", tenantId); return tenantId; } }",
"docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev",
"quarkus dev",
"./mvnw quarkus:dev",
"./gradlew --console=plain quarkusDev",
"quarkus build",
"./mvnw install",
"./gradlew build",
"java -jar target/quarkus-app/quarkus-run.jar",
"quarkus build --native",
"./mvnw install -Dnative",
"./gradlew build -Dquarkus.native.enabled=true",
"./target/security-openid-connect-multi-tenancy-quickstart-runner",
"import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import io.quarkus.oidc.Tenant; import io.quarkus.security.Authenticated; @Authenticated @Path(\"/api/hello\") public class HelloResource { @Tenant(\"hr\") 1 @GET @Produces(MediaType.TEXT_PLAIN) public String sayHello() { return \"Hello!\"; } }",
"quarkus.http.auth.permission.authenticated.paths=/api/hello quarkus.http.auth.permission.authenticated.methods=GET quarkus.http.auth.permission.authenticated.policy=authenticated quarkus.http.auth.permission.authenticated.applies-to=JAXRS 1",
"package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import java.util.function.Supplier; import io.smallrye.mutiny.Uni; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.TenantConfigResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantConfigResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String path = context.request().path(); String[] parts = path.split(\"/\"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } if (\"tenant-c\".equals(parts[1])) { // Do 'return requestContext.runBlocking(createTenantConfig());' // if a blocking call is required to create a tenant config, return Uni.createFrom().item(createTenantConfig()); } //Resolve to default tenant configuration return null; } private Supplier<OidcTenantConfig> createTenantConfig() { final OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId(\"tenant-c\"); config.setAuthServerUrl(\"http://localhost:8180/realms/tenant-c\"); config.setClientId(\"multi-tenant-client\"); OidcTenantConfig.Credentials credentials = new OidcTenantConfig.Credentials(); credentials.setSecret(\"my-secret\"); config.setCredentials(credentials); // Any other setting supported by the quarkus-oidc extension return () -> config; } }",
"Tenant 'a' configuration quarkus.oidc.a.auth-server-url=http://localhost:8180/realms/quarkus-a quarkus.oidc.a.client-id=client-a quarkus.oidc.a.credentials.secret=client-a-secret Tenant 'b' configuration quarkus.oidc.b.auth-server-url=http://localhost:8180/realms/quarkus-b quarkus.oidc.b.client-id=client-b quarkus.oidc.b.credentials.secret=client-b-secret",
"import io.quarkus.oidc.TenantResolver; import io.vertx.ext.web.RoutingContext; public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String path = context.request().path(); if (path.endsWith(\"a\")) { return \"a\"; } else if (path.endsWith(\"b\")) { return \"b\"; } else { // default tenant return null; } } }",
"quarkus.oidc.hr.tenant-paths=/api/hello 1 quarkus.oidc.a.tenant-paths=/api/* 2 quarkus.oidc.b.tenant-paths=/*/hello 3",
"Tenant 'google' configuration quarkus.oidc.google.provider=google quarkus.oidc.google.client-id=USD{google-client-id} quarkus.oidc.google.credentials.secret=USD{google-client-secret} quarkus.oidc.google.authentication.redirect-path=/signed-in Tenant 'github' configuration quarkus.oidc.github.provider=github quarkus.oidc.github.client-id=USD{github-client-id} quarkus.oidc.github.credentials.secret=USD{github-client-secret} quarkus.oidc.github.authentication.redirect-path=/signed-in",
"quarkus.http.auth.permission.login.paths=/google,/github quarkus.http.auth.permission.login.policy=authenticated",
"quarkus.oidc.resolve-tenants-with-issuer=true 1 quarkus.oidc.tenant-a.auth-server-url=USD{tenant-a-oidc-provider} 2 quarkus.oidc.tenant-a.client-id=USD{tenant-a-client-id} quarkus.oidc.tenant-a.credentials.secret=USD{tenant-a-client-secret} quarkus.oidc.tenant-b.auth-server-url=USD{tenant-b-oidc-provider} 3 quarkus.oidc.tenant-b.discover-enabled=false quarkus.oidc.tenant-b.token.issuer=USD{tenant-b-oidc-provider}/issuer quarkus.oidc.tenant-b.jwks-path=/jwks quarkus.oidc.tenant-b.token-path=/tokens quarkus.oidc.tenant-b.client-id=USD{tenant-b-client-id} quarkus.oidc.tenant-b.credentials.secret=USD{tenant-b-client-secret}",
"package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.OidcTenantConfig.ApplicationType; import io.quarkus.oidc.TenantConfigResolver; import io.quarkus.oidc.runtime.OidcUtils; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantConfigResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String resolvedTenantId = context.get(OidcUtils.TENANT_ID_ATTRIBUTE); if (resolvedTenantId != null) { 1 return null; } String path = context.request().path(); 2 if (path.endsWith(\"tenant-a\")) { return Uni.createFrom().item(createTenantConfig(\"tenant-a\", \"client-a\", \"secret-a\")); } else if (path.endsWith(\"tenant-b\")) { return Uni.createFrom().item(createTenantConfig(\"tenant-b\", \"client-b\", \"secret-b\")); } // Default tenant id return null; } private OidcTenantConfig createTenantConfig(String tenantId, String clientId, String secret) { final OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId(tenantId); config.setAuthServerUrl(\"http://localhost:8180/realms/\" + tenantId); config.setClientId(clientId); config.getCredentials().setSecret(secret); config.setApplicationType(ApplicationType.WEB_APP); return config; } }",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/default quarkus.oidc.client-id=client-default quarkus.oidc.credentials.secret=secret-default quarkus.oidc.application-type=web-app",
"package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.OidcTenantConfig.ApplicationType; import io.quarkus.oidc.TenantConfigResolver; import io.quarkus.oidc.runtime.OidcUtils; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantConfigResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String resolvedTenantId = context.get(OidcUtils.TENANT_ID_ATTRIBUTE); if (resolvedTenantId != null) { 1 return null; } String path = context.request().path(); 2 if (path.endsWith(\"tenant-a\")) { return Uni.createFrom().item(createTenantConfig(\"tenant-a\", \"/tenant-a\", \"client-a\", \"secret-a\")); } else if (path.endsWith(\"tenant-b\")) { return Uni.createFrom().item(createTenantConfig(\"tenant-b\", \"/tenant-b\", \"client-b\", \"secret-b\")); } // Default tenant id return null; } private OidcTenantConfig createTenantConfig(String tenantId, String cookiePath, String clientId, String secret) { final OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId(tenantId); config.setAuthServerUrl(\"http://localhost:8180/realms/\" + tenantId); config.setClientId(clientId); config.getCredentials().setSecret(secret); config.setApplicationType(ApplicationType.WEB_APP); config.getAuthentication().setCookiePath(cookiePath); 3 return config; } }",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/default quarkus.oidc.client-id=client-default quarkus.oidc.credentials.secret=secret-default quarkus.oidc.authentication.cookie-path=/default quarkus.oidc.application-type=web-app",
"package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.OidcTenantConfig.ApplicationType; import io.quarkus.oidc.TenantConfigResolver; import io.quarkus.oidc.runtime.OidcUtils; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantConfigResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String path = context.request().path(); 1 if (path.endsWith(\"tenant-a\")) { String resolvedTenantId = context.get(OidcUtils.TENANT_ID_ATTRIBUTE); if (resolvedTenantId != null) { if (\"tenant-a\".equals(resolvedTenantId)) { 2 return null; } else { // Require a \"tenant-a\" authentication context.remove(OidcUtils.TENANT_ID_ATTRIBUTE); 3 } } return Uni.createFrom().item(createTenantConfig(\"tenant-a\", \"client-a\", \"secret-a\")); } else if (path.endsWith(\"tenant-b\")) { String resolvedTenantId = context.get(OidcUtils.TENANT_ID_ATTRIBUTE); if (resolvedTenantId != null) { if (\"tenant-b\".equals(resolvedTenantId)) { 4 return null; } else { // Require a \"tenant-b\" authentication context.remove(OidcUtils.TENANT_ID_ATTRIBUTE); 5 } } return Uni.createFrom().item(createTenantConfig(\"tenant-b\", \"client-b\", \"secret-b\")); } // Set default tenant id context.put(OidcUtils.TENANT_ID_ATTRIBUTE, OidcUtils.DEFAULT_TENANT_ID); 6 return null; } private OidcTenantConfig createTenantConfig(String tenantId, String clientId, String secret) { final OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId(tenantId); config.setAuthServerUrl(\"http://localhost:8180/realms/\" + tenantId); config.setClientId(clientId); config.getCredentials().setSecret(secret); config.setApplicationType(ApplicationType.WEB_APP); return config; } }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/openid_connect_oidc_authentication/security-openid-connect-multitenancy |
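The resolvers in the listings above all derive the tenant from the request path. As a hedged sketch that is not part of the quickstart, the same io.quarkus.oidc.TenantResolver contract can also map the request host to a tenant; the subdomain names used below are assumptions for illustration only.

package org.acme.quickstart.oidc;

import jakarta.enterprise.context.ApplicationScoped;

import io.quarkus.oidc.TenantResolver;
import io.vertx.ext.web.RoutingContext;

/**
 * Host-based variant of the path-based resolvers above.
 * Maps the subdomain of the request host to a tenant id, so a request to
 * "tenant-a.example.com" resolves to the "tenant-a" OIDC tenant.
 * The host names are illustrative assumptions only.
 */
@ApplicationScoped
public class HostTenantResolver implements TenantResolver {

    @Override
    public String resolve(RoutingContext context) {
        String host = context.request().host(); // for example "tenant-a.example.com:8080"
        if (host == null) {
            // Fall back to the default tenant configuration
            return null;
        }
        String name = host.split(":")[0]; // strip an optional port
        if (name.startsWith("tenant-a.")) {
            return "tenant-a";
        } else if (name.startsWith("tenant-b.")) {
            return "tenant-b";
        }
        // Default tenant configuration
        return null;
    }
}

As with the path-based examples, returning null selects the default tenant configuration.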
Virtualization Administration Guide | Virtualization Administration Guide Red Hat Enterprise Linux 6 Managing your virtual environment Jiri Herrmann Red Hat Customer Content Services [email protected] Yehuda Zimmerman Red Hat Customer Content Services Laura Novich Red Hat Customer Content Services Scott Radvan Red Hat Customer Content Services Dayle Parker Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/index |
Chapter 2. Securing management interfaces and applications | Chapter 2. Securing management interfaces and applications 2.1. Adding authentication and authorization to management interfaces You can add authentication and authorization for management interfaces to secure them by using a security domain. To access the management interfaces after you add authentication and authorization, users must enter login credentials. You can secure JBoss EAP management interfaces as follows: Management CLI By configuring a sasl-authentication-factory . Management console By configuring an http-authentication-factory . Prerequisites You have created a security domain referencing a security realm. JBoss EAP is running. Procedure Create an http-authentication-factory , or a sasl-authentication-factory . Create an http-authentication-factory . Syntax Example Create a sasl-authentication-factory . Syntax Example Update the management interfaces. Use the http-authentication-factory to secure the management console. Syntax Example Use the sasl-authentication-factory to secure the management CLI. Syntax Example Reload the server. Verification To verify that the management console requires authentication and authorization, navigate to the management console at http://127.0.0.1:9990/console/index.html . You are prompted to enter user name and password. To verify that the management CLI requires authentication and authorization, start the management CLI using the following command: You are prompted to enter user name and password. Additional resources http-authentication-factory attributes sasl-authentication-factory attributes 2.2. Using a security domain to authenticate and authorize application users Use a security domain that references a security realm to authenticate and authorize application users. The procedures for developing an application are provided only as an example. 2.2.1. Developing a simple web application You can create a simple web application to follow along with the configuring security realms examples. Note The following procedures are provided as an example only. If you already have an application that you want to secure, you can skip these and go directly to Adding authentication and authorization to applications . 2.2.1.1. Creating a Maven project for web-application development For creating a web-application, create a Maven project with the required dependencies and the directory structure. Important The following procedure is provided only as an example and should not be used in a production environment. For information about creating applications for JBoss EAP, see Getting started with developing applications for JBoss EAP deployment . Prerequisites You have installed Maven. For more information, see Downloading Apache Maven . Procedure Set up a Maven project using the mvn command. The command creates the directory structure for the project and the pom.xml configuration file. 
Syntax Example Navigate to the application root directory: Syntax Example Replace the content of the generated pom.xml file with the following text: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <version.maven.war.plugin>3.4.0</version.maven.war.plugin> </properties> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <version>6.0.0</version> <scope>provided</scope> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.maven.war.plugin}</version> </plugin> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>4.2.2.Final</version> </plugin> </plugins> </build> </project> Verification In the application root directory, enter the following command: You get an output similar to the following: You can now create a web-application. 2.2.1.2. Creating a web application Create a web application containing a servlet that returns the user name obtained from the logged-in user's principal. If there is no logged-in user, the servlet returns the text "NO AUTHENTICATED USER". In this procedure, <application_home> refers to the directory that contains the pom.xml configuration file for the application. Prerequisites You have created a Maven project. For more information, see Creating a Maven project for web-application development . JBoss EAP is running. Procedure Create a directory to store the Java files. Syntax Example Navigate to the new directory. Syntax Example Create a file SecuredServlet.java with the following content: package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; /** * A simple secured HTTP servlet. It returns the user name of obtained * from the logged-in user's Principal. If there is no logged-in user, * it returns the text "NO AUTHENTICATED USER". */ @WebServlet("/secured") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println("<html>"); writer.println(" <head><title>Secured Servlet</title></head>"); writer.println(" <body>"); writer.println(" <h1>Secured Servlet</h1>"); writer.println(" <p>"); writer.print(" Current Principal '"); Principal user = req.getUserPrincipal(); writer.print(user != null ? 
user.getName() : "NO AUTHENTICATED USER"); writer.print("'"); writer.println(" </p>"); writer.println(" </body>"); writer.println("</html>"); } } } In the application root directory, compile your application with the following command: Deploy the application. Verification In a browser, navigate to http://localhost:8080/simple-webapp-example/secured . You get the following message: Because no authentication mechanism is added, you can access the application. You can now secure this application by using a security domain so that only authenticated users can access it. 2.2.2. Adding authentication and authorization to applications You can add authentication and authorization to web applications to secure them by using a security domain. To access the web applications after you add authentication and authorization, users must enter login credentials. Prerequisites You have created a security domain referencing a security realm. You have deployed applications on JBoss EAP. JBoss EAP is running. Procedure Configure an application-security-domain in the undertow subsystem : Syntax Example Configure the application's web.xml to protect the application resources. Syntax <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd" > <web-app> <!-- Define the security constraints for the application resources. Specify the URL pattern for which a challenge is --> <security-constraint> <web-resource-collection> <web-resource-name><!-- Name of the resources to protect --></web-resource-name> <url-pattern> <!-- The URL to protect --></url-pattern> </web-resource-collection> <!-- Define the role that can access the protected resource --> <auth-constraint> <role-name> <!-- Role name as defined in the security domain --></role-name> <!-- To disable authentication you can use the wildcard * To authenticate but allow any role, use the wildcard **. --> </auth-constraint> </security-constraint> <login-config> <auth-method> <!-- The authentication method to use. Can be: BASIC CLIENT-CERT DIGEST FORM SPNEGO --> </auth-method> <realm-name><!-- The name of realm to send in the challenge --></realm-name> </login-config> </web-app> Example <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd" > <web-app> <!-- Define the security constraints for the application resources. Specify the URL pattern for which a challenge is --> <security-constraint> <web-resource-collection> <web-resource-name>all</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <!-- Define the role that can access the protected resource --> <auth-constraint> <role-name>Admin</role-name> <!-- To disable authentication you can use the wildcard * To authenticate but allow any role, use the wildcard **. --> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> <realm-name>exampleSecurityRealm</realm-name> </login-config> </web-app> Note You can use a different auth-method . Configure your application to use a security domain by either creating a jboss-web.xml file in your application or setting the default security domain in the undertow subsystem. Create jboss-web.xml file in the your application's WEB-INF directory referencing the application-security-domain . 
Syntax <jboss-web> <security-domain> <!-- The security domain to associate with the application --></security-domain> </jboss-web> Example <jboss-web> <security-domain>exampleApplicationSecurityDomain</security-domain> </jboss-web> Set the default security domain in the undertow subsystem for applications. Syntax Example Reload the server. Verification In the application root directory, compile your application with the following command: Deploy the application. In a browser, navigate to http://localhost:8080/simple-webapp-example/secured . You get a login prompt confirming that authentication is now required to access the application. Your application is now secured with a security domain and users can log in only after authenticating. Additionally, only users with specified roles can access the application. | [
"/subsystem=elytron/http-authentication-factory= <authentication_factory_name> :add(http-server-mechanism-factory=global, security-domain= <security_domain_name> , mechanism-configurations=[{mechanism-name= <mechanism-name> , mechanism-realm-configurations=[{realm-name= <realm_name> }]}])",
"/subsystem=elytron/http-authentication-factory=exampleAuthenticationFactory:add(http-server-mechanism-factory=global, security-domain=exampleSecurityDomain, mechanism-configurations=[{mechanism-name=BASIC, mechanism-realm-configurations=[{realm-name=exampleSecurityRealm}]}]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/sasl-authentication-factory= <sasl_authentication_factory_name> :add(security-domain= <security_domain> ,sasl-server-factory=configured,mechanism-configurations=[{mechanism-name= <mechanism-name> ,mechanism-realm-configurations=[{realm-name= <realm_name> }]}])",
"/subsystem=elytron/sasl-authentication-factory=exampleSaslAuthenticationFactory:add(security-domain=exampleSecurityDomain,sasl-server-factory=configured,mechanism-configurations=[{mechanism-name=PLAIN,mechanism-realm-configurations=[{realm-name=exampleSecurityRealm}]}]) {\"outcome\" => \"success\"}",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-authentication-factory, value= <authentication_factory_name> )",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-authentication-factory, value=exampleAuthenticationFactory) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-upgrade,value={enabled=true,sasl-authentication-factory= <sasl_authentication_factory> })",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-upgrade,value={enabled=true,sasl-authentication-factory=exampleSaslAuthenticationFactory}) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"reload",
"bin/jboss-cli.sh --connect",
"mvn archetype:generate -DgroupId= USD{group-to-which-your-application-belongs} -DartifactId= USD{name-of-your-application} -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"mvn archetype:generate -DgroupId=com.example.app -DartifactId=simple-webapp-example -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"cd <name-of-your-application>",
"cd simple-webapp-example",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <version.maven.war.plugin>3.4.0</version.maven.war.plugin> </properties> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <version>6.0.0</version> <scope>provided</scope> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.maven.war.plugin}</version> </plugin> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>4.2.2.Final</version> </plugin> </plugins> </build> </project>",
"mvn install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 0.795 s [INFO] Finished at: 2022-04-28T17:39:48+05:30 [INFO] ------------------------------------------------------------------------",
"mkdir -p src/main/java/<path_based_on_artifactID>",
"mkdir -p src/main/java/com/example/app",
"cd src/main/java/<path_based_on_artifactID>",
"cd src/main/java/com/example/app",
"package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; /** * A simple secured HTTP servlet. It returns the user name of obtained * from the logged-in user's Principal. If there is no logged-in user, * it returns the text \"NO AUTHENTICATED USER\". */ @WebServlet(\"/secured\") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println(\"<html>\"); writer.println(\" <head><title>Secured Servlet</title></head>\"); writer.println(\" <body>\"); writer.println(\" <h1>Secured Servlet</h1>\"); writer.println(\" <p>\"); writer.print(\" Current Principal '\"); Principal user = req.getUserPrincipal(); writer.print(user != null ? user.getName() : \"NO AUTHENTICATED USER\"); writer.print(\"'\"); writer.println(\" </p>\"); writer.println(\" </body>\"); writer.println(\"</html>\"); } } }",
"mvn package [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.015 s [INFO] Finished at: 2022-04-28T17:48:53+05:30 [INFO] ------------------------------------------------------------------------",
"mvn wildfly:deploy",
"Secured Servlet Current Principal 'NO AUTHENTICATED USER'",
"/subsystem=undertow/application-security-domain= <application_security_domain_name> :add(security-domain= <security_domain_name> )",
"/subsystem=undertow/application-security-domain=exampleApplicationSecurityDomain:add(security-domain=exampleSecurityDomain) {\"outcome\" => \"success\"}",
"<!DOCTYPE web-app PUBLIC \"-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN\" \"http://java.sun.com/dtd/web-app_2_3.dtd\" > <web-app> <!-- Define the security constraints for the application resources. Specify the URL pattern for which a challenge is --> <security-constraint> <web-resource-collection> <web-resource-name><!-- Name of the resources to protect --></web-resource-name> <url-pattern> <!-- The URL to protect --></url-pattern> </web-resource-collection> <!-- Define the role that can access the protected resource --> <auth-constraint> <role-name> <!-- Role name as defined in the security domain --></role-name> <!-- To disable authentication you can use the wildcard * To authenticate but allow any role, use the wildcard **. --> </auth-constraint> </security-constraint> <login-config> <auth-method> <!-- The authentication method to use. Can be: BASIC CLIENT-CERT DIGEST FORM SPNEGO --> </auth-method> <realm-name><!-- The name of realm to send in the challenge --></realm-name> </login-config> </web-app>",
"<!DOCTYPE web-app PUBLIC \"-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN\" \"http://java.sun.com/dtd/web-app_2_3.dtd\" > <web-app> <!-- Define the security constraints for the application resources. Specify the URL pattern for which a challenge is --> <security-constraint> <web-resource-collection> <web-resource-name>all</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <!-- Define the role that can access the protected resource --> <auth-constraint> <role-name>Admin</role-name> <!-- To disable authentication you can use the wildcard * To authenticate but allow any role, use the wildcard **. --> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> <realm-name>exampleSecurityRealm</realm-name> </login-config> </web-app>",
"<jboss-web> <security-domain> <!-- The security domain to associate with the application --></security-domain> </jboss-web>",
"<jboss-web> <security-domain>exampleApplicationSecurityDomain</security-domain> </jboss-web>",
"/subsystem=undertow:write-attribute(name=default-security-domain,value= <application_security_domain_to_use> )",
"/subsystem=undertow:write-attribute(name=default-security-domain,value=exampleApplicationSecurityDomain) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"reload",
"mvn package [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.015 s [INFO] Finished at: 2022-04-28T17:48:53+05:30 [INFO] ------------------------------------------------------------------------",
"mvn wildfly:deploy"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/securing_applications_and_management_interfaces_using_an_identity_store/securing_management_interfaces_and_applications |
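Once an application-security-domain is in place as shown above, the servlet container also exposes the caller's role information. The following sketch is an illustrative extension of the quickstart's SecuredServlet rather than part of the guide; it checks for the Admin role used in the example web.xml, and the servlet class name and URL pattern are invented for this example.

package com.example.app;

import java.io.IOException;
import java.io.PrintWriter;
import java.security.Principal;

import jakarta.servlet.ServletException;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

/**
 * Reports the caller principal and whether the caller holds the "Admin"
 * role declared in the example web.xml. Illustrative only.
 */
@WebServlet("/role-check")
public class RoleCheckServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try (PrintWriter writer = resp.getWriter()) {
            Principal user = req.getUserPrincipal();
            String name = user != null ? user.getName() : "NO AUTHENTICATED USER";
            // isUserInRole consults the roles that the security domain assigned to the caller
            boolean admin = req.isUserInRole("Admin");
            writer.println("<html><body>");
            writer.println("<p>Principal: " + name + "</p>");
            writer.println("<p>Has Admin role: " + admin + "</p>");
            writer.println("</body></html>");
        }
    }
}

Whether isUserInRole returns true depends on how the configured security domain decodes roles from the underlying realm, so treat this as a starting point rather than a definitive check.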
Chapter 79. Log | Chapter 79. Log Only producer is supported The Log component logs message exchanges to the underlying logging mechanism. Camel uses SLF4J, which allows you to configure logging via, among others: Log4j Logback Java Util Logging 79.1. Dependencies When using log with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-log-starter</artifactId> </dependency> 79.2. URI format Where loggingCategory is the name of the logging category to use. You can append query options to the URI in the following format, ?option=value&option=value&... Note Using Logger instance from the Registry If there's a single instance of org.slf4j.Logger found in the Registry, the loggingCategory is no longer used to create the logger instance. The registered instance is used instead. It is also possible to reference a particular Logger instance by using the ?logger=#myLogger URI parameter. Finally, if there is neither a registered Logger nor a logger URI parameter, the logger instance is created using loggingCategory . For example, a log endpoint typically specifies the logging level using the level option, as follows: The default logger logs every exchange ( regular logging ). But Camel also ships with the Throughput logger, which is used whenever the groupSize option is specified. Note Also a log in the DSL There is also a log directly in the DSL, but it has a different purpose. It's meant for lightweight, human-readable logs. See more details at LogEIP. 79.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 79.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 79.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 79.4. Component Options The Log component supports 3 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean exchangeFormatter (advanced) Autowired Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. ExchangeFormatter 79.5. Endpoint Options The Log endpoint is configured using URI syntax: with the following path and query parameters: 79.5.1. Path Parameters (1 parameters) Name Description Default Type loggerName (producer) Required Name of the logging category to use. String 79.5.2. Query Parameters (27 parameters) Name Description Default Type groupActiveOnly (producer) If true, will hide stats when no new messages have been received for a time interval, if false, show stats regardless of message traffic. true Boolean groupDelay (producer) Set the initial delay for stats (in millis). Long groupInterval (producer) If specified will group message stats by this time interval (in millis). Long groupSize (producer) An integer that specifies a group size for throughput logging. Integer lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean level (producer) Logging level to use. The default value is INFO. Enum values: TRACE DEBUG INFO WARN ERROR OFF INFO String logMask (producer) If true, mask sensitive information like password or passphrase in the log. Boolean marker (producer) An optional Marker name to use. String exchangeFormatter (advanced) To use a custom exchange formatter. ExchangeFormatter maxChars (formatting) Limits the number of characters logged per line. 10000 int multiline (formatting) If enabled then each information is outputted on a newline. false boolean showAll (formatting) Quick option for turning all options on. (multiline, maxChars has to be manually set if to be used). false boolean showAllProperties (formatting) Show all of the exchange properties (both internal and custom). false boolean showBody (formatting) Show the message body. true boolean showBodyType (formatting) Show the body Java type. true boolean showCaughtException (formatting) If the exchange has a caught exception, show the exception message (no stack trace). A caught exception is stored as a property on the exchange (using the key org.apache.camel.Exchange#EXCEPTION_CAUGHT) and for instance a doCatch can catch exceptions. false boolean showException (formatting) If the exchange has an exception, show the exception message (no stacktrace). false boolean showExchangeId (formatting) Show the unique exchange ID. 
false boolean showExchangePattern (formatting) Shows the Message Exchange Pattern (or MEP for short). true boolean showFiles (formatting) If enabled Camel will output files. false boolean showFuture (formatting) If enabled Camel will on Future objects wait for it to complete to obtain the payload to be logged. false boolean showHeaders (formatting) Show the message headers. false boolean showProperties (formatting) Show the exchange properties (only custom). Use showAllProperties to show both internal and custom properties. false boolean showStackTrace (formatting) Show the stack trace, if an exchange has an exception. Only effective if one of showAll, showException or showCaughtException are enabled. false boolean showStreams (formatting) Whether Camel should show stream bodies or not (eg such as java.io.InputStream). Beware if you enable this option then you may not be able later to access the message body as the stream have already been read by this logger. To remedy this you will have to use Stream Caching. false boolean skipBodyLineSeparator (formatting) Whether to skip line separators when logging the message body. This allows to log the message body in one line, setting this option to false will preserve any line separators from the body, which then will log the body as is. true boolean style (formatting) Sets the outputs style to use. Enum values: Default Tab Fixed Default OutputStyle 79.6. Regular logger sample In the route below we log the incoming orders at DEBUG level before the order is processed: from("activemq:orders").to("log:com.mycompany.order?level=DEBUG").to("bean:processOrder"); Or using Spring XML to define the route: <route> <from uri="activemq:orders"/> <to uri="log:com.mycompany.order?level=DEBUG"/> <to uri="bean:processOrder"/> </route> 79.7. Regular logger with formatter sample In the route below we log the incoming orders at INFO level before the order is processed. from("activemq:orders"). to("log:com.mycompany.order?showAll=true&multiline=true").to("bean:processOrder"); 79.8. Throughput logger with groupSize sample In the route below we log the throughput of the incoming orders at DEBUG level grouped by 10 messages. from("activemq:orders"). to("log:com.mycompany.order?level=DEBUG&groupSize=10").to("bean:processOrder"); 79.9. Throughput logger with groupInterval sample This route will result in message stats logged every 10s, with an initial 60s delay and stats should be displayed even if there isn't any message traffic. from("activemq:orders"). to("log:com.mycompany.order?level=DEBUG&groupInterval=10000&groupDelay=60000&groupActiveOnly=false").to("bean:processOrder"); The following will be logged: 79.10. Masking sensitive information like password You can enable security masking for logging by setting logMask flag to true . Note that this option also affects Log EIP. To enable mask in Java DSL at CamelContext level: camelContext.setLogMask(true); And in XML: <camelContext logMask="true"> You can also turn it on|off at endpoint level. To enable mask in Java DSL at endpoint level, add logMask=true option in the URI for the log endpoint: from("direct:start").to("log:foo?logMask=true"); And in XML: <route> <from uri="direct:foo"/> <to uri="log:foo?logMask=true"/> </route> org.apache.camel.support.processor.DefaultMaskingFormatter is used for the masking by default. If you want to use a custom masking formatter, put it into registry with the name CamelCustomLogMask . Note that the masking formatter must implement org.apache.camel.spi.MaskingFormatter . 79.11. 
Full customization of the logging output With the options outlined in the section, you can control much of the output of the logger. However, log lines will always follow this structure: This format is unsuitable in some cases, perhaps because you need to... Filter the headers and properties that are printed, to strike a balance between insight and verbosity. Adjust the log message to whatever you deem most readable. Tailor log messages for digestion by log mining systems, e.g. Splunk. Print specific body types differently. Whenever you require absolute customization, you can create a class that implements the org.apache.camel.spi.ExchangeFormatter interface. Within the format(Exchange) method you have access to the full Exchange, so you can select and extract the precise information you need, format it in a custom manner and return it. The return value will become the final log message. You can have the Log component pick up your custom ExchangeFormatter in either of two ways: Explicitly instantiating the LogComponent in your Registry: <bean name="log" class="org.apache.camel.component.log.LogComponent"> <property name="exchangeFormatter" ref="myCustomFormatter" /> </bean> 79.11.1. Convention over configuration Simply register a bean with the name logFormatter ; the Log Component is intelligent enough to pick it up automatically. <bean name="logFormatter" class="com.xyz.MyCustomExchangeFormatter" /> Note The ExchangeFormatter gets applied to all Log endpoints within that Camel Context . If you need different ExchangeFormatters for different endpoints, just instantiate the LogComponent as many times as needed, and use the relevant bean name as the endpoint prefix. When using a custom log formatter, you can specify parameters in the log uri, which gets configured on the custom log formatter. Though when you do that, you should define the "logFormatter" as prototype scoped so it's not shared if you have different parameters, for example, <bean name="logFormatter" class="com.xyz.MyCustomExchangeFormatter" scope="prototype"/> And then we can have Camel routes using the log uri with different options: <to uri="log:foo?param1=foo&param2=100"/> <to uri="log:bar?param1=bar&param2=200"/> 79.12. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.log.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.log.enabled Whether to enable auto configuration of the log component. This is enabled by default. Boolean camel.component.log.exchange-formatter Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is an org.apache.camel.spi.ExchangeFormatter type. ExchangeFormatter camel.component.log.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-log-starter</artifactId> </dependency>",
"log:loggingCategory[?options]",
"log:org.apache.camel.example?level=DEBUG",
"log:loggerName",
"from(\"activemq:orders\").to(\"log:com.mycompany.order?level=DEBUG\").to(\"bean:processOrder\");",
"<route> <from uri=\"activemq:orders\"/> <to uri=\"log:com.mycompany.order?level=DEBUG\"/> <to uri=\"bean:processOrder\"/> </route>",
"from(\"activemq:orders\"). to(\"log:com.mycompany.order?showAll=true&multiline=true\").to(\"bean:processOrder\");",
"from(\"activemq:orders\"). to(\"log:com.mycompany.order?level=DEBUG&groupSize=10\").to(\"bean:processOrder\");",
"from(\"activemq:orders\"). to(\"log:com.mycompany.order?level=DEBUG&groupInterval=10000&groupDelay=60000&groupActiveOnly=false\").to(\"bean:processOrder\");",
"\"Received: 1000 new messages, with total 2000 so far. Last group took: 10000 millis which is: 100 messages per second. average: 100\"",
"camelContext.setLogMask(true);",
"<camelContext logMask=\"true\">",
"from(\"direct:start\").to(\"log:foo?logMask=true\");",
"<route> <from uri=\"direct:foo\"/> <to uri=\"log:foo?logMask=true\"/> </route>",
"Exchange[Id:ID-machine-local-50656-1234567901234-1-2, ExchangePattern:InOut, Properties:{CamelToEndpoint=log://org.apache.camel.component.log.TEST?showAll=true, CamelCreatedTimestamp=Thu Mar 28 00:00:00 WET 2013}, Headers:{breadcrumbId=ID-machine-local-50656-1234567901234-1-1}, BodyType:String, Body:Hello World, Out: null]",
"<bean name=\"log\" class=\"org.apache.camel.component.log.LogComponent\"> <property name=\"exchangeFormatter\" ref=\"myCustomFormatter\" /> </bean>",
"<bean name=\"logFormatter\" class=\"com.xyz.MyCustomExchangeFormatter\" />",
"<bean name=\"logFormatter\" class=\"com.xyz.MyCustomExchangeFormatter\" scope=\"prototype\"/>",
"<to uri=\"log:foo?param1=foo&param2=100\"/> <to uri=\"log:bar?param1=bar&param2=200\"/>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-log-component-starter |
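Section 79.11 above leaves the body of the custom formatter to the reader. A minimal sketch of what the com.xyz.MyCustomExchangeFormatter bean could look like follows, assuming the org.apache.camel.spi.ExchangeFormatter contract referenced in the options tables; the fields chosen for the log line are an assumption, not the DefaultExchangeFormatter output.

package com.xyz;

import org.apache.camel.Exchange;
import org.apache.camel.spi.ExchangeFormatter;

/**
 * Possible implementation of the logFormatter bean described in section 79.11.1.
 * Emits one compact line per exchange with only the details we care about.
 */
public class MyCustomExchangeFormatter implements ExchangeFormatter {

    @Override
    public String format(Exchange exchange) {
        Object body = exchange.getMessage().getBody();
        // The returned String becomes the final log message for the log endpoint
        return "exchangeId=" + exchange.getExchangeId()
                + ", bodyType=" + (body != null ? body.getClass().getSimpleName() : "null")
                + ", body=" + body;
    }
}

Registered under the logFormatter bean name as shown in the row above, such a formatter replaces the default exchange formatting for every log endpoint in that Camel context.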
7.11. Creating a Virtual Machine Based on a Template | 7.11. Creating a Virtual Machine Based on a Template Create a virtual machine from a template to enable the virtual machines to be pre-configured with an operating system, network interfaces, applications and other resources. Note Virtual machines created from a template depend on that template. So you cannot remove a template from the Manager if a virtual machine was created from that template. However, you can clone a virtual machine from a template to remove the dependency on that template. Note If the BIOS type of the virtual machine differs from the BIOS type of the template, the Manager might change devices in the virtual machine, possibly preventing the operating system from booting. For example, if the template uses IDE disks and the i440fx chipset, changing the BIOS type to the Q35 chipset automatically changes the IDE disks to SATA disks. So configure the chipset and BIOS type to match the chipset and BIOS type of the template. Creating a Virtual Machine Based on a Template Click Compute Virtual Machines . Click New . Select the Cluster on which the virtual machine will run. Select a template from the Template list. Enter a Name , Description , and any Comments , and accept the default values inherited from the template in the rest of the fields. You can change them if needed. Click the Resource Allocation tab. Select the Thin or Clone radio button in the Storage Allocation area. If you select Thin , the disk format is QCOW2. If you select Clone , select either QCOW2 or Raw for disk format. Use the Target drop-down list to select the storage domain on which the virtual machine's virtual disk will be stored. Click OK . The virtual machine is displayed in the Virtual Machines tab. Additional Resources Creating a cloned virtual machine based on a template UEFI and the Q35 chipset in the Administration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/creating_a_virtual_machine_based_on_a_template |
Chapter 2. Installing JBoss Web Server on Red Hat Enterprise Linux from archive files | Chapter 2. Installing JBoss Web Server on Red Hat Enterprise Linux from archive files You can install JBoss Web Server on Red Hat Enterprise Linux (RHEL) from archive files or RPM packages. If you want to install JBoss Web Server from archive files, you can download and extract the JBoss Web Server archive files from the Red Hat Customer Portal . When you install JBoss Web Server from an archive file, you can manage the product in different ways. For example, you can use a system daemon at system startup or manage JBoss Web Server from a command line. Note You can install JBoss Web Server on RHEL versions 8 and 9. Red Hat does not provide a distribution of JBoss Web Server 6.x for RHEL 7 systems. 2.1. Prerequisites You have installed a supported Java Development Kit (JDK) by using the DNF package manager or from a compressed archive. Your system is compliant with Red Hat Enterprise Linux package requirements. 2.1.1. Installing a JDK by using the DNF package manager You can use the DNF package manager to install a Java Development Kit (JDK). For a full list of supported JDKs, see JBoss Web Server operating systems and configurations . Note This procedure describes how to install OpenJDK. If you want to install the Oracle JDK, see the Oracle documentation for more information. Procedure Subscribe your Red Hat Enterprise Linux system to the appropriate channel: rhel-8-server-rpms rhel-9-server-rpms To install a supported JDK version, enter the following command as the root user: In the preceding command, replace java- <version> with java-11 or java-17 . Note JBoss Web Server 6.x does not support OpenJDK 8. To ensure the correct JDK is in use, enter the following command as the root user: The preceding command returns a list of available JDK versions with the selected version marked with a plus ( + ) sign. If the selected JDK is not the desired one, change to the desired JDK as instructed in the shell prompt. Important All software that uses the java command uses the JDK set by alternatives . Changing Java alternatives might impact on the running of other software. 2.1.2. Installing a JDK from a compressed archive You can install a Java Development Kit (JDK) from a compressed archive such as a .zip or .tar file. For a full list of supported JDKs, see JBoss Web Server operating systems and configurations . Procedure If you downloaded the JDK from the vendor's website (Oracle or OpenJDK), use the installation instructions provided by the vendor and set the JAVA_HOME environment variable. If you installed the JDK from a compressed archive, set the JAVA_HOME environment variable for Tomcat: In the bin directory of Tomcat ( JWS_HOME /tomcat/bin ), create a file named setenv.sh . In the setenv.sh file, enter the JAVA_HOME path definition. For example: In the preceding example, replace jre- <version> with jre-11 or jre-17 . 2.1.3. Red Hat Enterprise Linux package requirements Before you install JBoss Web Server on Red Hat Enterprise Linux, you must ensure that your system is compliant with the following package requirements. On Red Hat Enterprise Linux version 8 or 9, if you want to use OpenSSL or Apache Portable Runtime (APR), you must install the openssl and apr packages that Red Hat Enterprise Linux provides. 
To install the openssl package, enter the following command as the root user: To install the apr package, enter the following command as the root user: You must remove the tomcatjss package before you install the tomcat-native package. The tomcatjss package uses an underlying Network Security Services (NSS) security model rather than the OpenSSL security model. To remove the tomcatjss package, enter the following command as the root user: 2.2. Downloading and extracting archive files for a base release of JBoss Web Server A base release is the initial release of a specific product version (for example, 6.0.0 is the base release of version 6.0). You can download the JBoss Web Server archive files from the Software Downloads page on the Red Hat Customer Portal. Prerequisites You have installed a supported Java Development Kit (JDK) by using the DNF package manager or from a compressed archive . Your system is compliant with Red Hat Enterprise Linux package requirements . Procedure Open a browser and log in to the Red Hat Customer Portal . Click the Downloads tab. From the Product Downloads list, select Red Hat JBoss Web Server . On the Software Downloads page, from the Version drop-down list, select the appropriate JBoss Web Server version. Click Download to the Red Hat JBoss Web Server 6.0.0 Application Server file. The downloaded file is named jws-6.0.0-application-server.zip on your local host. If you also want to download the native JBoss Web Server components for your operating system, click Download to the Red Hat JBoss Web Server 6.0.0 Optional Native Components for <platform> <architecture> file. In this situation, ensure that you select the correct file that matches the platform and architecture for your system. The downloaded file is named jws-6.0.0-optional-native-components- <platform> - <architecture> .zip (for example, jws-6.0.0-optional-native-components-RHEL8-x86_64.zip ). Extract the downloaded archive files to your installation directory. For example: The top-level directory for JBoss Web Server is created when you extract the archive. This document refers to the top-level directory for JBoss Web Server as JWS_HOME . 2.3. Downloading and extracting archive files for JBoss Web Server patch updates If product patch updates are available for the appropriate JBoss Web Server version, you can install the archive files for the latest cumulative patches. You can download the JBoss Web Server archive files from the Software Downloads page on the Red Hat Customer Portal. Important You cannot use cumulative patch updates to install the base ( X.X .0) release of a product version. For example, the installation of a 6.0.2 patch would install the 6.0.1 and 6.0.2 releases but cannot install the base 6.0.0 release. Service pack releases are cumulative. By downloading the latest service pack release, you also install any service pack releases automatically. Prerequisites You have downloaded and extracted the archive files for the base JBoss Web Server release . Procedure Open a browser and log in to the Red Hat Customer Portal . Click the Downloads tab. From the Product Downloads list, select Red Hat JBoss Web Server . On the Software Downloads page, from the Version drop-down list, select the appropriate JBoss Web Server version. Click the Patches tab. Click Download to the latest Red Hat JBoss Web Server 6.0 Update XX Application Server file. The downloaded file is named jws-6.0. x -application-server.zip on your local host. 
If you also want to download the native JBoss Web Server components for your operating system, click Download to the latest Red Hat JBoss Web Server 6.0 Update XX Optional Native Components for <platform> <architecture> file. In this situation, ensure that you select the correct file that matches the platform and architecture for your system. The downloaded file is named jws-6.0. x -optional-native-components- <platform> - <architecture> .zip (for example, jws-6.0. x -optional-native-components-RHEL8-x86_64.zip ). Extract the downloaded archive files to your installation directory. For example: 2.4. Managing JBoss Web Server by using systemd when installed from an archive file When you install JBoss Web Server from an archive file on Red Hat Enterprise Linux, you can use a system daemon to perform management tasks. Using the JBoss Web Server with a system daemon provides a method of starting the JBoss Web Server services at system startup. The system daemon also provides start, stop and status check functions. On Red Hat Enterprise Linux versions 8 and 9, the default system daemon is systemd . Prerequisites You have installed JBoss Web Server from an archive file . Procedure To determine which system daemon is running, enter the following command: If systemd is running, the following output is displayed: To set up the JBoss Web Server for systemd , run the .postinstall.systemd script as the root user: To control the JBoss Web Server with systemd , you can perform any of the following steps as the root user: To enable the JBoss Web Server services to start at system startup by using systemd : To start the JBoss Web Server by using systemd : Note The SECURITY_MANAGER variable is now deprecated for JBoss Web Server configurations that are based on archive file installations. Consider the following deprecation comment: To stop the JBoss Web Server by using systemd : To verify the status of the JBoss Web Server by using systemd : Note Any user can run the status operation. Additional resources RHEL 8: Configuring basic system settings: Managing system services with systemctl RHEL 9: Configuring basic system settings: Managing system services with systemctl 2.5. JBoss Web Server configuration for managing archive installations from the command line When you install JBoss Web Server from an archive file on Red Hat Enterprise Linux, you can start and stop JBoss Web Server directly from the command line. Before you can run JBoss Web Server from the command line, you must perform the following series of configuration tasks: Set the JAVA_HOME environment variable for Tomcat. Create a tomcat user and its parent group. Grant the tomcat user access to JBoss Web Server. Note When you manage JBoss Web Server by using a system daemon rather than from the command line, the .postinstall.systemd script performs these configuration steps automatically. 2.5.1. Setting the JAVA_HOME environment variable for Apache Tomcat Before you run JBoss Web Server from the command line for the first time, you must set the JAVA_HOME environment variable for Apache Tomcat. Prerequisites You have installed JBoss Web Server from an archive file . Procedure On a command line, go to the JWS_HOME /tomcat/bin directory. Create a file named setenv.sh . In the setenv.sh file, enter the JAVA_HOME path definition. For example: 2.5.2. Creating a Tomcat user and group Before you run JBoss Web Server from the command line for the first time, you must create a tomcat user account and user group to enable simple and secure user management. 
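Before creating the account, you can optionally confirm that the reserved UID and GID used by the procedure that follows (53, as explained next) are not already taken. This is a read-only sketch; getent is part of the standard glibc tooling on RHEL, so no extra packages are assumed.
# Optional check: verify that UID 53 and GID 53 are free (no output means they are unused)
getent passwd 53
getent group 53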
On Red Hat Enterprise Linux, the user identifer (UID) for the tomcat user and the group identifier (GID) for the tomcat group both have a reserved value of 53 . Note You must perform all steps in this procedure as the root user. Prerequisites You have set the JAVA_HOME environment variable for Tomcat . Procedure On a command line, go to the JWS_HOME directory. Create the tomcat user group: Create the tomcat user in the tomcat user group: The preceding commands set both the UID and the GID to 53 . If you subsequently want to change the UID and GID values, see Changing the UID and GID for the tomcat user and group . 2.5.3. Granting the Tomcat user access to JBoss Web Server Before you run JBoss Web Server from the command line for the first time, you must grant the tomcat user access to JBoss Web Server by assigning ownership of the Tomcat directories to the tomcat user. Note You must perform all steps in this procedure as the root user. Prerequisites You have created a tomcat user and its parent group . Procedure Go to the JWS_HOME directory. Assign ownership of the Tomcat directories to the tomcat user: Ensure that the tomcat user has execute permissions for all parent directories: Verification Verify that the tomcat user is the owner of the directory: 2.6. Starting JBoss Web Server from the command line when installed from an archive file When you install JBoss Web Server from an archive file on Red Hat Enterprise Linux, you can start JBoss Web Server directly from the command line. Prerequisites You have set the JAVA_HOME environment variable for Tomcat . You have created a tomcat user and its parent group . You have granted the tomcat user access to JBoss Web Server . Procedure Enter the following command as the tomcat user: 2.7. Stopping JBoss Web Server from the command line when installed from an archive file When you install JBoss Web Server from an archive file on Red Hat Enterprise Linux, you can stop JBoss Web Server directly from the command line. Prerequisites You have started JBoss Web Server from the command line . Procedure Enter the following command as the tomcat user: 2.8. SELinux policies for JBoss Web Server You can use Security-Enhanced Linux (SELinux) policies to define access controls for JBoss Web Server. These policies are a set of rules that determine access rights to the product. 2.8.1. SELinux policy information for jws6-tomcat The SELinux security model is enforced by the kernel and ensures that applications have limited access to resources such as file system locations and ports. SELinux policies ensure that any errant processes that are compromised or poorly configured are restricted or prevented from running. The jws6-tomcat-selinux packages in your JBoss Web Server installation provide a jws6_tomcat policy. The following table contains information about the supplied SELinux policy. Table 2.1. RPMs and default SELinux policies Name Port Information Policy Information jws6_tomcat Four ports in http_port_t (TCP ports 8080 , 8005 , 8009 , and 8443 ) to allow the tomcat process to use them The jws6_tomcat policy is installed, which sets the appropriate SELinux domain for the process when Tomcat executes. It also sets the appropriate contexts to allow Tomcat to write to the following directories: /var/opt/rh/jws6/lib/tomcat /var/opt/rh/jws6/log/tomcat /var/opt/rh/jws6/cache/tomcat /var/opt/rh/jws6/run/tomcat.pid Additional resources RHEL 8: Using SELinux RHEL 9: Using SELinux 2.8.2. 
Installing SELinux policies for a JBoss Web Server archive installation In this release, the archive packages provide SELinux policies. The tomcat folder of the jws-6.0.0-application-server- <platform> - <architecture> .zip archive includes the .postinstall.selinux file. If required, you can run the .postinstall.selinux script. Procedure Install the selinux-policy-devel package: Run the .postinstall.selinux script: Add access permissions to the required ports for JBoss Web Server: Note The JBoss Web Server has access to ports 8080 , 8009 , 8443 and 8005 on Red Hat Enterprise Linux systems. When additional ports are required for JBoss Web Server, use the preceding semanage command to provide the necessary permissions, and replace <port> with the required port. Start Tomcat: Check the context of the running process expecting jws6_tomcat : Verify the contexts of the Tomcat directories. For example: Note By default, the SElinux policy that JBoss Web Server provides is not active and the Tomcat processes run in the unconfined_java_t domain. This domain does not confine the processes. If you choose not to enable the SELinux policy that is provided, you can take the following security measures: Restrict file access for the tomcat user, so that the tomcat user only has access to the files and directories that are necessary for the JBoss Web Server runtime. Do not run Tomcat as the root user. Note When JBoss Web Server is installed from an archive file, Red Hat does not officially support the use of network file sharing (NFS). If you want your JBoss Web Server installation to use an NFS-mounted file system, you are responsible for ensuring that SELinux policies are modified correctly to support this type of deployment. 2.9. Changing the UID and GID for the tomcat user and group On Red Hat Enterprise Linux, the user identifer (UID) for the tomcat user and the group identifier (GID) for the tomcat group both have a reserved value of 53 . Depending on your setup requirements, you can change the UID and GID for the tomcat user and group to some other value. Warning To avoid SELinux conflicts, use UID and GID values that are less than 500. If SELinux is set to enforcing mode, UID and GID values greater than 500 might cause unexpected issues. Prerequisites You have created a tomcat user account and group . Procedure If JBoss Web Server is already running, stop JBoss Web Server as the tomcat user. For more information, see Stopping JBoss Web Server from the command line when installed from an archive file . To view the current UID and GID for the tomcat user and group, enter the following command as the root user: The preceding command displays the user account and group details. For example: To assign a new GID to the tomcat group, enter the following command as the root user: For example: To assign a new UID to the tomcat user, enter the following command as the root user: For example: To reassign file and directory permissions to the new UID, enter the following command as the root user: In the preceding command, replace <original_uid> with the old UID and replace <new_uid> with the new UID. For example, to reassign file and directory permissions from UID 53 to UID 401 , enter the following command: To reassign file and directory permissions to the new GID, enter the following command as the root user: In the preceding command, replace <original_gid> with the old GID and replace <new_gid> with the new GID. 
For example, to reassign file and directory permissions from GID 53 to GID 410, enter the following command: To restart JBoss Web Server as the tomcat user, see Starting JBoss Web Server from the command line when installed from an archive file. Additional resources What are the reserved UIDs/GIDs in Red Hat Enterprise Linux? | [
"dnf install java-<version>-openjdk-headless",
"alternatives --config java",
"cat JWS_HOME /tomcat/bin/setenv.sh export JAVA_HOME=/usr/lib/jvm/jre- <version> -openjdk.x86_64",
"dnf install openssl",
"dnf install apr",
"dnf remove tomcatjss",
"unzip jws-6.0.0-application-server.zip -d /opt/ unzip -o jws-6.0.0-optional-native-compoonents- <platform> - <architecture> .zip -d /opt/",
"unzip jws-6.0. x -application-server.zip -d /opt/ unzip -o jws-6.0. x -optional-native-compoonents- <platform> - <architecture> .zip -d /opt/",
"ps -p 1 -o comm=",
"systemd",
"cd JWS_HOME /tomcat sh .postinstall.systemd",
"systemctl enable jws6-tomcat.service",
"systemctl start jws6-tomcat.service",
"SECURITY_MANAGER has been deprecated. To run tomcat under the Java Security Manager use: JAVA_OPTS=\"-Djava.security.manager -Djava.security.policy==\\\"USDCATALINA_BASE/conf/\"catalina.policy\\\"\"\"",
"systemctl stop jws6-tomcat.service",
"systemctl status jws6-tomcat.service",
"export JAVA_HOME=/usr/lib/jvm/jre-11-openjdk.x86_64",
"groupadd -g 53 -r tomcat",
"useradd -c \"tomcat\" -u 53 -g tomcat -s /sbin/nologin -r tomcat",
"chown -R tomcat:tomcat tomcat/",
"chmod -R u+X tomcat/",
"ls -l",
"sh JWS_HOME /tomcat/bin/startup.sh",
"sh JWS_HOME /tomcat/bin/shutdown.sh",
"dnf install -y selinux-policy-devel",
"cd <JWS_home> /tomcat/ sh .postinstall.selinux",
"semanage port -a -t http_port_t -p tcp <port>",
"<JWS_home> /tomcat/bin/startup.sh",
"ps -eo pid,user,label,args | grep jws6_tomcat | head -n1",
"ls -lZ <JWS_home> /tomcat/logs/",
"id tomcat",
"uid=53(tomcat) gid=53(tomcat) groups=53(tomcat)",
"groupmod -g <new_gid> tomcat",
"groupmod -g 410 tomcat",
"usermod -u <new_uid> -g <new_gid> tomcat",
"usermod -u 401 -g 410 tomcat",
"find / -not -path '/proc*' -uid <original_uid> | perl -e 'USDug = @ARGV[0]; foreach USDfn (<STDIN>) { chomp(USDfn);USDm = (stat(USDfn))[2];chown(USDug,-1,USDfn);chmod(USDm,USDfn)}' <new_uid>",
"find / -not -path '/proc*' -uid 53 | perl -e 'USDug = @ARGV[0]; foreach USDfn (<STDIN>) { chomp(USDfn);USDm = (stat(USDfn))[2];chown(USDug,-1,USDfn);chmod(USDm,USDfn)}' 401",
"find / -not -path '/proc*' -gid <original_gid> | perl -e 'USDug = @ARGV[0]; foreach USDfn (<STDIN>) { chomp(USDfn);USDm = (stat(USDfn))[2];chown(-1,USDug,USDfn);chmod(USDm,USDfn)}' <new_gid>",
"find / -not -path '/proc*' -gid 53 | perl -e 'USDug = @ARGV[0]; foreach USDfn (<STDIN>) { chomp(USDfn);USDm = (stat(USDfn))[2];chown(-1,USDug,USDfn);chmod(USDm,USDfn)}' 410"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/installation_guide/assembly_installing-jws-on-rhel-from-archive-files_jboss_web_server_installation_guide |
Chapter 5. Adding additional Certificate Authorities to the Red Hat Quay container | Chapter 5. Adding additional Certificate Authorities to the Red Hat Quay container The extra_ca_certs directory is the directory where additional Certificate Authorities (CAs) can be stored to extend the set of trusted certificates. These certificates are used by Red Hat Quay to verify SSL/TLS connections with external services. When deploying Red Hat Quay, you can place the necessary CAs in this directory to ensure that connections to services like LDAP, OIDC, and storage systems are properly secured and validated. For standalone Red Hat Quay deployments, you must create this directory and copy the additional CA certificates into that directory. Prerequisites You have a CA for the desired service. Procedure View the certificate to be added to the container by entering the following command: USD cat storage.crt Example output -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV... -----END CERTIFICATE----- Create the extra_ca_certs in the /config folder of your Red Hat Quay directory by entering the following command: USD mkdir -p /path/to/quay_config_folder/extra_ca_certs Copy the CA file to the extra_ca_certs folder. For example: USD cp storage.crt /path/to/quay_config_folder/extra_ca_certs/ Ensure that the storage.crt file exists within the extra_ca_certs folder by entering the following command: USD tree /path/to/quay_config_folder/extra_ca_certs Example output /path/to/quay_config_folder/extra_ca_certs ├── storage.crt---- Obtain the CONTAINER ID of your Quay consider by entering the following command: USD podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 5a3e82c4a75f <registry>/<repo>/quay:{productminv} "/sbin/my_init" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp grave_keller Restart the container by entering the following command USD podman restart 5a3e82c4a75f Confirm that the certificate was copied into the container namespace by running the following command: USD podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem Example output -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV... -----END CERTIFICATE----- 5.1. Adding custom SSL/TLS certificates when Red Hat Quay is deployed on Kubernetes When deployed on Kubernetes, Red Hat Quay mounts in a secret as a volume to store config assets. Currently, this breaks the upload certificate function of the superuser panel. As a temporary workaround, base64 encoded certificates can be added to the secret after Red Hat Quay has been deployed. Use the following procedure to add custom SSL/TLS certificates when Red Hat Quay is deployed on Kubernetes. Prerequisites Red Hat Quay has been deployed. You have a custom ca.crt file. Procedure Base64 encode the contents of an SSL/TLS certificate by entering the following command: USD cat ca.crt | base64 -w 0 Example output ...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= Enter the following kubectl command to edit the quay-enterprise-config-secret file: USD kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret Add an entry for the certificate and paste the full base64 encoded stringer under the entry. For example: custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= Use the kubectl delete command to remove all Red Hat Quay pods. 
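To obtain the exact pod names to pass to the delete command, you can list the pods first. This is a sketch that assumes the quay-enterprise namespace used earlier in this chapter; substitute your own namespace if it differs.
# List the Red Hat Quay pods to obtain the names used in the delete command shown next
kubectl --namespace quay-enterprise get pods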
For example: USD kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms Afterwards, the Red Hat Quay deployment automatically schedules replacement pods with the new certificate data. | [
"cat storage.crt",
"-----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV -----END CERTIFICATE-----",
"mkdir -p /path/to/quay_config_folder/extra_ca_certs",
"cp storage.crt /path/to/quay_config_folder/extra_ca_certs/",
"tree /path/to/quay_config_folder/extra_ca_certs",
"/path/to/quay_config_folder/extra_ca_certs ├── storage.crt----",
"podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 5a3e82c4a75f <registry>/<repo>/quay:{productminv} \"/sbin/my_init\" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp grave_keller",
"podman restart 5a3e82c4a75f",
"podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem",
"-----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV -----END CERTIFICATE-----",
"cat ca.crt | base64 -w 0",
"...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
"kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret",
"custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
"kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/config-extra-ca-certs-standalone |
7.4. RHBA-2014:1577 - new packages: glib-networking | 7.4. RHBA-2014:1577 - new packages: glib-networking New glib-networking packages are now available for Red Hat Enterprise Linux 6. The glib-networking packages provide modules that extend the networking support in Glib. In particular, the packages contain a libproxy-based implementation of the GProxyResolver class type and a gnutls-based implementation of the GTlsConnection class type. This enhancement update adds the glib-networking packages to Red Hat Enterprise Linux 6. (BZ# 1101418 , BZ# 1119162 ) The glib-networking packages are installed automatically as a dependency of the libsoup packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhba-2014-1577 |
Chapter 4. Knative CLI for use with OpenShift Serverless | Chapter 4. Knative CLI for use with OpenShift Serverless The Knative ( kn ) CLI enables simple interaction with Knative components on OpenShift Dedicated. 4.1. Key features The Knative ( kn ) CLI is designed to make serverless computing tasks simple and concise. Key features of the Knative CLI include: Deploy serverless applications from the command line. Manage features of Knative Serving, such as services, revisions, and traffic-splitting. Create and manage Knative Eventing components, such as event sources and triggers. Create sink bindings to connect existing Kubernetes applications and Knative services. Extend the Knative CLI with flexible plugin architecture, similar to the kubectl CLI. Configure autoscaling parameters for Knative services. Scripted usage, such as waiting for the results of an operation, or deploying custom rollout and rollback strategies. 4.2. Installing the Knative CLI See Installing the Knative CLI . | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/cli_tools/kn-cli-tools |
Chapter 168. Jasypt component | Chapter 168. Jasypt component Available as of Camel 2.5 Jasypt is a simplified encryption library which makes encryption and decryption easy. Camel integrates with Jasypt to allow sensitive information in Properties files to be encrypted. By dropping camel-jasypt on the classpath those encrypted values will automatically be decrypted on-the-fly by Camel. This ensures that human eyes can't easily spot sensitive information such as usernames and passwords. If you are using Maven, you need to add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jasypt</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> If you are using an Apache Karaf container, you need to add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.karaf.jaas</groupId> <artifactId>org.apache.karaf.jaas.jasypt</artifactId> <version>x.x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 168.1. Tooling The Jasypt component provides a little command line tooling to encrypt or decrypt values. The console output the syntax and which options it provides: Apache Camel Jasypt takes the following options -h or -help = Displays the help screen -c or -command <command> = Command either encrypt or decrypt -p or -password <password> = Password to use -i or -input <input> = Text to encrypt or decrypt -a or -algorithm <algorithm> = Optional algorithm to use For example to encrypt the value tiger you run with the following parameters. In the apache camel kit, you cd into the lib folder and run the following java cmd, where <CAMEL_HOME> is where you have downloaded and extract the Camel distribution. USD cd <CAMEL_HOME>/lib USD java -jar camel-jasypt-2.5.0.jar -c encrypt -p secret -i tiger Which outputs the following result Encrypted text: qaEEacuW7BUti8LcMgyjKw== This means the encrypted representation qaEEacuW7BUti8LcMgyjKw== can be decrypted back to tiger if you know the master password which was secret . If you run the tool again then the encrypted value will return a different result. But decrypting the value will always return the correct original value. So you can test it by running the tooling using the following parameters: USD cd <CAMEL_HOME>/lib USD java -jar camel-jasypt-2.5.0.jar -c decrypt -p secret -i qaEEacuW7BUti8LcMgyjKw== Which outputs the following result: Decrypted text: tiger The idea is then to use those encrypted values in your Properties files. Notice how the password value is encrypted and the value has the tokens surrounding ENC(value here) Tip When running jasypt tooling, if you come across java.lang.NoClassDefFoundError: org/jasypt/encryption/pbe/StandardPBEStringEncryptor this means you have to include jasypt7.13.jar in your classpath. Example of adding jar to classpath may be copying jasypt7.13.jar to USDJAVA_HOME\jre\lib\ext if you are going to run as java -jar ... . The latter may be adding jasypt7.13.jar to classpath using -cp , in that case you should provide main class to execute as eg: java -cp jasypt-1.9.2.jar:camel-jasypt-2.18.2.jar org.apache.camel.component.jasypt.Main -c encrypt -p secret -i tiger 168.2. URI Options The options below are exclusive for the Jasypt component. Name Default Value Type Description password null String Specifies the master password to use for decrypting. This option is mandatory. See below for more details. 
algorithm null String Name of an optional algorithm to use. 168.3. Protecting the master password The master password used by Jasypt must be provided, so that it's capable of decrypting the values. However having this master password out in the open may not be an ideal solution. Therefore you could for example provide it as a JVM system property or as a OS environment setting. If you decide to do so then the password option supports prefixes which dictates this. sysenv: means to lookup the OS system environment with the given key. sys: means to lookup a JVM system property. For example you could provided the password before you start the application USD export CAMEL_ENCRYPTION_PASSWORD=secret Then start the application, such as running the start script. When the application is up and running you can unset the environment USD unset CAMEL_ENCRYPTION_PASSWORD The password option is then a matter of defining as follows: password=sysenv:CAMEL_ENCRYPTION_PASSWORD . 168.4. Example with Java DSL In Java DSL you need to configure Jasypt as a JasyptPropertiesParser instance and set it on the Properties component as show below: The properties file myproperties.properties then contain the encrypted value, such as shown below. Notice how the password value is encrypted and the value has the tokens surrounding ENC(value here) 168.5. Example with Spring XML In Spring XML you need to configure the JasyptPropertiesParser which is shown below. Then the Camel Properties component is told to use jasypt as the properties parser, which means Jasypt has its chance to decrypt values looked up in the properties. <!-- define the jasypt properties parser with the given password to be used --> <bean id="jasypt" class="org.apache.camel.component.jasypt.JasyptPropertiesParser"> <property name="password" value="secret"/> </bean> <!-- define the camel properties component --> <bean id="properties" class="org.apache.camel.component.properties.PropertiesComponent"> <!-- the properties file is in the classpath --> <property name="location" value="classpath:org/apache/camel/component/jasypt/myproperties.properties"/> <!-- and let it leverage the jasypt parser --> <property name="propertiesParser" ref="jasypt"/> </bean> The Properties component can also be inlined inside the <camelContext> tag which is shown below. Notice how we use the propertiesParserRef attribute to refer to Jasypt. <!-- define the jasypt properties parser with the given password to be used --> <bean id="jasypt" class="org.apache.camel.component.jasypt.JasyptPropertiesParser"> <!-- password is mandatory, you can prefix it with sysenv: or sys: to indicate it should use an OS environment or JVM system property value, so you dont have the master password defined here --> <property name="password" value="secret"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <!-- define the camel properties placeholder, and let it leverage jasypt --> <propertyPlaceholder id="properties" location="classpath:org/apache/camel/component/jasypt/myproperties.properties" propertiesParserRef="jasypt"/> <route> <from uri="direct:start"/> <to uri="{{cool.result}}"/> </route> </camelContext> 168.6. Example with Blueprint XML In Blueprint XML you need to configure the JasyptPropertiesParser which is shown below. Then the Camel Properties component is told to use jasypt as the properties parser, which means Jasypt has its chance to decrypt values looked up in the properties. 
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0" xsi:schemaLocation=" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd"> <cm:property-placeholder id="myblue" persistent-id="mypersistent"> <!-- list some properties for this test --> <cm:default-properties> <cm:property name="cool.result" value="mock:{{cool.password}}"/> <cm:property name="cool.password" value="ENC(bsW9uV37gQ0QHFu7KO03Ww==)"/> </cm:default-properties> </cm:property-placeholder> <!-- define the jasypt properties parser with the given password to be used --> <bean id="jasypt" class="org.apache.camel.component.jasypt.JasyptPropertiesParser"> <property name="password" value="secret"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <!-- define the camel properties placeholder, and let it leverage jasypt --> <propertyPlaceholder id="properties" location="blueprint:myblue" propertiesParserRef="jasypt"/> <route> <from uri="direct:start"/> <to uri="{{cool.result}}"/> </route> </camelContext> </blueprint> The Properties component can also be inlined inside the <camelContext> tag which is shown below. Notice how we use the propertiesParserRef attribute to refer to Jasypt. <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0" xsi:schemaLocation=" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd"> <!-- define the jasypt properties parser with the given password to be used --> <bean id="jasypt" class="org.apache.camel.component.jasypt.JasyptPropertiesParser"> <property name="password" value="secret"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <!-- define the camel properties placeholder, and let it leverage jasypt --> <propertyPlaceholder id="properties" location="classpath:org/apache/camel/component/jasypt/myproperties.properties" propertiesParserRef="jasypt"/> <route> <from uri="direct:start"/> <to uri="{{cool.result}}"/> </route> </camelContext> </blueprint> 168.7. See Also Security Properties Encrypted passwords in ActiveMQ - ActiveMQ has a similar feature as this camel-jasypt component | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jasypt</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"<dependency> <groupId>org.apache.karaf.jaas</groupId> <artifactId>org.apache.karaf.jaas.jasypt</artifactId> <version>x.x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"Apache Camel Jasypt takes the following options -h or -help = Displays the help screen -c or -command <command> = Command either encrypt or decrypt -p or -password <password> = Password to use -i or -input <input> = Text to encrypt or decrypt -a or -algorithm <algorithm> = Optional algorithm to use",
"cd <CAMEL_HOME>/lib java -jar camel-jasypt-2.5.0.jar -c encrypt -p secret -i tiger",
"Encrypted text: qaEEacuW7BUti8LcMgyjKw==",
"cd <CAMEL_HOME>/lib java -jar camel-jasypt-2.5.0.jar -c decrypt -p secret -i qaEEacuW7BUti8LcMgyjKw==",
"Decrypted text: tiger",
"export CAMEL_ENCRYPTION_PASSWORD=secret",
"unset CAMEL_ENCRYPTION_PASSWORD",
"<!-- define the jasypt properties parser with the given password to be used --> <bean id=\"jasypt\" class=\"org.apache.camel.component.jasypt.JasyptPropertiesParser\"> <property name=\"password\" value=\"secret\"/> </bean> <!-- define the camel properties component --> <bean id=\"properties\" class=\"org.apache.camel.component.properties.PropertiesComponent\"> <!-- the properties file is in the classpath --> <property name=\"location\" value=\"classpath:org/apache/camel/component/jasypt/myproperties.properties\"/> <!-- and let it leverage the jasypt parser --> <property name=\"propertiesParser\" ref=\"jasypt\"/> </bean>",
"<!-- define the jasypt properties parser with the given password to be used --> <bean id=\"jasypt\" class=\"org.apache.camel.component.jasypt.JasyptPropertiesParser\"> <!-- password is mandatory, you can prefix it with sysenv: or sys: to indicate it should use an OS environment or JVM system property value, so you dont have the master password defined here --> <property name=\"password\" value=\"secret\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <!-- define the camel properties placeholder, and let it leverage jasypt --> <propertyPlaceholder id=\"properties\" location=\"classpath:org/apache/camel/component/jasypt/myproperties.properties\" propertiesParserRef=\"jasypt\"/> <route> <from uri=\"direct:start\"/> <to uri=\"{{cool.result}}\"/> </route> </camelContext>",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\"> <cm:property-placeholder id=\"myblue\" persistent-id=\"mypersistent\"> <!-- list some properties for this test --> <cm:default-properties> <cm:property name=\"cool.result\" value=\"mock:{{cool.password}}\"/> <cm:property name=\"cool.password\" value=\"ENC(bsW9uV37gQ0QHFu7KO03Ww==)\"/> </cm:default-properties> </cm:property-placeholder> <!-- define the jasypt properties parser with the given password to be used --> <bean id=\"jasypt\" class=\"org.apache.camel.component.jasypt.JasyptPropertiesParser\"> <property name=\"password\" value=\"secret\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <!-- define the camel properties placeholder, and let it leverage jasypt --> <propertyPlaceholder id=\"properties\" location=\"blueprint:myblue\" propertiesParserRef=\"jasypt\"/> <route> <from uri=\"direct:start\"/> <to uri=\"{{cool.result}}\"/> </route> </camelContext> </blueprint>",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\"> <!-- define the jasypt properties parser with the given password to be used --> <bean id=\"jasypt\" class=\"org.apache.camel.component.jasypt.JasyptPropertiesParser\"> <property name=\"password\" value=\"secret\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <!-- define the camel properties placeholder, and let it leverage jasypt --> <propertyPlaceholder id=\"properties\" location=\"classpath:org/apache/camel/component/jasypt/myproperties.properties\" propertiesParserRef=\"jasypt\"/> <route> <from uri=\"direct:start\"/> <to uri=\"{{cool.result}}\"/> </route> </camelContext> </blueprint>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/jasypt_component |
Chapter 1. High Availability Add-On Overview | Chapter 1. High Availability Add-On Overview The High Availability Add-On is a clustered system that provides reliability, scalability, and availability to critical production services. The following sections provide a high-level description of the components and functions of the High Availability Add-On: Section 1.1, "Cluster Basics" Section 1.2, "High Availability Add-On Introduction" Section 1.4, "Pacemaker Architecture Components" 1.1. Cluster Basics A cluster is two or more computers (called nodes or members ) that work together to perform a task. There are four major types of clusters: Storage High availability Load balancing High performance Storage clusters provide a consistent file system image across servers in a cluster, allowing the servers to simultaneously read and write to a single shared file system. A storage cluster simplifies storage administration by limiting the installation and patching of applications to one file system. Also, with a cluster-wide file system, a storage cluster eliminates the need for redundant copies of application data and simplifies backup and disaster recovery. The High Availability Add-On provides storage clustering in conjunction with Red Hat GFS2 (part of the Resilient Storage Add-On). High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Typically, services in a high availability cluster read and write data (by means of read-write mounted file systems). Therefore, a high availability cluster must maintain data integrity as one cluster node takes over control of a service from another cluster node. Node failures in a high availability cluster are not visible from clients outside the cluster. (High availability clusters are sometimes referred to as failover clusters.) The High Availability Add-On provides high availability clustering through its High Availability Service Management component, Pacemaker . Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance the request load among the cluster nodes. Load balancing provides cost-effective scalability because you can match the number of nodes according to load requirements. If a node in a load-balancing cluster becomes inoperative, the load-balancing software detects the failure and redirects requests to other cluster nodes. Node failures in a load-balancing cluster are not visible from clients outside the cluster. Load balancing is available with the Load Balancer Add-On. High-performance clusters use cluster nodes to perform concurrent calculations. A high-performance cluster allows applications to work in parallel, therefore enhancing the performance of the applications. (High performance clusters are also referred to as computational clusters or grid computing.) Note The cluster types summarized in the preceding text reflect basic configurations; your needs might require a combination of the clusters described. Additionally, the Red Hat Enterprise Linux High Availability Add-On contains support for configuring and managing high availability servers only . It does not support high-performance clusters. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/ch-introduction-HAAO |
Chapter 6. Proof of concept deployment using SSL/TLS certificates | Chapter 6. Proof of concept deployment using SSL/TLS certificates Use the following sections to configure a proof of concept Red Hat Quay deployment with SSL/TLS certificates. 6.1. Using SSL/TLS To configure Red Hat Quay with a self-signed certificate, you must create a Certificate Authority (CA) and a primary key file named ssl.cert and ssl.key . Note The following examples assume that you have configured the server hostname quay-server.example.com using DNS or another naming mechanism, such as adding an entry in your /etc/hosts file. For more information, see "Configuring port mapping for Red Hat Quay". 6.1.1. Creating a Certificate Authority Use the following procedure to create a Certificate Authority (CA). Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com 6.1.1.1. Signing the certificate Use the following procedure to sign the certificate. Procedure Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Create a configuration file openssl.cnf , specifying the server hostname, for example: openssl.cnf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = quay-server.example.com IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf 6.2. Configuring SSL/TLS SSL/TLS can be configured using either the command-line interface (CLI) or the Red Hat Quay registry UI. Use one of the following procedures to configure SSL/TLS. 6.2.1. Configuring SSL/TLS using the Red Hat Quay UI Use the following procedure to configure SSL/TLS using the Red Hat Quay UI. To configure SSL/TLS using the command line interface, see "Configuring SSL/TLS using the command line interface". Prerequisites You have created a certificate authority and signed a certificate. Procedure Start the Quay container in configuration mode: In the Server Configuration section, select Red Hat Quay handles TLS for SSL/TLS. 
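Before uploading the files in the next step, you can verify that the generated certificate contains the expected hostname and that the certificate and key match. This is an optional openssl check, not part of the official procedure, and it assumes the ssl.cert and ssl.key files from the earlier procedure are in your current directory.
# Optional check: confirm the Subject Alternative Name contains your server hostname
openssl x509 -in ssl.cert -noout -text | grep -A 1 'Subject Alternative Name'
# Optional check: confirm the certificate and private key belong together (the two digests must match)
openssl x509 -noout -modulus -in ssl.cert | openssl md5
openssl rsa -noout -modulus -in ssl.key | openssl md5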
Upload the certificate file and private key file created earlier, ensuring that the Server Hostname matches the value used when the certificates were created. Validate and download the updated configuration. Stop the Quay container and then restart the registry by entering the following command: 6.2.2. Configuring SSL/TLS using the command line interface Use the following procedure to configure SSL/TLS using the CLI. Prerequisites You have created a certificate authority and signed the certificate. Procedure Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: cp ~/ssl.cert ~/ssl.key USDQUAY/config Change into the USDQUAY/config directory by entering the following command: USD cd USDQUAY/config Edit the config.yaml file and specify that you want Red Hat Quay to handle TLS/SSL: config.yaml ... SERVER_HOSTNAME: quay-server.example.com ... PREFERRED_URL_SCHEME: https ... Optional: Append the contents of the rootCA.pem file to the end of the ssl.cert file by entering the following command: USD cat rootCA.pem >> ssl.cert Stop the Quay container by entering the following command: USD sudo podman stop quay Restart the registry by entering the following command: 6.3. Testing the SSL/TLS configuration Your SSL/TLS configuration can be tested using either the command-line interface (CLI) or the Red Hat Quay registry UI. Use one of the following procedures to test your SSL/TLS configuration. 6.3.1. Testing the SSL/TLS configuration using the CLI Use the following procedure to test your SSL/TLS configuration using the CLI. Procedure Enter the following command to attempt to log in to the Red Hat Quay registry with SSL/TLS enabled: USD sudo podman login quay-server.example.com Example output Error: error authenticating creds for "quay-server.example.com": error pinging docker registry quay-server.example.com: Get "https://quay-server.example.com/v2/": x509: certificate signed by unknown authority Because Podman does not trust self-signed certificates, you must use the --tls-verify=false option: USD sudo podman login --tls-verify=false quay-server.example.com Example output Login Succeeded! In a subsequent section, you will configure Podman to trust the root Certificate Authority. 6.3.2. Testing the SSL/TLS configuration using a browser Use the following procedure to test your SSL/TLS configuration using a browser. Procedure Navigate to your Red Hat Quay registry endpoint, for example, https://quay-server.example.com . If configured correctly, the browser warns of the potential risk: Proceed to the log in screen. The browser notifies you that the connection is not secure. For example: In the following section, you will configure Podman to trust the root Certificate Authority. 6.4. Configuring Podman to trust the Certificate Authority Podman uses two paths to locate the Certificate Authority (CA) file: /etc/containers/certs.d/ and /etc/docker/certs.d/ . Use the following procedure to configure Podman to trust the CA. Procedure Copy the root CA file to one of /etc/containers/certs.d/ or /etc/docker/certs.d/ . Use the exact path determined by the server hostname, and name the file ca.crt : USD sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt Verify that you no longer need to use the --tls-verify=false option when logging in to your Red Hat Quay registry: USD sudo podman login quay-server.example.com Example output Login Succeeded! 6.5. 
Configuring the system to trust the certificate authority Use the following procedure to configure your system to trust the certificate authority. Procedure Enter the following command to copy the rootCA.pem file to the consolidated system-wide trust store: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the system-wide trust store configuration: USD sudo update-ca-trust extract Optional. You can use the trust list command to ensure that the Quay server has been configured: USD trust list | grep quay label: quay-server.example.com Now, when you browse to the registry at https://quay-server.example.com , the lock icon shows that the connection is secure: To remove the rootCA.pem file from system-wide trust, delete the file and update the configuration: USD sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem USD sudo update-ca-trust extract USD trust list | grep quay More information can be found in the RHEL 9 documentation in the chapter Using shared system certificates . | [
"openssl genrsa -out rootCA.key 2048",
"openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem",
"Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com",
"openssl genrsa -out ssl.key 2048",
"openssl req -new -key ssl.key -out ssl.csr",
"Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com",
"[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = quay-server.example.com IP.1 = 192.168.1.112",
"openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf",
"sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.10.9 config secret",
"sudo podman rm -f quay sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.10.9",
"cp ~/ssl.cert ~/ssl.key USDQUAY/config",
"cd USDQUAY/config",
"SERVER_HOSTNAME: quay-server.example.com PREFERRED_URL_SCHEME: https",
"cat rootCA.pem >> ssl.cert",
"sudo podman stop quay",
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.10.9",
"sudo podman login quay-server.example.com",
"Error: error authenticating creds for \"quay-server.example.com\": error pinging docker registry quay-server.example.com: Get \"https://quay-server.example.com/v2/\": x509: certificate signed by unknown authority",
"sudo podman login --tls-verify=false quay-server.example.com",
"Login Succeeded!",
"sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt",
"sudo podman login quay-server.example.com",
"Login Succeeded!",
"sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust extract",
"trust list | grep quay label: quay-server.example.com",
"sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem",
"sudo update-ca-trust extract",
"trust list | grep quay"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/proof_of_concept_-_deploying_red_hat_quay/advanced-quay-poc-deployment |
F.2. About Consistency Guarantee | F.2. About Consistency Guarantee Despite the locking of a single owner instead of all owners, Red Hat JBoss Data Grid's consistency guarantee remains intact. Consider the following situation: Key K is hashed to nodes {A,B} and transaction TX1 acquires a lock for K on, for example, node A. If another cache access occurs on node B, or any other node, and TX2 attempts to lock K, this access attempt fails with a timeout because transaction TX1 already holds a lock on K. This lock acquisition attempt always fails because the lock for key K is always deterministically acquired on the same node of the cluster, irrespective of the transaction's origin. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/consistency_guarantee
Chapter 53. EntityTopicOperatorSpec schema reference | Chapter 53. EntityTopicOperatorSpec schema reference Used in: EntityOperatorSpec Full list of EntityTopicOperatorSpec schema properties Configures the Topic Operator. 53.1. Logging The Topic Operator has a configurable logger: rootLogger.level The Topic Operator uses the Apache log4j2 logger implementation. Use the logging property in the entityOperator.topicOperator field of the Kafka resource Kafka resource to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 logging: type: inline loggers: rootLogger.level: INFO logger.top.name: io.strimzi.operator.topic 1 logger.top.level: DEBUG 2 logger.toc.name: io.strimzi.operator.topic.TopicOperator 3 logger.toc.level: TRACE 4 logger.clients.level: DEBUG 5 # ... 1 Creates a logger for the topic package. 2 Sets the logging level for the topic package. 3 Creates a logger for the TopicOperator class. 4 Sets the logging level for the TopicOperator class. 5 Changes the logging level for the default clients logger. The clients logger is part of the logging configuration provided with Streams for Apache Kafka. By default, it is set to INFO . Note When investigating an issue with the operator, it's usually sufficient to change the rootLogger to DEBUG to get more detailed logs. However, keep in mind that setting the log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: topic-operator-log4j2.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 53.2. EntityTopicOperatorSpec schema properties Property Property type Description watchedNamespace string The namespace the Topic Operator should watch. image string The image to use for the Topic Operator. 
reconciliationIntervalSeconds integer The reconciliationIntervalSeconds property has been deprecated, and should now be configured using .spec.entityOperator.topicOperator.reconciliationIntervalMs . Interval between periodic reconciliations in seconds. Ignored if reconciliationIntervalMs is set. reconciliationIntervalMs integer Interval between periodic reconciliations in milliseconds. zookeeperSessionTimeoutSeconds integer The zookeeperSessionTimeoutSeconds property has been deprecated. This property is not used anymore in Streams for Apache Kafka 2.8 and it is ignored. Timeout for the ZooKeeper session. startupProbe Probe Pod startup checking. livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. resources ResourceRequirements CPU and memory resources to reserve. topicMetadataMaxAttempts integer The topicMetadataMaxAttempts property has been deprecated. This property is not used anymore in Streams for Apache Kafka 2.8 and it is ignored. The number of attempts at getting topic metadata. logging InlineLogging , ExternalLogging Logging configuration. jvmOptions JvmOptions JVM Options for pods. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 logging: type: inline loggers: rootLogger.level: INFO logger.top.name: io.strimzi.operator.topic 1 logger.top.level: DEBUG 2 logger.toc.name: io.strimzi.operator.topic.TopicOperator 3 logger.toc.level: TRACE 4 logger.clients.level: DEBUG 5 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: topic-operator-log4j2.properties #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-entitytopicoperatorspec-reference |
Part I. Notable Bug Fixes | Part I. Notable Bug Fixes This part describes bugs fixed in Red Hat Enterprise Linux 6.9 that have a significant impact on users. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/part-red_hat_enterprise_linux-6.9_technical_notes-notable_bug_fixes |
Chapter 6. Support for Red Hat Ansible Automation Platform on Microsoft Azure | Chapter 6. Support for Red Hat Ansible Automation Platform on Microsoft Azure Ansible Automation Platform on Microsoft Azure is a managed application, supported and maintained by Red Hat. Due to the architecture of the application and the deployment strategy in Azure, there are some situations where customizing and changing some aspects of the configuration could lead to a change in the responsibilities of some components. Azure Virtual Appliance Routing with Ansible Automation Platform on Microsoft Azure As an Ansible Automation Platform on Microsoft Azure user, you can configure the Ansible Automation Platform network to peer your own network. By doing so, you can grant access from the Ansible Automation Platform instance to all the assets associated with your own network that you want to manage. Also, you can route all the Ansible Automation Platform traffic to your own Virtual Network Appliances to control, audit, or block traffic from the Ansible Automation Platform instance to the internet. To do this, you must allowlist the URL for Ansible Automation Platform to work properly. For more information about Azure Virtual Appliance Routing, see the Azure Virtual Appliance Routing with Ansible Automation Platform on Azure article on the Red Hat customer portal. Private DNS Zones Ansible Automation Platform on Microsoft Azure uses Azure's managed DNS services when deployed. To use private DNS records that cannot be resolved publicly, you can either use Azure Private DNS zones that are peered to the managed application VNET, or you can make a submit request to Red Hat to submit DNS zones that must be forwarded to a customer-managed private DNS server. A limitation of Private DNS zones is that only one instance of a given zone may be linked to a Virtual Network. Attempting to link zones that match the names of Private DNS zones in the managed resource group causes conflicts. Microsoft recommends consolidating DNS records into a single zone to work around this limitation. You can replicate the records from the zones in the managed resource group into your own instance of the Private DNS zone. You can then unlink Private DNS zones in the managed resource group from the Virtual Network and replace it with your own instance of the Private DNS zone. Failure to properly maintain the records in the Private DNS zone can prevent the managed application from operating. The Azure Kubernetes Service (AKS) Private DNS zone cannot be customer managed and still allow Red Hat to update or upgrade the managed AKS that is a part of this offering. To allow Red Hat to upgrade the customer AKS to the latest version during the maintenance windows, do not unlink the <GUID>.privatelink.<region>.azmk8s.io Private DNS zone. For more information on this limitation, see "CreateOrUpdateVirtualNetworkLinkFailed" error when updating or upgrading an AKS cluster in the Microsoft Azure documentation. To work around this limitation, Red Hat allows you to manage A and CNAME records in the Private DNS zones in the managed resource group. Any records that you put in the Private DNS zones in the managed resource group are visible to the Red Hat SREs. If you decide to use the Private DNS zones in the managed resource group, you are responsible for updating them with the records you need. Customer supplied records are not backed up as a part of the disaster recovery process. 
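As an illustration only, the following Azure CLI sketch shows what adding an A record to a Private DNS zone can look like. The resource group, zone name, record name, and IP address are placeholders rather than values supplied by the managed application; substitute your own values and verify them against the zones in your deployment.
# Example only: add an A record to a Private DNS zone with the Azure CLI (all values are placeholders)
az network private-dns record-set a add-record \
  --resource-group <managed_resource_group> \
  --zone-name <private_zone_name> \
  --record-set-name <host_name> \
  --ipv4-address <ip_address>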
Removing any Azure supplied records can cause network connectivity issues with Ansible Automation Platform on Microsoft Azure. For more information about working with Private DNS zones, see Private DNS with Red Hat Ansible Automation Platform on Microsoft Azure on the Red Hat customer portal. Microsoft Azure Policy In some situations, using Azure Policy to enforce, for example, tagging rules and conventions, can adversely affect the Resource Group where the components of Ansible Automation Platform on Microsoft Azure reside. The enforcement of Azure Policy could prevent changes, impact operations, or block deployment of new components in the Resource Group. These situations are identified by Red Hat during maintenance or daily operations. You must exclude resources associated with the managed application from the enforcement of Azure Policy, for example by using exceptions. For more information about working with Azure Policy, see the Azure Policy and Ansible on Azure article on the Red Hat customer portal. 6.1. Limited support status Customers may implement Azure infrastructure changes or policies that negatively affect the functionality of the service and Red Hat's ability to monitor and service it. In such scenarios, the deployment can transition into a limited support status. A deployment may move to a limited support status for many reasons, including the following scenarios: Inactive Ansible subscriptions Red Hat issues subscription entitlements through the application deployment process. The entitlement expires one year after it is issued. Customers are emailed prior to expiration to renew the entitlement. This process issues a new entitlement for the next year, which can be imported into Ansible Automation Platform. Failure to renew When an entitlement expires, customers can continue to use Ansible Automation Platform on Microsoft Azure. However, Red Hat support requires a valid entitlement before assisting with support issues. Changes to customized policies Ansible Automation Platform on Microsoft Azure runs on infrastructure within a customer's Azure tenant. This means that customer Azure policies can affect the deployment and function of the platform. Given the flexibility of Azure policy definitions, it is impossible to list all policies that can cause infrastructure or operational issues with the platform. When those events happen, the Red Hat SRE team can help identify the policy causing the issue and suggest remediation. That remediation may require customer policy changes in order for the managed application to function correctly. Failure to remediate When a policy conflict arises, the Red Hat team reaches out to the customer to remediate. Some of these changes are time sensitive and can impact the platform's operational function. The Red Hat Site Reliability Engineering (SRE) team will pause maintenance and upgrades until the customer resolves the policy issues. Changes to firewall and network configurations When configuring egress routes from Ansible Automation Platform on Microsoft Azure, customers are required to route traffic to a set of domains on the public internet used for the deployment, monitoring, and maintenance of the platform. Failure to set up The Ansible Automation Platform on Microsoft Azure deployment will be in a state where it cannot be monitored or serviced by the Red Hat SRE team. Changes to quota limits A prerequisite for Ansible Automation Platform on Microsoft Azure is adequate capacity to deploy the managed application in the selected region.
Microsoft imposes restrictions on resource limits, such as CPU quotas, in each region. Customers are expected to set up their infrastructure with the specifications listed in the Product Documentation. Failure to remediate As the Red Hat SRE team performs regular maintenance and improvements, the quota limitations in the region could prevent the team from managing the deployment consistently and can eventually result in an inability to manage the offering. Incorrect CIDR ranges During initial deployment, customers can configure the networking address range (CIDR block) for the VNet that Ansible Automation Platform uses. After the network has been created, you cannot change the CIDR blocks. To make a change, you must redeploy the managed Ansible Automation Platform application with the configurations specified in the documentation. Failure to remediate While the Azure user experience allows customers to set CIDR blocks of their choice, using CIDR ranges smaller than the guidelines can put your deployment into a limited support state where the platform is unable to scale for both automation and management workloads. | null | https://docs.redhat.com/en/documentation/ansible_on_clouds/2.x/html/red_hat_ansible_automation_platform_on_microsoft_azure_guide/assembly-azure-support
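As a quick way to check the regional capacity described under Changes to quota limits, you can compare your current vCPU usage with the subscription quota by using the Azure CLI. This is only a suggested check; the region name is a placeholder and the exact quota values you need are listed in the Product Documentation.

    # Show current usage against the subscription limits for the target region
    az vm list-usage --location <region> --output table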
5.4.16.2. Converting a Linear Device to a RAID Device | 5.4.16.2. Converting a Linear Device to a RAID Device You can convert an existing linear logical volume to a RAID device by using the --type argument of the lvconvert command. The following command converts the linear logical volume my_lv in volume group my_vg to a 2-way RAID1 array. Since RAID logical volumes are composed of metadata and data subvolume pairs, when you convert a linear device to a RAID1 array, a new metadata subvolume is created and associated with the original logical volume on (one of) the same physical volumes that the linear volume is on. The additional images are added in metadata/data subvolume pairs. For example, if the original device is as follows: After conversion to a 2-way RAID1 array, the device contains the following data and metadata subvolume pairs: If the metadata image that pairs with the original logical volume cannot be placed on the same physical volume, the lvconvert command fails. An additional allocation example follows the command listing below. | [
"lvconvert --type raid1 -m 1 my_vg/my_lv",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(0)",
"lvconvert --type raid1 -m 1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/linear-to-raid |
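If you want to control where the new RAID image and its metadata subvolume are allocated, you can list one or more physical volumes at the end of the lvconvert command. The following is a sketch that assumes the volume group my_vg contains a free physical volume /dev/sdf1 with enough space for the new subvolumes; adjust the names for your system.

    # Restrict allocation of the new image to /dev/sdf1, then verify the layout
    lvconvert --type raid1 -m 1 my_vg/my_lv /dev/sdf1
    lvs -a -o name,copy_percent,devices my_vg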
Chapter 11. Troubleshooting | Chapter 11. Troubleshooting The following chapter describes what happens when SELinux denies access; the top three causes of problems; where to find information about correct labeling; analyzing SELinux denials; and creating custom policy modules with audit2allow . 11.1. What Happens when Access is Denied SELinux decisions, such as allowing or disallowing access, are cached. This cache is known as the Access Vector Cache (AVC). Denial messages are logged when SELinux denies access. These denials are also known as "AVC denials", and are logged to a different location, depending on which daemons are running: Daemon Log Location auditd on /var/log/audit/audit.log auditd off; rsyslogd on /var/log/messages setroubleshootd, rsyslogd, and auditd on /var/log/audit/audit.log . Easier-to-read denial messages are also sent to /var/log/messages If you are running the X Window System, have the setroubleshoot and setroubleshoot-server packages installed, and the setroubleshootd and auditd daemons are running, a warning is displayed when access is denied by SELinux: Clicking on Show presents a detailed analysis of why SELinux denied access, and a possible solution for allowing access. If you are not running the X Window System, it is less obvious when access is denied by SELinux. For example, users browsing your website may receive an error similar to the following: For these situations, if DAC rules (standard Linux permissions) allow access, check /var/log/messages and /var/log/audit/audit.log for "SELinux is preventing" and "denied" errors respectively. This can be done by running the following commands as the root user; an additional example using ausearch and sealert follows the command listing: | [
"Forbidden You don't have permission to access file name on this server",
"~]# grep \"SELinux is preventing\" /var/log/messages",
"~]# grep \"denied\" /var/log/audit/audit.log"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-Security-Enhanced_Linux-Troubleshooting |
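As an alternative to grep, the audit and setroubleshoot tools can search for and analyze denials directly. The following commands assume the audit and setroubleshoot-server packages are installed; they are a suggested starting point rather than part of the procedure above.

    ~]# ausearch -m AVC,USER_AVC -ts recent
    ~]# sealert -a /var/log/audit/audit.log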
1.5. Configuring netconsole | 1.5. Configuring netconsole If disk logging fails or using the serial console is not possible, you might need to use kernel debugging. The netconsole kernel module enables you to log kernel messages to another computer over the network. To be able to use netconsole , you need to have an rsyslog server that is properly configured on your network. Procedure 1.1. Configuring an rsyslog server for netconsole Configure the rsyslogd daemon to listen on the 514/udp port and receive messages from the network by uncommenting the following lines in the MODULES section of the /etc/rsyslog.conf file: Restart the rsyslogd service for the changes to take effect: Verify that rsyslogd is listening on the 514/udp port: The 0.0.0.0:syslog and [::]:syslog values in the netstat -l output mean that rsyslogd is listening on the default netconsole port defined in the /etc/services file: Netconsole is configured using the /etc/sysconfig/netconsole file, which is a part of the initscripts package. This package is installed by default and it also provides the netconsole service. If you want to configure a sending machine, follow this procedure: Procedure 1.2. Configuring a Sending Machine Set the value of the SYSLOGADDR variable in the /etc/sysconfig/netconsole file to match the IP address of the syslogd server. For example: Restart the netconsole service for the changes to take effect: Enable netconsole.service to run after rebooting the system: View the netconsole messages from the client in the /var/log/messages file (default) or in the file specified in rsyslog.conf . Note By default, rsyslogd and netconsole.service use port 514. To use a different port, change the following line in /etc/rsyslog.conf to the required port number: On the sending machine, uncomment and edit the following line in the /etc/sysconfig/netconsole file: For more information about netconsole configuration and troubleshooting tips, see Netconsole Kernel Documentation . A firewall example for the receiving rsyslog server follows the command listing below. | [
"USDModLoad imudp USDUDPServerRun 514",
"]# systemctl restart rsyslog",
"]# netstat -l | grep syslog udp 0 0 0.0.0.0:syslog 0.0.0.0:* udp6 0 0 [::]:syslog [::]:*",
"]USD cat /etc/services | grep syslog syslog 514/udp syslog-conn 601/tcp # Reliable Syslog Service syslog-conn 601/udp # Reliable Syslog Service syslog-tls 6514/tcp # Syslog over TLS syslog-tls 6514/udp # Syslog over TLS syslog-tls 6514/dccp # Syslog over TLS",
"SYSLOGADDR= 192.168.0.1",
"]# systemctl restart netconsole.service",
"]# systemctl enable netconsole.service",
"]# cat /var/log/messages",
"USDUDPServerRun <PORT>",
"SYSLOGPORT=514"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_netconsole |
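If the rsyslog server runs firewalld, the 514/udp port must also be reachable from the sending machine. The following commands are a sketch that assumes the default firewalld zone is in use; adapt them to your firewall configuration.

    ]# firewall-cmd --permanent --add-port=514/udp
    ]# firewall-cmd --reload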
B.2. Storing Certificates in NSS Databases | B.2. Storing Certificates in NSS Databases By default, certmonger uses plaintext files to store the key and the certificate, but these keys and certificates can also be stored in NSS databases. This is done using the -d option to set the security database location and -n to give the certificate nickname which is used for the certificate in the database. These options are used instead of the PEM files given in the -f and -k options. For example: | [
"ipa-getcert request -d /export/alias -n ServerCert"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/working_with_certmonger-using_certmonger_with_nss |
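If the NSS database does not exist yet, you can create it with certutil before running the request, and then confirm that certmonger is tracking the certificate. These commands are illustrative and assume the /export/alias directory from the example above.

    certutil -N -d /export/alias
    ipa-getcert list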
Chapter 4. ControlPlaneMachineSet [machine.openshift.io/v1] | Chapter 4. ControlPlaneMachineSet [machine.openshift.io/v1] Description ControlPlaneMachineSet ensures that a specified number of control plane machine replicas are running at any given time. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ControlPlaneMachineSet represents the configuration of the ControlPlaneMachineSet. status object ControlPlaneMachineSetStatus represents the status of the ControlPlaneMachineSet CRD. 4.1.1. .spec Description ControlPlaneMachineSet represents the configuration of the ControlPlaneMachineSet. Type object Required replicas selector template Property Type Description replicas integer Replicas defines how many Control Plane Machines should be created by this ControlPlaneMachineSet. This field is immutable and cannot be changed after cluster installation. The ControlPlaneMachineSet only operates with 3 or 5 node control planes, 3 and 5 are the only valid values for this field. selector object Label selector for Machines. Existing Machines selected by this selector will be the ones affected by this ControlPlaneMachineSet. It must match the template's labels. This field is considered immutable after creation of the resource. state string State defines whether the ControlPlaneMachineSet is Active or Inactive. When Inactive, the ControlPlaneMachineSet will not take any action on the state of the Machines within the cluster. When Active, the ControlPlaneMachineSet will reconcile the Machines and will update the Machines as necessary. Once Active, a ControlPlaneMachineSet cannot be made Inactive. To prevent further action please remove the ControlPlaneMachineSet. strategy object Strategy defines how the ControlPlaneMachineSet will update Machines when it detects a change to the ProviderSpec. template object Template describes the Control Plane Machines that will be created by this ControlPlaneMachineSet. 4.1.2. .spec.selector Description Label selector for Machines. Existing Machines selected by this selector will be the ones affected by this ControlPlaneMachineSet. It must match the template's labels. This field is considered immutable after creation of the resource. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.3. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.4. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.1.5. .spec.strategy Description Strategy defines how the ControlPlaneMachineSet will update Machines when it detects a change to the ProviderSpec. Type object Property Type Description type string Type defines the type of update strategy that should be used when updating Machines owned by the ControlPlaneMachineSet. Valid values are "RollingUpdate" and "OnDelete". The current default value is "RollingUpdate". 4.1.6. .spec.template Description Template describes the Control Plane Machines that will be created by this ControlPlaneMachineSet. Type object Required machineType Property Type Description machineType string MachineType determines the type of Machines that should be managed by the ControlPlaneMachineSet. Currently, the only valid value is machines_v1beta1_machine_openshift_io. machines_v1beta1_machine_openshift_io object OpenShiftMachineV1Beta1Machine defines the template for creating Machines from the v1beta1.machine.openshift.io API group. 4.1.7. .spec.template.machines_v1beta1_machine_openshift_io Description OpenShiftMachineV1Beta1Machine defines the template for creating Machines from the v1beta1.machine.openshift.io API group. Type object Required metadata spec Property Type Description failureDomains object FailureDomains is the list of failure domains (sometimes called availability zones) in which the ControlPlaneMachineSet should balance the Control Plane Machines. This will be merged into the ProviderSpec given in the template. This field is optional on platforms that do not require placement information. metadata object ObjectMeta is the standard object metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Labels are required to match the ControlPlaneMachineSet selector. spec object Spec contains the desired configuration of the Control Plane Machines. The ProviderSpec within contains platform specific details for creating the Control Plane Machines. The ProviderSpec should be complete apart from the platform specific failure domain field. This will be overridden when the Machines are created based on the FailureDomains field. 4.1.8. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains Description FailureDomains is the list of failure domains (sometimes called availability zones) in which the ControlPlaneMachineSet should balance the Control Plane Machines. This will be merged into the ProviderSpec given in the template.
This field is optional on platforms that do not require placement information. Type object Required platform Property Type Description aws array AWS configures failure domain information for the AWS platform. aws[] object AWSFailureDomain configures failure domain information for the AWS platform. azure array Azure configures failure domain information for the Azure platform. azure[] object AzureFailureDomain configures failure domain information for the Azure platform. gcp array GCP configures failure domain information for the GCP platform. gcp[] object GCPFailureDomain configures failure domain information for the GCP platform platform string Platform identifies the platform for which the FailureDomain represents. Currently supported values are AWS, Azure, and GCP. 4.1.9. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws Description AWS configures failure domain information for the AWS platform. Type array 4.1.10. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[] Description AWSFailureDomain configures failure domain information for the AWS platform. Type object Property Type Description placement object Placement configures the placement information for this instance. subnet object Subnet is a reference to the subnet to use for this instance. 4.1.11. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].placement Description Placement configures the placement information for this instance. Type object Required availabilityZone Property Type Description availabilityZone string AvailabilityZone is the availability zone of the instance. 4.1.12. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].subnet Description Subnet is a reference to the subnet to use for this instance. Type object Required type Property Type Description arn string ARN of resource. filters array Filters is a set of filters used to identify a resource. filters[] object AWSResourceFilter is a filter used to identify an AWS resource id string ID of resource. type string Type determines how the reference will fetch the AWS resource. 4.1.13. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].subnet.filters Description Filters is a set of filters used to identify a resource. Type array 4.1.14. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].subnet.filters[] Description AWSResourceFilter is a filter used to identify an AWS resource Type object Required name Property Type Description name string Name of the filter. Filter names are case-sensitive. values array (string) Values includes one or more filter values. Filter values are case-sensitive. 4.1.15. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.azure Description Azure configures failure domain information for the Azure platform. Type array 4.1.16. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.azure[] Description AzureFailureDomain configures failure domain information for the Azure platform. Type object Required zone Property Type Description zone string Availability Zone for the virtual machine. If nil, the virtual machine should be deployed to no zone. 4.1.17. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.gcp Description GCP configures failure domain information for the GCP platform. Type array 4.1.18. 
.spec.template.machines_v1beta1_machine_openshift_io.failureDomains.gcp[] Description GCPFailureDomain configures failure domain information for the GCP platform Type object Required zone Property Type Description zone string Zone is the zone in which the GCP machine provider will create the VM. 4.1.19. .spec.template.machines_v1beta1_machine_openshift_io.metadata Description ObjectMeta is the standard object metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Labels are required to match the ControlPlaneMachineSet selector. Type object Required labels Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels . This field must contain both the 'machine.openshift.io/cluster-api-machine-role' and 'machine.openshift.io/cluster-api-machine-type' labels, both with a value of 'master'. It must also contain a label with the key 'machine.openshift.io/cluster-api-cluster'. 4.1.20. .spec.template.machines_v1beta1_machine_openshift_io.spec Description Spec contains the desired configuration of the Control Plane Machines. The ProviderSpec within contains platform specific details for creating the Control Plane Machines. The ProviderSpec should be complete apart from the platform specific failure domain field. This will be overridden when the Machines are created based on the FailureDomains field. Type object Property Type Description lifecycleHooks object LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. metadata object ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. providerID string ProviderID is the identification ID of the machine provided by the provider. This field must match the provider ID as seen on the node object corresponding to this machine. This field is required by higher level consumers of cluster-api. Example use case is cluster autoscaler with cluster-api as provider. Clean-up logic in the autoscaler compares machines to nodes to find out machines at provider which could not get registered as Kubernetes nodes. With cluster-api as a generic out-of-tree provider for autoscaler, this field is required by autoscaler to be able to have a provider view of the list of machines. Another list of nodes is queried from the k8s apiserver and then a comparison is done to find out unregistered machines and are marked for delete. This field will be set by the actuators and consumed by higher level entities like autoscaler that will be interfacing with cluster-api as generic provider. providerSpec object ProviderSpec details Provider-specific configuration to use during node creation. taints array The list of the taints to be applied to the corresponding Node in additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled e.g.
if you ask the machine controller to apply a taint and then manually remove the taint the machine controller will put it back) but not have the machine controller remove any taints taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. 4.1.21. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks Description LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. Type object Property Type Description preDrain array PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. preDrain[] object LifecycleHook represents a single instance of a lifecycle hook preTerminate array PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks are actioned after the Machine has been drained. preTerminate[] object LifecycleHook represents a single instance of a lifecycle hook 4.1.22. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preDrain Description PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. Type array 4.1.23. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preDrain[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase or it may be namespaced, eg. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 4.1.24. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preTerminate Description PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks are actioned after the Machine has been drained. Type array 4.1.25. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preTerminate[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase or it may be namespaced, eg. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 4.1.26. .spec.template.machines_v1beta1_machine_openshift_io.spec.metadata Description ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects.
More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. 4.1.27. .spec.template.machines_v1beta1_machine_openshift_io.spec.metadata.ownerReferences Description List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. Type array 4.1.28. .spec.template.machines_v1beta1_machine_openshift_io.spec.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. 
blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid string UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids 4.1.29. .spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec Description ProviderSpec details Provider-specific configuration to use during node creation. Type object Property Type Description value `` Value is an inlined, serialized representation of the resource configuration. It is recommended that providers maintain their own versioned API types that should be serialized/deserialized from this field, akin to component config. 4.1.30. .spec.template.machines_v1beta1_machine_openshift_io.spec.taints Description The list of the taints to be applied to the corresponding Node in additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled e.g. if you ask the machine controller to apply a taint and then manually remove the taint the machine controller will put it back) but not have the machine controller remove any taints Type array 4.1.31. .spec.template.machines_v1beta1_machine_openshift_io.spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 4.1.32. .status Description ControlPlaneMachineSetStatus represents the status of the ControlPlaneMachineSet CRD. Type object Property Type Description conditions array Conditions represents the observations of the ControlPlaneMachineSet's current state. Known .status.conditions.type are: Available, Degraded and Progressing. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } observedGeneration integer ObservedGeneration is the most recent generation observed for this ControlPlaneMachineSet. It corresponds to the ControlPlaneMachineSets's generation, which is updated on mutation by the API Server. readyReplicas integer ReadyReplicas is the number of Control Plane Machines created by the ControlPlaneMachineSet controller which are ready. Note that this value may be higher than the desired number of replicas while rolling updates are in-progress. replicas integer Replicas is the number of Control Plane Machines created by the ControlPlaneMachineSet controller. Note that during update operations this value may differ from the desired replica count. unavailableReplicas integer UnavailableReplicas is the number of Control Plane Machines that are still required before the ControlPlaneMachineSet reaches the desired available capacity. When this value is non-zero, the number of ReadyReplicas is less than the desired Replicas. updatedReplicas integer UpdatedReplicas is the number of non-terminated Control Plane Machines created by the ControlPlaneMachineSet controller that have the desired provider spec and are ready. This value is set to 0 when a change is detected to the desired spec. When the update strategy is RollingUpdate, this will also coincide with starting the process of updating the Machines. When the update strategy is OnDelete, this value will remain at 0 until a user deletes an existing replica and its replacement has become ready. 4.1.33. .status.conditions Description Conditions represents the observations of the ControlPlaneMachineSet's current state. Known .status.conditions.type are: Available, Degraded and Progressing. Type array 4.1.34. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. 
Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 4.2. API endpoints The following API endpoints are available: /apis/machine.openshift.io/v1/controlplanemachinesets GET : list objects of kind ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets DELETE : delete collection of ControlPlaneMachineSet GET : list objects of kind ControlPlaneMachineSet POST : create a ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name} DELETE : delete a ControlPlaneMachineSet GET : read the specified ControlPlaneMachineSet PATCH : partially update the specified ControlPlaneMachineSet PUT : replace the specified ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/scale GET : read scale of the specified ControlPlaneMachineSet PATCH : partially update scale of the specified ControlPlaneMachineSet PUT : replace scale of the specified ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/status GET : read status of the specified ControlPlaneMachineSet PATCH : partially update status of the specified ControlPlaneMachineSet PUT : replace status of the specified ControlPlaneMachineSet 4.2.1. /apis/machine.openshift.io/v1/controlplanemachinesets Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind ControlPlaneMachineSet Table 4.2. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSetList schema 401 - Unauthorized Empty 4.2.2. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets Table 4.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ControlPlaneMachineSet Table 4.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ControlPlaneMachineSet Table 4.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.8. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSetList schema 401 - Unauthorized Empty HTTP method POST Description create a ControlPlaneMachineSet Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.10. Body parameters Parameter Type Description body ControlPlaneMachineSet schema Table 4.11. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSet schema 201 - Created ControlPlaneMachineSet schema 202 - Accepted ControlPlaneMachineSet schema 401 - Unauthorized Empty 4.2.3. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name} Table 4.12. 
Global path parameters Parameter Type Description name string name of the ControlPlaneMachineSet namespace string object name and auth scope, such as for teams and projects Table 4.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ControlPlaneMachineSet Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.15. Body parameters Parameter Type Description body DeleteOptions schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ControlPlaneMachineSet Table 4.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.18. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ControlPlaneMachineSet Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body Patch schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ControlPlaneMachineSet Table 4.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.23. Body parameters Parameter Type Description body ControlPlaneMachineSet schema Table 4.24. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSet schema 201 - Created ControlPlaneMachineSet schema 401 - Unauthorized Empty 4.2.4. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/scale Table 4.25. Global path parameters Parameter Type Description name string name of the ControlPlaneMachineSet namespace string object name and auth scope, such as for teams and projects Table 4.26. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified ControlPlaneMachineSet Table 4.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.28. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ControlPlaneMachineSet Table 4.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.30. Body parameters Parameter Type Description body Patch schema Table 4.31. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ControlPlaneMachineSet Table 4.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.33. Body parameters Parameter Type Description body Scale schema Table 4.34. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 4.2.5. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/status Table 4.35. Global path parameters Parameter Type Description name string name of the ControlPlaneMachineSet namespace string object name and auth scope, such as for teams and projects Table 4.36. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ControlPlaneMachineSet Table 4.37. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.38. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ControlPlaneMachineSet Table 4.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.40. Body parameters Parameter Type Description body Patch schema Table 4.41. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ControlPlaneMachineSet Table 4.42. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.43. Body parameters Parameter Type Description body ControlPlaneMachineSet schema Table 4.44. HTTP responses HTTP code Reponse body 200 - OK ControlPlaneMachineSet schema 201 - Created ControlPlaneMachineSet schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_apis/controlplanemachineset-machine-openshift-io-v1 |
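The ControlPlaneMachineSet endpoints listed above can be exercised with any HTTP client. The following is a minimal sketch, not taken from the reference itself: it assumes a ControlPlaneMachineSet named cluster in the openshift-machine-api namespace and a token obtained with oc; substitute your own object name, namespace, and credentials.

# Names below (cluster, openshift-machine-api) are assumptions; -k skips TLS verification for illustration only.
TOKEN=$(oc whoami -t)
APISERVER=$(oc whoami --show-server)

# Read the scale subresource of the ControlPlaneMachineSet.
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/machine.openshift.io/v1/namespaces/openshift-machine-api/controlplanemachinesets/cluster/scale"

# Partially update (PATCH) the object, with dryRun=All so nothing is persisted.
curl -k -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"metadata":{"labels":{"example":"true"}}}' \
  "${APISERVER}/apis/machine.openshift.io/v1/namespaces/openshift-machine-api/controlplanemachinesets/cluster?dryRun=All"

Because dryRun=All is set on the PATCH, the server validates and processes the request through all stages but does not store the change, which makes it a safe way to try out the documented query parameters.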
Chapter 4. Set Up Expiration | Chapter 4. Set Up Expiration 4.1. About Expiration Red Hat JBoss Data Grid uses expiration to attach one or both of the following values to an entry: A lifespan value. A maximum idle time value. Expiration can be specified on a per-entry or per-cache basis, and the per-entry configuration overrides per-cache configurations. If expiration is configured at the cache level, then the expiration defaults apply to all entries which do not explicitly specify a lifespan or maxIdle value. If expiration is not configured at the cache level, cache entries are created immortal (i.e. they will never expire) by default. Any entries that have lifespan or maxIdle defined are mortal, as they will eventually be removed from the cache once one of these conditions is met. Expired entries, unlike evicted entries, are removed globally, which removes them from memory, cache stores, and the cluster. Expiration automates the removal from memory of entries that have not been used for a specified period of time. Expiration and eviction are different because: expiration removes entries based on the period they have been in memory. Expiration only removes entries when the lifespan period concludes or when an entry has been idle longer than the specified idle time. eviction removes entries based on how recently (and often) they are used. Eviction only removes entries when too many entries are present in memory. If a cache store has been configured, evicted entries are persisted in the cache store. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-Set_Up_Expiration
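As a minimal sketch of the per-entry expiration described in the chapter above, the Java Cache API accepts lifespan and maximum idle values directly on put(). This is illustrative only: the cache name, keys, and time values are assumptions, and it assumes the JBoss Data Grid (Infinispan) libraries are on the classpath.

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import java.util.concurrent.TimeUnit;

public class ExpirationExample {
    public static void main(String[] args) {
        // Default cache manager; a real deployment would load a configuration file.
        EmbeddedCacheManager manager = new DefaultCacheManager();
        Cache<String, String> cache = manager.getCache("exampleCache"); // cache name is an assumption

        // Mortal entry: lifespan of 60 seconds and a maximum idle time of 30 seconds.
        cache.put("key1", "value1", 60, TimeUnit.SECONDS, 30, TimeUnit.SECONDS);

        // Immortal entry: no per-entry lifespan or maxIdle, so it never expires
        // unless expiration defaults are configured at the cache level.
        cache.put("key2", "value2");

        manager.stop();
    }
}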
Updating | Updating Red Hat Enterprise Linux AI 1.2 Upgrading your RHEL AI system and models Red Hat RHEL AI Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/updating/index |
7.222. usbredir | 7.222. usbredir 7.222.1. RHBA-2015:1381 - usbredir bug fix update Updated usbredir packages that fix one bug are now available for Red Hat Enterprise Linux 6. The usbredir packages provide a network protocol for sending USB device traffic over a network connection and a number of libraries to help implement support for this protocol. Bug Fix BZ# 1085318 Previously, USB redirection over plain Transmission Control Protocol (TCP) sockets with the usbredir packages installed did not work. The USB was not properly redirected in this situation, even though USB redirection over Spice channels worked as expected. This update fixes a bug in the usbredir protocol parser that was causing this problem. As a result, USB redirection over plain TCP sockets now works as expected. Users of usbredir are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-usbredir |
Updating OpenShift Data Foundation | Updating OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.16 Instructions for cluster and storage administrators regarding upgrading Red Hat Storage Documentation Team Abstract This document explains how to update versions of Red Hat OpenShift Data Foundation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/updating_openshift_data_foundation/index |
Chapter 3. Getting support | Chapter 3. Getting support Windows Container Support for Red Hat OpenShift is provided and available as an optional, installable component. Windows Container Support for Red Hat OpenShift is not part of the OpenShift Container Platform subscription. It requires an additional Red Hat subscription and is supported according to the Scope of coverage and Service level agreements . You must have this separate subscription to receive support for Windows Container Support for Red Hat OpenShift. Without this additional Red Hat subscription, deploying Windows container workloads in production clusters is not supported. You can request support through the Red Hat Customer Portal . For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy document for Red Hat OpenShift support for Windows Containers . If you do not have this additional Red Hat subscription, you can use the Community Windows Machine Config Operator, a distribution that lacks official support. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/windows_container_support_for_openshift/windows-containers-support |
RHACS Cloud Service | RHACS Cloud Service Red Hat Advanced Cluster Security for Kubernetes 4.5 About the RHACS Cloud Service Red Hat OpenShift Documentation Team | [
"/sys/kernel/btf/vmlinux /boot/vmlinux-<kernel-version> /lib/modules/<kernel-version>/vmlinux-<kernel-version> /lib/modules/<kernel-version>/build/vmlinux /usr/lib/modules/<kernel-version>/kernel/vmlinux /usr/lib/debug/boot/vmlinux-<kernel-version> /usr/lib/debug/boot/vmlinux-<kernel-version>.debug /usr/lib/debug/lib/modules/<kernel-version>/vmlinux",
"export ROX_API_TOKEN=<api_token>",
"export ROX_CENTRAL_ADDRESS=<address>:<port_number>",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output <cluster_init_bundle_name> cluster_init_bundle.yaml",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output-secrets <cluster_init_bundle_name> cluster_init_bundle.yaml",
"oc create -f <init_bundle>.yaml \\ 1 -n <stackrox> 2",
"helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/",
"helm search repo -l rhacs/",
"helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <path_to_cluster_init_bundle.yaml> \\ 1 -f <path_to_pull_secret.yaml> \\ 2 --set clusterName=<name_of_the_secured_cluster> --set centralEndpoint=<endpoint_of_central_service> 3 --set imagePullSecrets.username=<your redhat.com username> \\ 4 --set imagePullSecrets.password=<your redhat.com password> 5",
"helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <path_to_cluster_init_bundle.yaml> \\ 1 -f <path_to_pull_secret.yaml> \\ 2 --set clusterName=<name_of_the_secured_cluster> --set centralEndpoint=<endpoint_of_central_service> 3 --set scanner.disable=false 4",
"customize: envVars: ENV_VAR1: \"value1\" ENV_VAR2: \"value2\"",
"helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <name_of_cluster_init_bundle.yaml> -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \\ 1 --set imagePullSecrets.username=<username> \\ 2 --set imagePullSecrets.password=<password> 3",
"helm install ... -f <(echo \"USDINIT_BUNDLE_YAML_SECRET\") 1",
"helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --reuse-values \\ 1 -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe",
"roxctl version",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"oc get pod -n stackrox -w",
"kubectl get pod -n stackrox -w",
"proxy collector customize: envVars: - name: HTTP_PROXY value: http://egress-proxy.stackrox.svc:xxxx 1 - name: HTTPS_PROXY value: http://egress-proxy.stackrox.svc:xxxx 2 - name: NO_PROXY value: .stackrox.svc 3",
"export ROX_API_TOKEN=<api_token>",
"export ROX_CENTRAL_ADDRESS=<address>:<port_number>",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output <cluster_init_bundle_name> cluster_init_bundle.yaml",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output-secrets <cluster_init_bundle_name> cluster_init_bundle.yaml",
"oc create -f <init_bundle>.yaml \\ 1 -n <stackrox> 2",
"kubectl create namespace stackrox 1 kubectl create -f <init_bundle>.yaml \\ 2 -n <stackrox> 3",
"helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/",
"helm search repo -l rhacs/",
"customize: envVars: ENV_VAR1: \"value1\" ENV_VAR2: \"value2\"",
"helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <name_of_cluster_init_bundle.yaml> -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \\ 1 --set imagePullSecrets.username=<username> \\ 2 --set imagePullSecrets.password=<password> 3",
"helm install ... -f <(echo \"USDINIT_BUNDLE_YAML_SECRET\") 1",
"helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --reuse-values \\ 1 -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe",
"roxctl version",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"kubectl get pod -n stackrox -w",
"oc -n rhacs-operator delete subscription rhacs-operator",
"kubectl -n rhacs-operator delete subscription rhacs-operator",
"oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator",
"kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator",
"oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1",
"Conditions: Last Transition Time: 2023-04-19T10:49:57Z Status: False Type: Deployed Last Transition Time: 2023-04-19T10:49:57Z Status: True Type: Initialized Last Transition Time: 2023-04-19T10:59:10Z Message: Deployment.apps \"central\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"50\": must be less than or equal to cpu limit Reason: ReconcileError Status: True Type: Irreconcilable Last Transition Time: 2023-04-19T10:49:57Z Message: No proxy configuration is desired Reason: NoProxyConfig Status: False Type: ProxyConfigFailed Last Transition Time: 2023-04-19T10:49:57Z Message: Deployment.apps \"central\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"50\": must be less than or equal to cpu limit Reason: InstallError Status: True Type: ReleaseFailed",
"-n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1",
"helm repo update",
"helm search repo -l rhacs/",
"helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --version <current-rhacs-version> \\ 1 -f values-private.yaml",
"ROXPATH=USD(which roxctl) && rm -f USDROXPATH 1",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe",
"roxctl version",
"oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.6 1",
"oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.6 1",
"oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.5.6 1",
"oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-slim-rhel8:{rhacs-version}",
"oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.6",
"oc -n stackrox describe pods | grep 'openshift.io/scc\\|^Name:'",
"Name: admission-control-6f4dcc6b4c-2phwd openshift.io/scc: stackrox-admission-control # Name: central-575487bfcb-sjdx8 openshift.io/scc: stackrox-central Name: central-db-7c7885bb-6bgbd openshift.io/scc: stackrox-central-db Name: collector-56nkr openshift.io/scc: stackrox-collector # Name: scanner-68fc55b599-f2wm6 openshift.io/scc: stackrox-scanner Name: scanner-68fc55b599-fztlh # Name: sensor-84545f86b7-xgdwf openshift.io/scc: stackrox-sensor #",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: use-privileged-scc 2 namespace: stackrox 3 rules: 4 - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: collector-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-privileged-scc subjects: 8 - kind: ServiceAccount name: collector namespace: stackrox - - -",
"oc -n stackrox create -f ./update-scs.yaml",
"oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor",
"oc -n stackrox describe pods | grep 'openshift.io/scc\\|^Name:'",
"oc -n stackrox edit deploy/sensor 1",
"oc -n stackrox edit deploy/collector 1",
"oc -n stackrox edit deploy/admission-control 1",
"oc get deploy,ds -n stackrox -o wide 1",
"oc get pod -n stackrox --watch 1",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\"disabled\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\":9091\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"tmp-volume\",\"emptyDir\":{}},{\"name\":\"cache-volume\",\"emptyDir\":{\"sizeLimit\":\"200Mi\"}}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"command\":[\"/scanner\",\"--nodeinventory\",\"--config=\",\"\"],\"env\":[{\"name\":\"ROX_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"ROX_CLAIR_V4_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_COMPLIANCE_OPERATOR_INTEGRATION\",\"value\":\"true\"},{\"name\":\"ROX_CSV_EXPORT\",\"value\":\"false\"},{\"name\":\"ROX_DECLARATIVE_CONFIGURATION\",\"value\":\"false\"},{\"name\":\"ROX_INTEGRATIONS_AS_CONFIG\",\"value\":\"false\"},{\"name\":\"ROX_NETPOL_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_DETECTION_BASELINE_SIMULATION\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_GRAPH_PATTERNFLY\",\"value\":\"true\"},{\"name\":\"ROX_NODE_SCANNING_CACHE_TIME\",\"value\":\"3h36m\"},{\"name\":\"ROX_NODE_SCANNING_INITIAL_BACKOFF\",\"value\":\"30s\"},{\"name\":\"ROX_NODE_SCANNING_MAX_BACKOFF\",\"value\":\"5m\"},{\"name\":\"ROX_PROCESSES_LISTENING_ON_PORT\",\"value\":\"false\"},{\"name\":\"ROX_QUAY_ROBOT_ACCOUNTS\",\"value\":\"true\"},{\"name\":\"ROX_ROXCTL_NETPOL_GENERATE\",\"value\":\"true\"},{\"name\":\"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS\",\"value\":\"false\"},{\"name\":\"ROX_SYSLOG_EXTRA_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_SYSTEM_HEALTH_PF\",\"value\":\"false\"},{\"name\":\"ROX_VULN_MGMT_WORKLOAD_CVES\",\"value\":\"false\"}],\"image\":\"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.5.6\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"node-inventory\",\"ports\":[{\"containerPort\":8444,\"name\":\"grpc\",\"protocol\":\"TCP\"}],\"volumeMounts\":[{\"mountPath\":\"/host\",\"name\":\"host-root-ro\",\"readOnly\":true},{\"mountPath\":\"/tmp/\",\"name\":\"tmp-volume\"},{\"mountPath\":\"/cache\",\"name\":\"cache-volume\"}]}]}}}}'"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html-single/rhacs_cloud_service/index |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, open a Jira issue that describes your concerns. Provide as much detail as possible so that your request can be addressed quickly. Prerequisites You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure To provide your feedback, perform the following steps: Click the following link: Create Issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide more details about the issue. Include the URL where you found the issue. Provide information for any other required fields. Allow all fields that contain default information to remain at the defaults. Click Create to create the Jira issue for the documentation team. A documentation issue will be created and routed to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/using_red_hat_discovery/proc-providing-feedback-on-redhat-documentation |
Chapter 39. Authentication and Interoperability | Chapter 39. Authentication and Interoperability Use of AD and LDAP sudo providers The Active Directory (AD) provider is a back end used to connect to an AD server. Starting with Red Hat Enterprise Linux 7.2, using the AD sudo provider together with the LDAP provider is available as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the [domain] section of the sssd.conf file. (BZ# 1068725 ) DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2: http://tools.ietf.org/html/rfc6781#section-2 Secure Domain Name System (DNS) Deployment Guide: http://dx.doi.org/10.6028/NIST.SP.800-81-2 DNSSEC Key Rollover Timing Considerations: http://tools.ietf.org/html/rfc7583 Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices described in the Red Hat Enterprise Linux Networking Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/ch-Configure_Host_Names.html#sec-Recommended_Naming_Practices . (BZ# 1115294 ) Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as Technology Preview. In Red Hat Enterprise Linux 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless if one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see https://access.redhat.com/articles/2728021 (BZ# 1298286 ) The Custodia secrets service provider is now available As a Technology Preview, you can now use Custodia, a secrets service provider. Custodia stores or serves as a proxy for secrets, such as keys or passwords. For details, see the upstream documentation at http://custodia.readthedocs.io . (BZ# 1403214 ) Containerized Identity Management server available as Technology Preview The rhel7/ipa-server container image is available as a Technology Preview feature. Note that the rhel7/sssd container image is now fully supported. For details, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/using_containerized_identity_management_services . (BZ# 1405325 , BZ#1405326) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology_previews_authentication_and_interoperability |
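As an illustration of the AD sudo provider setting described in the note above, a minimal sssd.conf stanza might look like the following. The domain name and the id_provider line are assumptions; only the sudo_provider=ad setting comes from the note itself.

[domain/ad.example.com]
id_provider = ad
sudo_provider = ad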
About Red Hat Developer Hub | About Red Hat Developer Hub Red Hat Developer Hub 1.4 Introduction to Red Hat Developer Hub Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/about_red_hat_developer_hub/index |
1.4. DM Multipath Components | 1.4. DM Multipath Components Table 1.1, "DM Multipath Components" . describes the components of DM Multipath. Table 1.1. DM Multipath Components Component Description dm_multipath kernel module Reroutes I/O and supports failover for paths and path groups. mpathconf utility Configures and enables device mapper multipathing. multipath command Lists and configures multipath devices. Normally started with /etc/rc.sysinit , it can also be started by a udev program whenever a block device is added. multipathd daemon Monitors paths; as paths fail and come back, it may initiate path group switches. Allows interactive changes to multipath devices. The daemon must be restarted following any changes to the /etc/multipath.conf file. kpartx command Creates device mapper devices for the partitions on a device. It is necessary to use this command for DOS-based partitions with DM Multipath. The kpartx command is provided in its own package, but the device-mapper-multipath package depends on it. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/MPIO_Components |
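The components in Table 1.1 are typically used together. The following is a minimal sketch of a first-time setup and inspection sequence; the device name mpatha is an assumption, and the options you need depend on your storage configuration.

# Enable device mapper multipathing and create a default /etc/multipath.conf.
mpathconf --enable --with_multipathd y

# List and configure multipath devices; -ll prints the current multipath topology.
multipath -ll

# Restart the daemon after any change to /etc/multipath.conf.
systemctl restart multipathd.service

# Create device mapper devices for the partitions on a multipath device.
kpartx -a /dev/mapper/mpatha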
Chapter 5. Using the management API | Chapter 5. Using the management API AMQ Broker has an extensive management API, which you can use to modify a broker's configuration, create new resources (for example, addresses and queues), inspect these resources (for example, how many messages are currently held in a queue), and interact with them (for example, to remove messages from a queue). In addition, clients can use the management API to manage the broker and subscribe to management notifications. 5.1. Methods for managing AMQ Broker using the management API There are two ways to use the management API to manage the broker: Using JMX - JMX is the standard way to manage Java applications Using the JMS API - management operations are sent to the broker using JMS messages and the AMQ JMS client Although there are two different ways to manage the broker, each API supports the same functionality. If it is possible to manage a resource using JMX it is also possible to achieve the same result by using JMS messages and the AMQ JMS client. This choice depends on your particular requirements, application settings, and environment. Regardless of the way you invoke management operations, the management API is the same. For each managed resource, there exists a Java interface describing what can be invoked for this type of resource. The broker exposes its managed resources in the org.apache.activemq.artemis.api.core.management package. The way to invoke management operations depends on whether JMX messages or JMS messages and the AMQ JMS client are used. Note Some management operations require a filter parameter to choose which messages are affected by the operation. Passing null or an empty string means that the management operation will be performed on all messages . 5.2. Managing AMQ Broker using JMX You can use Java Management Extensions (JMX) to manage a broker. The management API is exposed by the broker using MBeans interfaces. The broker registers its resources with the domain org.apache.activemq . For example, the ObjectName to manage a queue named exampleQueue is: org.apache.activemq.artemis:broker="__BROKER_NAME__",component=addresses,address="exampleQueue",subcomponent=queues,routingtype="anycast",queue="exampleQueue" The MBean is: org.apache.activemq.artemis.api.management.QueueControl The MBean's ObjectName is built using the helper class org.apache.activemq.artemis.api.core.management.ObjectNameBuilder . You can also use jconsole to find the ObjectName of the MBeans you want to manage. Managing the broker using JMX is identical to management of any Java applications using JMX. It can be done by reflection or by creating proxies of the MBeans. 5.2.1. Configuring JMX management By default, JMX is enabled to manage the broker. You can enable or disable JMX management by setting the jmx-management-enabled property in the broker.xml configuration file. Procedure Open the <broker-instance-dir> /etc/broker.xml configuration file. Set <jmx-management-enabled> . <jmx-management-enabled>true</jmx-management-enabled> If JMX is enabled, the broker can be managed locally using jconsole . Note Remote connections to JMX are not enabled by default for security reasons. If you want to manage multiple brokers from the same MBeanServer , configure the JMX domain for each of the brokers. By default, the broker uses the JMX domain org.apache.activemq.artemis . 
<jmx-domain>my.org.apache.activemq</jmx-domain> Note If you are using AMQ Broker on a Windows system, system properties must be set in artemis , or artemis.cmd . A shell script is located under <install-dir> /bin . Additional resources For more information on configuring the broker for remote management, see Oracle's Java Management Guide . 5.2.2. MBeanServer configuration When the broker runs in standalone mode, it uses the Java Virtual Machine's Platform MBeanServer to register its MBeans. By default, Jolokia is also deployed to allow access to the MBean server using REST. 5.2.3. How JMX is exposed with Jolokia By default, AMQ Broker ships with the Jolokia HTTP agent deployed as a web application. Jolokia is a remote JMX over HTTP bridge that exposes MBeans. Note To use Jolokia, the user must belong to the role defined by the hawtio.role system property in the <broker-instance-dir> /etc/artemis.profile configuration file. By default, this role is amq . Example 5.1. Using Jolokia to query the broker's version This example uses a Jolokia REST URL to find the version of a broker. The Origin flag should specify the domain name or DNS host name for the broker server. In addition, the value you specify for Origin must correspond to an entry for <allow-origin> in your Jolokia Cross-Origin Resource Sharing (CORS) specification. $ curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\"0.0.0.0\"/Version -H "Origin: mydomain.com" {"request":{"mbean":"org.apache.activemq.artemis:broker=\"0.0.0.0\"","attribute":"Version","type":"read"},"value":"2.4.0.amq-710002-redhat-1","timestamp":1527105236,"status":200} Additional resources For more information on using a JMX-HTTP bridge, see the Jolokia documentation . For more information on assigning a user to a role, see Adding Users . For more information on specifying Jolokia Cross-Origin Resource Sharing (CORS), see section 4.1.5 of Security . 5.2.4. Subscribing to JMX management notifications If JMX is enabled in your environment, you can subscribe to management notifications. Procedure Subscribe to ObjectName org.apache.activemq.artemis:broker=" <broker-name> " . Additional resources For more information about management notifications, see Section 5.5, "Management notifications" . 5.3. Managing AMQ Broker using the JMS API The Java Message Service (JMS) API allows you to create, send, receive, and read messages. You can use JMS and the AMQ JMS client to manage brokers. 5.3.1. Configuring broker management using JMS messages and the AMQ JMS Client To use JMS to manage a broker, you must first configure the broker's management address with the manage permission. Procedure Open the <broker-instance-dir> /etc/broker.xml configuration file. Add the <management-address> element, and specify a management address. By default, the management address is queue.activemq.management . You only need to specify a different address if you do not want to use the default. <management-address>my.management.address</management-address> Provide the management address with the manage user permission type. This permission type enables the management address to receive and handle management messages. <security-setting match="queue.activemq.management"> <permission type="manage" roles="admin"/> </security-setting> 5.3.2. Managing brokers using the JMS API and AMQ JMS Client To invoke management operations using JMS messages, the AMQ JMS client must instantiate the special management queue.
Procedure Create a QueueRequestor to send messages to the management address and receive replies. Create a Message . Use the helper class org.apache.activemq.artemis.api.jms.management.JMSManagementHelper to fill the message with the management properties. Send the message using the QueueRequestor . Use the helper class org.apache.activemq.artemis.api.jms.management.JMSManagementHelper to retrieve the operation result from the management reply. Example 5.2. Viewing the number of messages in a queue This example shows how to use the JMS API to view the number of messages in the JMS queue exampleQueue : Queue managementQueue = ActiveMQJMSClient.createQueue("activemq.management"); QueueSession session = ... QueueRequestor requestor = new QueueRequestor(session, managementQueue); connection.start(); Message message = session.createMessage(); JMSManagementHelper.putAttribute(message, "queue.exampleQueue", "messageCount"); Message reply = requestor.request(message); int count = (Integer)JMSManagementHelper.getResult(reply); System.out.println("There are " + count + " messages in exampleQueue"); 5.4. Management operations Whether you are using JMX or JMS messages to manage AMQ Broker, you can use the same API management operations. Using the management API, you can manage brokers, addresses, and queues. 5.4.1. Broker management operations You can use the management API to manage your brokers. Listing, creating, deploying, and destroying queues A list of deployed queues can be retrieved using the getQueueNames() method. Queues can be created or destroyed using the management operations createQueue() , deployQueue() , or destroyQueue() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). createQueue will fail if the queue already exists, while deployQueue will do nothing. Pausing and resuming queues The QueueControl can pause and resume the underlying queue. When a queue is paused, it will receive messages but will not deliver them. When it is resumed, it will begin delivering the queued messages, if any. Listing and closing remote connections Retrieve a client's remote addresses by using listRemoteAddresses() . It is also possible to close the connections associated with a remote address using the closeConnectionsForAddress() method. Alternatively, list connection IDs using listConnectionIDs() and list all the sessions for a given connection ID using listSessions() . Managing transactions In case of a broker crash, when the broker restarts, some transactions might require manual intervention. Use the following methods to help resolve issues you encounter. List the transactions which are in the prepared state (the transactions are represented as opaque Base64 Strings) using the listPreparedTransactions() method. Commit or roll back a given prepared transaction using commitPreparedTransaction() or rollbackPreparedTransaction() to resolve heuristic transactions. List heuristically completed transactions using the listHeuristicCommittedTransactions() and listHeuristicRolledBackTransactions() methods. Enabling and resetting message counters Enable and disable message counters using the enableMessageCounters() or disableMessageCounters() method. Reset message counters by using the resetAllMessageCounters() and resetAllMessageCounterHistories() methods.
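Any of the ActiveMQServerControl operations above can also be invoked over the Jolokia HTTP bridge described in Section 5.2.3. The following curl call is a minimal sketch that invokes resetAllMessageCounters(); the broker name, credentials, port, and Origin value mirror the earlier Jolokia example and are assumptions for your environment.

# Invoke the resetAllMessageCounters operation on the broker MBean through Jolokia.
curl -H "Origin: mydomain.com" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"type":"exec","mbean":"org.apache.activemq.artemis:broker=\"0.0.0.0\"","operation":"resetAllMessageCounters"}' \
  http://admin:admin@localhost:8161/console/jolokia/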
Retrieving broker configuration and attributes The ActiveMQServerControl exposes the broker's configuration through all its attributes (for example, getVersion() method to retrieve the broker's version, and so on). Listing, creating, and destroying Core Bridge and diverts List deployed Core Bridge and diverts using the getBridgeNames() and getDivertNames() methods respectively. Create or destroy using bridges and diverts using createBridge() and destroyBridge() or createDivert() and destroyDivert() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). Stopping the broker and forcing failover to occur with any currently attached clients Use the forceFailover() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ) Note Because this method actually stops the broker, you will likely receive an error. The exact error depends on the management service you used to call the method. 5.4.2. Address management operations You can use the management API to manage addresses. Manage addresses using the AddressControl class with ObjectName org.apache.activemq.artemis:broker=" <broker-name> ", component=addresses,address=" <address-name> " or the resource name address. <address-name> . Modify roles and permissions for an address using the addRole() or removeRole() methods. You can list all the roles associated with the queue with the getRoles() method. 5.4.3. Queue management operations You can use the management API to manage queues. The core management API deals with queues. The QueueControl class defines the queue management operations (with the ObjectName , org.apache.activemq.artemis:broker=" <broker-name> ",component=addresses,address=" <bound-address> ",subcomponent=queues,routing-type=" <routing-type> ",queue=" <queue-name> " or the resource name queue. <queue-name> ). Most of the management operations on queues take either a single message ID (for example, to remove a single message) or a filter (for example, to expire all messages with a given property). Expiring, sending to a dead letter address, and moving messages Expire messages from a queue using the expireMessages() method. If an expiry address is defined, messages are sent to this address, otherwise they are discarded. You can define the expiry address for an address or set of addresses (and hence the queues bound to those addresses) in the address-settings element of the broker.xml configuration file. For an example, see the "Default message address settings" section in Understanding the default broker configuration . Send messages to a dead letter address using the sendMessagesToDeadLetterAddress() method. This method returns the number of messages sent to the dead letter address. If a dead letter address is defined, messages are sent to this address, otherwise they are removed from the queue and discarded. You can define the dead letter address for an address or set of addresses (and hence the queues bound to those addresses) in the address-settings element of the broker.xml configuration file. For an example, see the "Default message address settings" section in Understanding the default broker configuration . Move messages from one queue to another using the moveMessages() method. Listing and removing messages List messages from a queue using the listMessages() method. It will return an array of Map , one Map for each message. 
Remove messages from a queue using the removeMessages() method, which returns a boolean for the single message ID variant or the number of removed messages for the filter variant. This method takes a filter argument to remove only filtered messages. Setting the filter to an empty string will in effect remove all messages. Counting messages The number of messages in a queue is returned by the getMessageCount() method. Alternatively, the countMessages() will return the number of messages in the queue which match a given filter. Changing message priority The message priority can be changed by using the changeMessagesPriority() method which returns a boolean for the single message ID variant or the number of updated messages for the filter variant. Message counters Message counters can be listed for a queue with the listMessageCounter() and listMessageCounterHistory() methods (see Section 5.6, "Using message counters" ). The message counters can also be reset for a single queue using the resetMessageCounter() method. Retrieving the queue attributes The QueueControl exposes queue settings through its attributes (for example, getFilter() to retrieve the queue's filter if it was created with one, isDurable() to know whether the queue is durable, and so on). Pausing and resuming queues The QueueControl can pause and resume the underlying queue. When a queue is paused, it will receive messages but will not deliver them. When it is resumed, it will begin delivering the queued messages, if any. 5.4.4. Remote resource management operations You can use the management API to start and stop a broker's remote resources (acceptors, diverts, bridges, and so on) so that the broker can be taken offline for a given period of time without stopping completely. Acceptors Start or stop an acceptor using the start() or. stop() method on the AcceptorControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=acceptors,name=" <acceptor-name> " or the resource name acceptor. <address-name> ). Acceptor parameters can be retrieved using the AcceptorControl attributes. See Network Connections: Acceptors and Connectors for more information about Acceptors. Diverts Start or stop a divert using the start() or stop() method on the DivertControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=diverts,name=" <divert-name> " or the resource name divert. <divert-name> ). Divert parameters can be retrieved using the DivertControl attributes. Bridges Start or stop a bridge using the start() (resp. stop() ) method on the BridgeControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=bridge,name=" <bridge-name> " or the resource name bridge. <bridge-name> ). Bridge parameters can be retrieved using the BridgeControl attributes. Broadcast groups Start or stop a broadcast group using the start() or stop() method on the BroadcastGroupControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=broadcast-group,name=" <broadcast-group-name> " or the resource name broadcastgroup. <broadcast-group-name> ). Broadcast group parameters can be retrieved using the BroadcastGroupControl attributes. See Broker discovery methods for more information. 
Discovery groups Start or stop a discovery group using the start() or stop() method on the DiscoveryGroupControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=discovery-group,name=" <discovery-group-name> " or the resource name discovery. <discovery-group-name> ). Discovery groups parameters can be retrieved using the DiscoveryGroupControl attributes. See Broker discovery methods for more information. Cluster connections Start or stop a cluster connection using the start() or stop() method on the ClusterConnectionControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=cluster-connection,name=" <cluster-connection-name> " or the resource name clusterconnection. <cluster-connection-name> ). Cluster connection parameters can be retrieved using the ClusterConnectionControl attributes. See Creating a broker cluster for more information. 5.5. Management notifications Below is a list of all the different kinds of notifications as well as which headers are on the messages. Every notification has a _AMQ_NotifType (value noted in parentheses) and _AMQ_NotifTimestamp header. The time stamp is the unformatted result of a call to java.lang.System.currentTimeMillis() . Notification type Headers BINDING_ADDED (0) _AMQ_Binding_Type _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Binding_ID _AMQ_Distance _AMQ_FilterString BINDING_REMOVED (1) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Binding_ID _AMQ_Distance _AMQ_FilterString CONSUMER_CREATED (2) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Distance _AMQ_ConsumerCount _AMQ_User _AMQ_RemoteAddress _AMQ_SessionName _AMQ_FilterString CONSUMER_CLOSED (3) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Distance _AMQ_ConsumerCount _AMQ_User _AMQ_RemoteAddress _AMQ_SessionName _AMQ_FilterString SECURITY_AUTHENTICATION_VIOLATION (6) _AMQ_User SECURITY_PERMISSION_VIOLATION (7) _AMQ_Address _AMQ_CheckType _AMQ_User DISCOVERY_GROUP_STARTED (8) name DISCOVERY_GROUP_STOPPED (9) name BROADCAST_GROUP_STARTED (10) name BROADCAST_GROUP_STOPPED (11) name BRIDGE_STARTED (12) name BRIDGE_STOPPED (13) name CLUSTER_CONNECTION_STARTED (14) name CLUSTER_CONNECTION_STOPPED (15) name ACCEPTOR_STARTED (16) factory id ACCEPTOR_STOPPED (17) factory id PROPOSAL (18) _JBM_ProposalGroupId _JBM_ProposalValue _AMQ_Binding_Type _AMQ_Address _AMQ_Distance PROPOSAL_RESPONSE (19) _JBM_ProposalGroupId _JBM_ProposalValue _JBM_ProposalAltValue _AMQ_Binding_Type _AMQ_Address _AMQ_Distance CONSUMER_SLOW (21) _AMQ_Address _AMQ_ConsumerCount _AMQ_RemoteAddress _AMQ_ConnectionName _AMQ_ConsumerName _AMQ_SessionName 5.6. Using message counters You use message counters to obtain information about queues over time. This helps you to identify trends that would otherwise be difficult to see. For example, you could use message counters to determine how a particular queue is being used over time. You could also attempt to obtain this information by using the management API to query the number of messages in the queue at regular intervals, but this would not show how the queue is actually being used. The number of messages in a queue can remain constant because no clients are sending or receiving messages on it, or because the number of messages sent to the queue is equal to the number of messages consumed from it. In both of these cases, the number of messages in the queue remains the same even though it is being used in very different ways. 5.6.1. 
5.6. Using message counters
You use message counters to obtain information about queues over time. This helps you to identify trends that would otherwise be difficult to see. For example, you could use message counters to determine how a particular queue is being used over time. You could also attempt to obtain this information by using the management API to query the number of messages in the queue at regular intervals, but this would not show how the queue is actually being used. The number of messages in a queue can remain constant because no clients are sending or receiving messages on it, or because the number of messages sent to the queue is equal to the number of messages consumed from it. In both of these cases, the number of messages in the queue remains the same even though it is being used in very different ways.
5.6.1. Types of message counters
Message counters provide additional information about queues on a broker.
count The total number of messages added to the queue since the broker was started.
countDelta The number of messages added to the queue since the last message counter update.
lastAckTimestamp The time stamp of the last time a message from the queue was acknowledged.
lastAddTimestamp The time stamp of the last time a message was added to the queue.
messageCount The current number of messages in the queue.
messageCountDelta The overall number of messages added to or removed from the queue since the last message counter update. For example, if messageCountDelta is -10 , then 10 messages overall have been removed from the queue.
updateTimestamp The time stamp of the last message counter update.
Note You can combine message counters to determine other meaningful data as well. For example, to know specifically how many messages were consumed from the queue since the last update, you would subtract the messageCountDelta from countDelta .
5.6.2. Enabling message counters
Message counters can have a small impact on the broker's memory; therefore, they are disabled by default. To use message counters, you must first enable them.
Procedure
Open the <broker-instance-dir> /etc/broker.xml configuration file.
Enable message counters. <message-counter-enabled>true</message-counter-enabled>
Set the message counter history and sampling period. <message-counter-max-day-history>7</message-counter-max-day-history> <message-counter-sample-period>60000</message-counter-sample-period>
message-counter-max-day-history The number of days the broker should store queue metrics. The default is 10 days.
message-counter-sample-period How often (in milliseconds) the broker should sample its queues to collect metrics. The default is 10000 milliseconds.
5.6.3. Retrieving message counters
You can use the management API to retrieve message counters.
Prerequisites
Message counters must be enabled on the broker. For more information, see Section 5.6.2, "Enabling message counters" .
Procedure
Use the management API to retrieve message counters. // Retrieve a connection to the broker's MBeanServer. MBeanServerConnection mbsc = ... JMSQueueControl queueControl = (JMSQueueControl)MBeanServerInvocationHandler.newProxyInstance(mbsc, on, JMSQueueControl.class, false); // Message counters are retrieved as a JSON string. String counters = queueControl.listMessageCounter(); // Use the MessageCounterInfo helper class to manipulate message counters more easily. MessageCounterInfo messageCounter = MessageCounterInfo.fromJSON(counters); System.out.format("%s message(s) in the queue (since last sample: %s)\n", messageCounter.getMessageCount(), messageCounter.getMessageCountDelta());
Additional resources For more information about message counters, see Section 5.4.3, "Queue management operations" . | [
"org.apache.activemq.artemis:broker=\"__BROKER_NAME__\",component=addresses,address=\"exampleQueue\",subcomponent=queues,routingtype=\"anycast\",queue=\"exampleQueue\"",
"org.apache.activemq.artemis.api.management.QueueControl",
"<jmx-management-enabled>true</jmx-management-enabled>",
"<jmx-domain>my.org.apache.activemq</jmx-domain>",
"curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\\\"0.0.0.0\\\"/Version -H \"Origin: mydomain.com\" {\"request\":{\"mbean\":\"org.apache.activemq.artemis:broker=\\\"0.0.0.0\\\"\",\"attribute\":\"Version\",\"type\":\"read\"},\"value\":\"2.4.0.amq-710002-redhat-1\",\"timestamp\":1527105236,\"status\":200}",
"<management-address>my.management.address</management-address>",
"<security-setting-match=\"queue.activemq.management\"> <permission-type=\"manage\" roles=\"admin\"/> </security-setting>",
"Queue managementQueue = ActiveMQJMSClient.createQueue(\"activemq.management\"); QueueSession session = QueueRequestor requestor = new QueueRequestor(session, managementQueue); connection.start(); Message message = session.createMessage(); JMSManagementHelper.putAttribute(message, \"queue.exampleQueue\", \"messageCount\"); Message reply = requestor.request(message); int count = (Integer)JMSManagementHelper.getResult(reply); System.out.println(\"There are \" + count + \" messages in exampleQueue\");",
"<message-counter-enabled>true</message-counter-enabled>",
"<message-counter-max-day-history>7</message-counter-max-day-history> <message-counter-sample-period>60000</message-counter-sample-period>",
"// Retrieve a connection to the broker's MBeanServer. MBeanServerConnection mbsc = JMSQueueControlMBean queueControl = (JMSQueueControl)MBeanServerInvocationHandler.newProxyInstance(mbsc, on, JMSQueueControl.class, false); // Message counters are retrieved as a JSON string. String counters = queueControl.listMessageCounter(); // Use the MessageCounterInfo helper class to manipulate message counters more easily. MessageCounterInfo messageCounter = MessageCounterInfo.fromJSON(counters); System.out.format(\"%s message(s) in the queue (since last sample: %s)\\n\", messageCounter.getMessageCount(), messageCounter.getMessageCountDelta());"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/managing_amq_broker/management-api-managing |
Chapter 7. Message delivery | Chapter 7. Message delivery
7.1. Writing to a streamed large message
To write to a large message, use the BytesMessage.writeBytes() method. The following example reads bytes from a file and writes them to a message:
Example: Writing to a streamed large message BytesMessage message = session.createBytesMessage(); File inputFile = new File(inputFilePath); InputStream inputStream = new FileInputStream(inputFile); int numRead; byte[] buffer = new byte[1024]; while ((numRead = inputStream.read(buffer, 0, buffer.length)) != -1) { message.writeBytes(buffer, 0, numRead); }
7.2. Reading from a streamed large message
To read from a large message, use the BytesMessage.readBytes() method. The following example reads bytes from a message and writes them to a file:
Example: Reading from a streamed large message BytesMessage message = (BytesMessage) consumer.receive(); File outputFile = new File(outputFilePath); OutputStream outputStream = new FileOutputStream(outputFile); int numRead; byte buffer[] = new byte[1024]; for (int pos = 0; pos < message.getBodyLength(); pos += buffer.length) { numRead = message.readBytes(buffer); outputStream.write(buffer, 0, numRead); }
7.3. Using message groups
Message groups are sets of messages that have the following characteristics:
Messages in a message group share the same group ID. That is, they have the same group identifier property. For JMS messages, the property is JMSXGroupID .
Messages in a message group are always consumed by the same consumer, even if there are many consumers on a queue. Another consumer is chosen to receive a message group if the original consumer is closed.
Message groups are useful when you want all messages for a certain value of the property to be processed serially by the same consumer. For example, you may want orders for any particular stock purchase to be processed serially by the same consumer. To do this, you could create a pool of consumers and then set the stock name as the value of the message property. This ensures that all messages for a particular stock are always processed by the same consumer.
Setting the group ID The examples below show how to use message groups with AMQ Core Protocol JMS.
Procedure If you are using JNDI to establish a JMS connection factory for your JMS client, add the groupID parameter and supply a value. All messages sent using this connection factory have the property JMSXGroupID set to the specified value. If you are not using JNDI, set the JMSXGroupID property using the setStringProperty() method. Message message = session.createTextMessage(); message.setStringProperty("JMSXGroupID", "MyGroup"); producer.send(message);
Additional resources See message-group and message-group2 under <install-dir> /examples/features/standard for working examples of how message groups are configured and used.
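The following is a minimal, self-contained sketch of the grouping behavior described above: two consumers compete on the same queue, but every message carrying the same JMSXGroupID value is delivered to the consumer that received the first message of the group. The broker URL, the anonymous connection, and the queue name exampleQueue are assumptions for illustration, not part of the documented examples.
Example: Grouped messages pinned to a single consumer (sketch)
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class MessageGroupSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = factory.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("exampleQueue");

            // Two competing consumers on the same queue.
            MessageConsumer consumerA = session.createConsumer(queue);
            MessageConsumer consumerB = session.createConsumer(queue);
            connection.start();

            // Every message in the "stock-ABC" group is pinned to whichever consumer receives the first one.
            MessageProducer producer = session.createProducer(queue);
            for (int i = 0; i < 5; i++) {
                TextMessage message = session.createTextMessage("order-" + i);
                message.setStringProperty("JMSXGroupID", "stock-ABC");
                producer.send(message);
            }

            // Only one of the two consumers receives messages for this group; the other times out.
            Message fromA = consumerA.receive(1000);
            Message fromB = consumerB.receive(1000);
            System.out.println("Consumer A got: " + fromA);
            System.out.println("Consumer B got: " + fromB);
        }
    }
}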
7.4. Using duplicate message detection
AMQ Broker includes automatic duplicate message detection, which filters out any duplicate messages it receives so you do not have to code your own duplicate detection logic. To enable duplicate message detection, provide a unique value for the message property _AMQ_DUPL_ID . When a broker receives a message, it checks whether _AMQ_DUPL_ID has a value. If it does, the broker then checks in its memory cache to see if it has already received a message with that value. If a message with the same value is found, the incoming message is ignored.
If you are sending messages in a transaction, you do not have to set _AMQ_DUPL_ID for every message in the transaction, but only in one of them. If the broker detects a duplicate message for any message in the transaction, it ignores the entire transaction.
Setting the duplicate ID message property The following example shows how to set the duplicate detection property using AMQ Core Protocol JMS. Note that for convenience, the clients use the value of the constant org.apache.activemq.artemis.api.core.Message.HDR_DUPLICATE_DETECTION_ID for the name of the duplicate ID property, _AMQ_DUPL_ID .
Procedure Set the value for _AMQ_DUPL_ID to a unique string value. Message jmsMessage = session.createMessage(); String myUniqueID = "This is my unique id"; jmsMessage.setStringProperty(HDR_DUPLICATE_DETECTION_ID.toString(), myUniqueID);
7.5. Using message interceptors
With AMQ Core Protocol JMS you can intercept packets entering or exiting the client, allowing you to audit packets or filter messages. Interceptors can change the packets that they intercept. This makes interceptors powerful, but also a feature that you should use with caution.
Interceptors must implement the intercept() method, which returns a boolean value. If the returned value is true , the message packet continues onward. If the returned value is false , the process is aborted, no other interceptors are called, and the message packet is not processed further.
Message interception occurs transparently to the main client code except when an outgoing packet is sent in blocking send mode. When an outgoing packet is sent with blocking enabled and that packet encounters an interceptor that returns false , an ActiveMQException is thrown to the caller. The thrown exception contains the name of the interceptor.
Your interceptor must implement the org.apache.activemq.artemis.api.core.Interceptor interface. The client interceptor classes and their dependencies must be added to the Java classpath of the client to be properly instantiated and invoked. package com.example; import org.apache.activemq.artemis.api.core.ActiveMQException; import org.apache.activemq.artemis.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This Packet has an acceptable size."); return true; } return false; } } | [
"BytesMessage message = session.createBytesMessage(); File inputFile = new File(inputFilePath); InputStream inputStream = new FileInputStream(inputFile); int numRead; byte[] buffer = new byte[1024]; while ((numRead = inputStream.read(buffer, 0, buffer.length)) != -1) { message.writeBytes(buffer, 0, numRead); }",
"BytesMessage message = (BytesMessage) consumer.receive(); File outputFile = new File(outputFilePath); OutputStream outputStream = new FileOutputStream(outputFile); int numRead; byte buffer[] = new byte[1024]; for (int pos = 0; pos < message.getBodyLength(); pos += buffer.length) { numRead = message.readBytes(buffer); outputStream.write(buffer, 0, numRead); }",
"java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?groupID=MyGroup",
"Message message = new TextMessage(); message.setStringProperty(\"JMSXGroupID\", \"MyGroup\"); producer.send(message);",
"Message jmsMessage = session.createMessage(); String myUniqueID = \"This is my unique id\"; message.setStringProperty(HDR_DUPLICATE_DETECTION_ID.toString(), myUniqueID);",
"package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This Packet has an acceptable size.\"); return true; } return false; } }"
] | https://docs.redhat.com/en/documentation/red_hat_amq_core_protocol_jms/7.11/html/using_amq_core_protocol_jms/message_delivery |