title | content | commands | url
---|---|---|---|
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.422_release_notes/making-open-source-more-inclusive |
Chapter 9. Policy enforcers | Chapter 9. Policy enforcers A Policy Enforcement Point (PEP) is a design pattern and, as such, you can implement it in different ways. Red Hat build of Keycloak provides all the necessary means to implement PEPs for different platforms, environments, and programming languages. Red Hat build of Keycloak Authorization Services presents a RESTful API and leverages OAuth2 authorization capabilities for fine-grained authorization using a centralized authorization server. The policy enforcers provided by Red Hat build of Keycloak are: Java policy enforcer - for use in Java client applications JavaScript policy enforcer - for use in applications secured by the Red Hat build of Keycloak JavaScript adapter 9.1. JavaScript integration for Policy Enforcer The Red Hat build of Keycloak Server comes with a JavaScript library you can use to interact with a resource server protected by a policy enforcer. This library is based on the Red Hat build of Keycloak JavaScript adapter, which can be integrated to allow your client to obtain permissions from a Red Hat build of Keycloak Server. You can obtain this library by installing it from NPM : npm install keycloak-js Once the library is installed, you can create a KeycloakAuthorization instance as follows: import Keycloak from "keycloak-js"; import KeycloakAuthorization from "keycloak-js/authz"; const keycloak = new Keycloak({ url: "http://keycloak-server", realm: "my-realm", clientId: "my-app" }); const authorization = new KeycloakAuthorization(keycloak); await keycloak.init(); // Now you can use the authorization object to interact with the server. The keycloak-js/authz library provides two main features: Obtain permissions from the server using a permission ticket, if you are accessing a UMA protected resource server. Obtain permissions from the server by sending the resources and scopes the application wants to access. In both cases, the library allows you to easily interact with both the resource server and Red Hat build of Keycloak Authorization Services to obtain tokens with permissions your client can use as bearer tokens to access the protected resources on a resource server. 9.1.1. Handling authorization responses from a UMA-Protected resource server If a resource server is protected by a policy enforcer, it responds to client requests based on the permissions carried along with a bearer token. Typically, when you try to access a resource server with a bearer token that is lacking permissions to access a protected resource, the resource server responds with a 401 status code and a WWW-Authenticate header. HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm="${realm}", as_uri="https://${host}:${port}/realms/${realm}", ticket="016f84e8-f9b9-11e0-bd6f-0021cc6004de" See UMA Authorization Process for more information. What your client needs to do is extract the permission ticket from the WWW-Authenticate header returned by the resource server and use the library to send an authorization request as follows: // prepare an authorization request with the permission ticket const authorizationRequest = { ticket }; // send the authorization request, if successful retry the request authorization.authorize(authorizationRequest).then((rpt) => { // onGrant }, () => { // onDeny }, () => { // onError }); The authorize function is completely asynchronous and supports a few callback functions to receive notifications from the server: onGrant : The first argument of the function. 
If authorization succeeds and the server returns an RPT with the requested permissions, the callback receives the RPT. onDeny : The second argument of the function. Only called if the server has denied the authorization request. onError : The third argument of the function. Only called if the server responds unexpectedly. Most applications should use the onGrant callback to retry a request after a 401 response. Subsequent requests should include the RPT as a bearer token for retries. 9.1.2. Obtaining entitlements The keycloak-js/authz library provides an entitlement function that you can use to obtain an RPT from the server by providing the resources and scopes your client wants to access. Example of how to obtain an RPT with permissions for all resources and scopes the user can access authorization.entitlement("my-resource-server-id").then((rpt) => { // onGrant callback function. // If authorization was successful you'll receive an RPT // with the necessary permissions to access the resource server }); Example of how to obtain an RPT with permissions for specific resources and scopes authorization.entitlement("my-resource-server", { permissions: [ { id: "Some Resource" } ] }).then((rpt) => { // onGrant }); When using the entitlement function, you must provide the client_id of the resource server you want to access. The entitlement function is completely asynchronous and supports a few callback functions to receive notifications from the server: onGrant : The first argument of the function. If authorization succeeds and the server returns an RPT with the requested permissions, the callback receives the RPT. onDeny : The second argument of the function. Only called if the server has denied the authorization request. onError : The third argument of the function. Only called if the server responds unexpectedly. 9.1.3. Authorization request Both authorize and entitlement functions accept an authorization request object. This object can be set with the following properties: permissions An array of objects representing the resources and scopes. For instance: const authorizationRequest = { permissions: [ { id: "Some Resource", scopes: ["view", "edit"] } ] } metadata An object whose properties define how the authorization request should be processed by the server. response_include_resource_name A boolean value indicating to the server whether resource names should be included in the RPT's permissions. If false, only the resource identifier is included. response_permissions_limit An integer N that defines a limit on the number of permissions an RPT can have. When used together with the rpt parameter, only the last N requested permissions are kept in the RPT. submit_request A boolean value indicating whether the server should create permission requests for the resources and scopes referenced by a permission ticket. This parameter only takes effect when used together with the ticket parameter as part of a UMA authorization process. 9.1.4. Obtaining the RPT If you have already obtained an RPT using any of the authorization functions provided by the library, you can always obtain the RPT as follows from the authorization object (assuming that it has been initialized by one of the techniques shown earlier): const rpt = authorization.rpt; | [
"npm install keycloak-js",
"import Keycloak from \"keycloak-js\"; import KeycloakAuthorization from \"keycloak-js/authz\"; const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); const authorization = new KeycloakAuthorization(keycloak); await keycloak.init(); // Now you can use the authorization object to interact with the server.",
"HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm=\"USD{realm}\", as_uri=\"https://USD{host}:USD{port}/realms/USD{realm}\", ticket=\"016f84e8-f9b9-11e0-bd6f-0021cc6004de\"",
"// prepare a authorization request with the permission ticket const authorizationRequest = { ticket }; // send the authorization request, if successful retry the request authorization.authorize(authorizationRequest).then((rpt) => { // onGrant }, () => { // onDeny }, () => { // onError });",
"authorization.entitlement(\"my-resource-server-id\").then((rpt) => { // onGrant callback function. // If authorization was successful you'll receive an RPT // with the necessary permissions to access the resource server });",
"authorization.entitlement(\"my-resource-server\", { permissions: [ { id: \"Some Resource\" } ] }).then((rpt) => { // onGrant });",
"const authorizationRequest = { permissions: [ { id: \"Some Resource\", scopes: [\"view\", \"edit\"] } ] }",
"const rpt = authorization.rpt;"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/authorization_services_guide/enforcer_overview |
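To make the retry flow described in Chapter 9 above concrete, the following is a minimal sketch of a client-side helper that extracts the permission ticket from the WWW-Authenticate challenge and exchanges it for an RPT before retrying the request. The `fetchWithUma` name, the header-parsing regular expression, and the use of `fetch` are illustrative assumptions and not part of the Red Hat build of Keycloak API; only `authorization.authorize()` and its onGrant/onDeny/onError callback form come from the chapter.

```javascript
// Hypothetical helper (not part of the keycloak-js/authz API): retry a request
// against a UMA-protected resource server after obtaining an RPT.
async function fetchWithUma(authorization, url, options = {}) {
  const response = await fetch(url, options);
  if (response.status !== 401) {
    return response;
  }
  // Example challenge: UMA realm="...", as_uri="...", ticket="..."
  const challenge = response.headers.get("WWW-Authenticate") || "";
  const match = challenge.match(/ticket="([^"]+)"/);
  if (!match) {
    return response; // not a UMA challenge, nothing to exchange
  }
  // Exchange the permission ticket for an RPT; authorize() accepts the
  // onGrant, onDeny, and onError callbacks shown in the chapter.
  const rpt = await new Promise((resolve, reject) => {
    authorization.authorize({ ticket: match[1] }).then(resolve, reject, reject);
  });
  // Retry the original request with the RPT as a bearer token.
  const headers = new Headers(options.headers || {});
  headers.set("Authorization", `Bearer ${rpt}`);
  return fetch(url, { ...options, headers });
}
```

Note that in a browser the WWW-Authenticate header is only readable if the resource server exposes it through CORS (for example, via Access-Control-Expose-Headers).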
Chapter 7. References | Chapter 7. References 7.1. Red Hat Configuring RHEL 8 for SAP HANA2 installation Configuring and managing high availability clusters Support Policies for RHEL High Availability Clusters Support Policies for RHEL High Availability Clusters - Fencing/STONITH Support Policies for RHEL High Availability Clusters - Management of SAP HANA in a Cluster Red Hat HA Solutions for SAP HANA, S/4HANA and NetWeaver based SAP Applications Configuring quorum devices The Systemd-Based SAP Startup Framework Why does the stop operation of a SAPHana resource agent fail when the systemd based SAP startup framework is enabled? 7.2. SAP SAP HANA Server Installation and Update Guide SAP HANA System Replication Implementing a HA/DR Provider SAP Note 2057595 - FAQ: SAP HANA High Availability SAP Note 2063657 - SAP HANA System Replication Takeover Decision Guideline SAP Note 2235581 - SAP HANA: Supported Operating Systems SAP Note 2369981 - Required configuration steps for authentication with HANA System Replication SAP Note 2972496 - SAP HANA Filesystem Types SAP Note 3007062 - FAQ: SAP HANA & Third Party Cluster Solutions SAP Note 3115048 - sapstartsrv with native Linux systemd support SAP Note 3139184 - Linux: systemd integration for sapstartsrv and SAP Host Agent SAP Note 3189534 - Linux: systemd integration for sapstartsrv and SAP HANA 7.3. Other Be Prepared for Using Pacemaker Cluster for SAP HANA - Part 1: Basics Be Prepared for Using Pacemaker Cluster for SAP HANA - Part 2: Failure of Both Nodes | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/asmb_references_automating-sap-hana-scale-out |
Chapter 17. Customizing the system in the installer | Chapter 17. Customizing the system in the installer During the customization phase of the installation, you must perform certain configuration tasks to enable the installation of Red Hat Enterprise Linux. These tasks include: Configuring storage and assigning mount points. Selecting a base environment with software to be installed. Setting a password for the root user or creating a local user. Optionally, you can further customize the system, for example, by configuring system settings and connecting the host to a network. 17.1. Setting the installer language You can select the language to be used by the installation program before starting the installation. Prerequisites You have created installation media. You have specified an installation source if you are using the Boot ISO image file. You have booted the installation. Procedure After you select the Install Red Hat Enterprise Linux option from the boot menu, the Welcome to Red Hat Enterprise Linux screen appears. From the left-hand pane of the Welcome to Red Hat Enterprise Linux window, select a language. Alternatively, search for the preferred language by using the text box. Note A language is pre-selected by default. If network access is configured, that is, if you booted from a network server instead of local media, the pre-selected language is determined by the automatic location detection feature of the GeoIP module. If you use the inst.lang= option on the boot command line or in your PXE server configuration, then the language that you define with the boot option is selected. From the right-hand pane of the Welcome to Red Hat Enterprise Linux window, select a location specific to your region. Click Continue to proceed to the graphical installation window. If you are installing a pre-release version of Red Hat Enterprise Linux, a warning message is displayed about the pre-release status of the installation media. To continue with the installation, click I want to proceed . To quit the installation and reboot the system, click I want to exit . 17.2. Configuring the storage devices You can install Red Hat Enterprise Linux on a large variety of storage devices. You can configure basic, locally accessible storage devices in the Installation Destination window. Basic storage devices directly connected to the local system, such as disks and solid-state drives, are displayed in the Local Standard Disks section of the window. On 64-bit IBM Z, this section contains activated Direct Access Storage Devices (DASDs). Warning A known issue prevents DASDs configured as HyperPAV aliases from being automatically attached to the system after the installation is complete. These storage devices are available during the installation, but are not immediately accessible after you finish installing and reboot. To attach HyperPAV alias devices, add them manually to the /etc/dasd.conf configuration file of the system. 17.2.1. Configuring installation destination You can use the Installation Destination window to configure the storage options, for example, the disks that you want to use as the installation target for your Red Hat Enterprise Linux installation. You must select at least one disk. Prerequisites The Installation Summary window is open. Ensure that you back up your data if you plan to use a disk that already contains data. 
For example, if you want to shrink an existing Microsoft Windows partition and install Red Hat Enterprise Linux as a second system, or if you are upgrading a release of Red Hat Enterprise Linux. Manipulating partitions always carries a risk. For example, if the process is interrupted or fails for any reason, data on the disk can be lost. Procedure From the Installation Summary window, click Installation Destination . Perform the following operations in the Installation Destination window that opens: From the Local Standard Disks section, select the storage device that you require; a white check mark indicates your selection. Disks without a white check mark are not used during the installation process; they are ignored if you choose automatic partitioning, and they are not available in manual partitioning. The Local Standard Disks section shows all locally available storage devices, for example, SATA, IDE and SCSI disks, USB flash and external disks. Any storage devices connected after the installation program has started are not detected. If you use a removable drive to install Red Hat Enterprise Linux, your system is unusable if you remove the device. Optional: Click the Refresh link in the lower right-hand side of the window if you connected new disks and want to configure additional local storage devices. The Rescan Disks dialog box opens. Click Rescan Disks and wait until the scanning process completes. All storage changes that you make during the installation are lost when you click Rescan Disks . Click OK to return to the Installation Destination window. All detected disks including any new ones are displayed under the Local Standard Disks section. Optional: Click Add a disk to add a specialized storage device. The Storage Device Selection window opens and lists all storage devices that the installation program has access to. Optional: Under Storage Configuration , select the Automatic radio button for automatic partitioning. You can also configure custom partitioning. For more details, see Configuring manual partitioning . Optional: Select I would like to make additional space available to reclaim space from an existing partitioning layout. For example, if a disk you want to use already has a different operating system and you want to make this system's partitions smaller to allow more room for Red Hat Enterprise Linux. Optional: Select Encrypt my data to encrypt all partitions except the ones needed to boot the system (such as /boot ) using Linux Unified Key Setup (LUKS). Encrypting your disk adds an extra layer of security. Click Done . The Disk Encryption Passphrase dialog box opens. Type your passphrase in the Passphrase and Confirm fields. Click Save Passphrase to complete disk encryption. Warning If you lose the LUKS passphrase, any encrypted partitions and their data are completely inaccessible. There is no way to recover a lost passphrase. However, if you perform a Kickstart installation, you can save encryption passphrases and create backup encryption passphrases during the installation. For more information, see the Automatically installing RHEL document. Optional: Click the Full disk summary and bootloader link in the lower left-hand side of the window to select which storage device contains the boot loader. For more information, see Configuring boot loader . In most cases it is sufficient to leave the boot loader in the default location. Some configurations, for example, systems that require chain loading from another boot loader, require the boot drive to be specified manually. 
Click Done . Optional: The Reclaim Disk Space dialog box appears if you selected automatic partitioning and the I would like to make additional space available option, or if there is not enough free space on the selected disks to install Red Hat Enterprise Linux. It lists all configured disk devices and all partitions on those devices. The dialog box displays information about the minimal disk space the system needs for an installation with the currently selected package set and how much space you have reclaimed. To start the reclaiming process: Review the displayed list of available storage devices. The Reclaimable Space column shows how much space can be reclaimed from each entry. Select a disk or partition to reclaim space. Use the Shrink button to use free space on a partition while preserving the existing data. Use the Delete button to delete that partition or all partitions on a selected disk including existing data. Use the Delete all button to delete all existing partitions on all disks including existing data and make this space available to install Red Hat Enterprise Linux. Click Reclaim space to apply the changes and return to graphical installations. No disk changes are made until you click Begin Installation on the Installation Summary window. The Reclaim Space dialog only marks partitions for resizing or deletion; no action is performed. Additional resources How to use dm-crypt on IBM Z, LinuxONE and with the PAES cipher 17.2.2. Special cases during installation destination configuration Following are some special cases to consider when you are configuring installation destinations: Some BIOS types do not support booting from a RAID card. In these instances, the /boot partition must be created on a partition outside of the RAID array, such as on a separate disk. It is necessary to use an internal disk for partition creation with problematic RAID cards. A /boot partition is also necessary for software RAID setups. If you choose to partition your system automatically, you should manually edit your /boot partition. To configure the Red Hat Enterprise Linux boot loader to chain load from a different boot loader, you must specify the boot drive manually by clicking the Full disk summary and bootloader link from the Installation Destination window. When you install Red Hat Enterprise Linux on a system with both multipath and non-multipath storage devices, the automatic partitioning layout in the installation program creates volume groups that contain a mix of multipath and non-multipath devices. This defeats the purpose of multipath storage. Select either multipath or non-multipath devices on the Installation Destination window. Alternatively, proceed to manual partitioning. 17.2.3. Configuring boot loader Red Hat Enterprise Linux uses GRand Unified Bootloader version 2 ( GRUB2 ) as the boot loader for AMD64 and Intel 64, IBM Power Systems, and ARM. For 64-bit IBM Z, the zipl boot loader is used. The boot loader is the first program that runs when the system starts and is responsible for loading and transferring control to an operating system. GRUB2 can boot any compatible operating system (including Microsoft Windows) and can also use chain loading to transfer control to other boot loaders for unsupported operating systems. Warning Installing GRUB2 may overwrite your existing boot loader. If an operating system is already installed, the Red Hat Enterprise Linux installation program attempts to automatically detect and configure the boot loader to start the other operating system. 
If the boot loader is not detected, you can manually configure any additional operating systems after you finish the installation. If you are installing a Red Hat Enterprise Linux system with more than one disk, you might want to manually specify the disk where you want to install the boot loader. Procedure From the Installation Destination window, click the Full disk summary and bootloader link. The Selected Disks dialog box opens. The boot loader is installed on the device of your choice, or on a UEFI system; the EFI system partition is created on the target device during guided partitioning. To change the boot device, select a device from the list and click Set as Boot Device . You can set only one device as the boot device. To disable a new boot loader installation, select the device currently marked for boot and click Do not install boot loader . This ensures GRUB2 is not installed on any device. Warning If you choose not to install a boot loader, you cannot boot the system directly and you must use another boot method, such as a standalone commercial boot loader application. Use this option only if you have another way to boot your system. The boot loader may also require a special partition to be created, depending on if your system uses BIOS or UEFI firmware, or if the boot drive has a GUID Partition Table (GPT) or a Master Boot Record (MBR, also known as msdos ) label. If you use automatic partitioning, the installation program creates the partition. 17.2.4. Storage device selection The storage device selection window lists all storage devices that the installation program can access. Depending on your system and available hardware, some tabs might not be displayed. The devices are grouped under the following tabs: Multipath Devices Storage devices accessible through more than one path, such as through multiple SCSI controllers or Fiber Channel ports on the same system. The installation program only detects multipath storage devices with serial numbers that are 16 or 32 characters long. Other SAN Devices Devices available on a Storage Area Network (SAN). Firmware RAID Storage devices attached to a firmware RAID controller. NVDIMM Devices Under specific circumstances, Red Hat Enterprise Linux 8 can boot and run from (NVDIMM) devices in sector mode on the Intel 64 and AMD64 architectures. IBM Z Devices Storage devices, or Logical Units (LUNs), DASD, attached through the zSeries Linux FCP (Fiber Channel Protocol) driver. 17.2.5. Filtering storage devices In the storage device selection window you can filter storage devices either by their World Wide Identifier (WWID) or by the port, target, or logical unit number (LUN). Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click the Search by tab to search by port, target, LUN, or WWID. Searching by WWID or LUN requires additional values in the corresponding input text fields. Select the option that you require from the Search drop-down menu. Click Find to start the search. Each device is presented on a separate row with a corresponding check box. Select the check box to enable the device that you require during the installation process. 
Later in the installation process you can choose to install Red Hat Enterprise Linux on any of the selected devices, and you can choose to mount any of the other selected devices as part of the installed system automatically. Selected devices are not automatically erased by the installation process and selecting a device does not put the data stored on the device at risk. Note You can add devices to the system after installation by modifying the /etc/fstab file. Click Done to return to the Installation Destination window. Any storage devices that you do not select are hidden from the installation program entirely. To chain load the boot loader from a different boot loader, select all the devices present. 17.2.6. Using advanced storage options To use an advanced storage device, you can configure an iSCSI (SCSI over TCP/IP) target or FCoE (Fibre Channel over Ethernet) SAN (Storage Area Network). To use iSCSI storage devices for the installation, the installation program must be able to discover them as iSCSI targets and be able to create an iSCSI session to access them. Each of these steps might require a user name and password for Challenge Handshake Authentication Protocol (CHAP) authentication. Additionally, you can configure an iSCSI target to authenticate the iSCSI initiator on the system to which the target is attached (reverse CHAP), both for discovery and for the session. Used together, CHAP and reverse CHAP are called mutual CHAP or two-way CHAP. Mutual CHAP provides the greatest level of security for iSCSI connections, particularly if the user name and password are different for CHAP authentication and reverse CHAP authentication. Repeat the iSCSI discovery and iSCSI login steps to add all required iSCSI storage. You cannot change the name of the iSCSI initiator after you attempt discovery for the first time. To change the iSCSI initiator name, you must restart the installation. 17.2.6.1. Discovering and starting an iSCSI session The Red Hat Enterprise Linux installer can discover and log in to iSCSI disks in two ways: iSCSI Boot Firmware Table (iBFT) When the installer starts, it checks if the BIOS or add-on boot ROMs of the system support iBFT. It is a BIOS extension for systems that can boot from iSCSI. If the BIOS supports iBFT, the installer reads the iSCSI target information for the configured boot disk from the BIOS and logs in to this target, making it available as an installation target. To automatically connect to an iSCSI target, activate a network device for accessing the target. To do so, use the ip=ibft boot option. For more information, see Network boot options . Discover and add iSCSI targets manually You can discover and start an iSCSI session to identify available iSCSI targets (network storage devices) in the installer's graphical user interface. Prerequisites The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add iSCSI target . The Add iSCSI Storage Target window opens. Important You cannot place the /boot partition on iSCSI targets that you have manually added using this method - an iSCSI target containing a /boot partition must be configured for use with iBFT. 
However, in instances where the installed system is expected to boot from iSCSI with iBFT configuration provided by a method other than firmware iBFT, for example using iPXE, you can remove the /boot partition restriction using the inst.nonibftiscsiboot installer boot option. Enter the IP address of the iSCSI target in the Target IP Address field. Type a name in the iSCSI Initiator Name field for the iSCSI initiator in iSCSI qualified name (IQN) format. A valid IQN entry contains the following information: The string iqn. (note the period). A date code that specifies the year and month in which your organization's Internet domain or subdomain name was registered, represented as four digits for the year, a dash, and two digits for the month, followed by a period. For example, represent September 2010 as 2010-09. Your organization's Internet domain or subdomain name, presented in reverse order with the top-level domain first. For example, represent the subdomain storage.example.com as com.example.storage . A colon followed by a string that uniquely identifies this particular iSCSI initiator within your domain or subdomain. For example :diskarrays-sn-a8675309 . A complete IQN is as follows: iqn.2010-09.storage.example.com:diskarrays-sn-a8675309 . The installation program pre populates the iSCSI Initiator Name field with a name in this format to help you with the structure. For more information about IQNs, see 3.2.6. iSCSI Names in RFC 3720 - Internet Small Computer Systems Interface (iSCSI) available from tools.ietf.org and 1. iSCSI Names and Addresses in RFC 3721 - Internet Small Computer Systems Interface (iSCSI) Naming and Discovery available from tools.ietf.org. Select the Discovery Authentication Type drop-down menu to specify the type of authentication to use for iSCSI discovery. The following options are available: No credentials CHAP pair CHAP pair and a reverse pair Do one of the following: If you selected CHAP pair as the authentication type, enter the user name and password for the iSCSI target in the CHAP Username and CHAP Password fields. If you selected CHAP pair and a reverse pair as the authentication type, enter the user name and password for the iSCSI target in the CHAP Username and CHAP Password field, and the user name and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Optional: Select the Bind targets to network interfaces check box. Click Start Discovery . The installation program attempts to discover an iSCSI target based on the information provided. If discovery succeeds, the Add iSCSI Storage Target window displays a list of all iSCSI nodes discovered on the target. Select the check boxes for the node that you want to use for installation. The Node login authentication type menu contains the same options as the Discovery Authentication Type menu. However, if you need credentials for discovery authentication, use the same credentials to log in to a discovered node. Click the additional Use the credentials from discovery drop-down menu. When you provide the proper credentials, the Log In button becomes available. Click Log In to initiate an iSCSI session. While the installer uses iscsiadm to find and log into iSCSI targets, iscsiadm automatically stores any information about these targets in the iscsiadm iSCSI database. The installer then copies this database to the installed system and marks any iSCSI targets that are not used for root partition, so that the system automatically logs in to them when it starts. 
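For reference, the discovery and login steps that the installer performs through iscsiadm can also be run manually on an installed system. This is a hedged sketch rather than part of the installation procedure; the portal address, IQN, and CHAP credentials below are placeholders.

```bash
# Discover targets advertised by a portal (placeholder address).
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Optionally configure CHAP credentials for a discovered node (placeholder values).
iscsiadm -m node -T iqn.2010-09.com.example:diskarrays-sn-a8675309 \
  -p 192.0.2.10:3260 -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2010-09.com.example:diskarrays-sn-a8675309 \
  -p 192.0.2.10:3260 -o update -n node.session.auth.username -v myuser
iscsiadm -m node -T iqn.2010-09.com.example:diskarrays-sn-a8675309 \
  -p 192.0.2.10:3260 -o update -n node.session.auth.password -v mypassword

# Log in to the target; the session details are recorded in the iscsiadm database.
iscsiadm -m node -T iqn.2010-09.com.example:diskarrays-sn-a8675309 \
  -p 192.0.2.10:3260 --login
```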
If the root partition is placed on an iSCSI target, initrd logs into this target and the installer does not include this target in start up scripts to avoid multiple attempts to log into the same target. 17.2.6.2. Configuring FCoE parameters You can discover the FCoE (Fibre Channel over Ethernet) devices from the Installation Destination window by configuring the FCoE parameters accordingly. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add FCoE SAN . A dialog box opens for you to configure network interfaces for discovering FCoE storage devices. Select a network interface that is connected to an FCoE switch in the NIC drop-down menu. Click Add FCoE disk(s) to scan the network for SAN devices. Select the required check boxes: Use DCB: Data Center Bridging (DCB) is a set of enhancements to the Ethernet protocols designed to increase the efficiency of Ethernet connections in storage networks and clusters. Select the check box to enable or disable the installation program's awareness of DCB. Enable this option only for network interfaces that require a host-based DCBX client. For configurations on interfaces that use a hardware DCBX client, disable the check box. Use auto vlan: Auto VLAN is enabled by default and indicates whether VLAN discovery should be performed. If this check box is enabled, then the FIP (FCoE Initiation Protocol) VLAN discovery protocol runs on the Ethernet interface when the link configuration has been validated. If they are not already configured, network interfaces for any discovered FCoE VLANs are automatically created and FCoE instances are created on the VLAN interfaces. Discovered FCoE devices are displayed under the Other SAN Devices tab in the Installation Destination window. 17.2.6.3. Configuring DASD storage devices You can discover and configure the DASD storage devices from the Installation Destination window. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add DASD ECKD . The Add DASD Storage Target dialog box opens and prompts you to specify a device number, such as 0.0.0204 , and attach additional DASDs that were not detected when the installation started. Type the device number of the DASD that you want to attach in the Device number field. Click Start Discovery . If a DASD with the specified device number is found and if it is not already attached, the dialog box closes and the newly-discovered drives appear in the list of drives. You can then select the check boxes for the required devices and click Done . The new DASDs are available for selection, marked as DASD device 0.0. xxxx in the Local Standard Disks section of the Installation Destination window. If you entered an invalid device number, or if the DASD with the specified device number is already attached to the system, an error message appears in the dialog box, explaining the error and prompting you to try again with a different device number. Additional resources Preparing an ECKD type DASD for use 17.2.6.4. 
Configuring FCP devices FCP devices enable 64-bit IBM Z to use SCSI devices rather than, or in addition to, Direct Access Storage Device (DASD) devices. FCP devices provide a switched fabric topology that enables 64-bit IBM Z systems to use SCSI LUNs as disk devices in addition to traditional DASD devices. Prerequisites The Installation Summary window is open. For an FCP-only installation, you have removed the DASD= option from the CMS configuration file or the rd.dasd= option from the parameter file to indicate that no DASD is present. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add ZFCP LUN . The Add zFCP Storage Target dialog box opens allowing you to add a FCP (Fibre Channel Protocol) storage device. 64-bit IBM Z requires that you enter any FCP device manually so that the installation program can activate FCP LUNs. You can enter FCP devices either in the graphical installation, or as a unique parameter entry in the parameter or CMS configuration file. The values that you enter must be unique to each site that you configure. Type the 4 digit hexadecimal device number in the Device number field. When installing RHEL-8.6 or older releases or if the zFCP device is not configured in NPIV mode, or when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter, provide the following values: Type the 16 digit hexadecimal World Wide Port Number (WWPN) in the WWPN field. Type the 16 digit hexadecimal FCP LUN identifier in the LUN field. Click Start Discovery to connect to the FCP device. The newly-added devices are displayed in the IBM Z tab of the Installation Destination window. Use only lower-case letters in hex values. If you enter an incorrect value and click Start Discovery , the installation program displays a warning. You can edit the configuration information and retry the discovery attempt. For more information about these values, consult the hardware documentation and check with your system administrator. 17.2.7. Installing to an NVDIMM device Non-Volatile Dual In-line Memory Module (NVDIMM) devices combine the performance of RAM with disk-like data persistence when no power is supplied. Under specific circumstances, Red Hat Enterprise Linux 8 can boot and run from NVDIMM devices. 17.2.7.1. Criteria for using an NVDIMM device as an installation target You can install Red Hat Enterprise Linux 8 to Non-Volatile Dual In-line Memory Module (NVDIMM) devices in sector mode on the Intel 64 and AMD64 architectures, supported by the nd_pmem driver. Conditions for using an NVDIMM device as storage To use an NVDIMM device as storage, the following conditions must be satisfied: The architecture of the system is Intel 64 or AMD64. The NVDIMM device is configured to sector mode. The installation program can reconfigure NVDIMM devices to this mode. The NVDIMM device must be supported by the nd_pmem driver. Conditions for booting from an NVDIMM Device Booting from an NVDIMM device is possible under the following conditions: All conditions for using the NVDIMM device as storage are satisfied. The system uses UEFI. The NVDIMM device must be supported by firmware available on the system, or by an UEFI driver. The UEFI driver may be loaded from an option ROM of the device itself. The NVDIMM device must be made available under a namespace. 
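The sector-mode condition described above can be checked, and if necessary applied, from a running system with the ndctl utility. This is a hedged sketch and is not part of the installation procedure; the namespace name is a placeholder, and reconfiguring a namespace destroys the data stored on it.

```bash
# List existing NVDIMM namespaces and their modes.
ndctl list --namespaces

# Reconfigure an existing namespace (placeholder name) to sector mode.
# This is destructive: all data on the namespace is lost.
ndctl create-namespace --force --reconfig=namespace0.0 --mode=sector
```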
To utilize the high performance of NVDIMM devices during booting, place the /boot and /boot/efi directories on the device. The Execute-in-place (XIP) feature of NVDIMM devices is not supported during booting and the kernel is loaded into conventional memory. 17.2.7.2. Configuring an NVDIMM device using the graphical installation mode A Non-Volatile Dual In-line Memory Module (NVDIMM) device must be properly configured for use by Red Hat Enterprise Linux 8 using the graphical installation. Warning The NVDIMM device reconfiguration process destroys any data stored on the device. Prerequisites An NVDIMM device is present on the system and satisfies all the other conditions for usage as an installation target. The installation has booted and the Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click the NVDIMM Devices tab. To reconfigure a device, select it from the list. If a device is not listed, it is not in sector mode. Click Reconfigure NVDIMM . A reconfiguration dialog opens. Enter the sector size that you require and click Start Reconfiguration . The supported sector sizes are 512 and 4096 bytes. When reconfiguration completes, click OK . Select the device check box. Click Done to return to the Installation Destination window. The NVDIMM device that you reconfigured is displayed in the Specialized & Network Disks section. Click Done to return to the Installation Summary window. The NVDIMM device is now available for you to select as an installation target. Additionally, if the device meets the requirements for booting, you can set the device as a boot device. 17.3. Configuring the root user and creating local accounts 17.3.1. Configuring a root password You must configure a root password to finish the installation process and to log in to the administrator (also known as superuser or root) account that is used for system administration tasks. These tasks include installing and updating software packages and changing system-wide configuration such as network and firewall settings, storage options, and adding or modifying users, groups and file permissions. To gain root privileges on the installed system, you can either use a root account or create a user account with administrative privileges (member of the wheel group). The root account is always created during the installation. Switch to the administrator account only when you need to perform a task that requires administrator access. Warning The root account has complete control over the system. If unauthorized personnel gain access to the account, they can access or delete users' personal files. Procedure From the Installation Summary window, select User Settings > Root Password . The Root Password window opens. Type your password in the Root Password field. The requirements for creating a strong root password are: Must be at least eight characters long May contain numbers, letters (upper and lower case) and symbols Is case-sensitive Type the same password in the Confirm field. Click Done to confirm your root password and return to the Installation Summary window. If you proceed with a weak password, you must click Done twice. 17.3.2. Creating a user account Create a user account to finish the installation. 
If you do not create a user account, you must log in to the system as root directly, which is not recommended. Procedure On the Installation Summary window, select User Settings > User Creation . The Create User window opens. Type the user account name in to the Full name field, for example: John Smith. Type the username in to the User name field, for example: jsmith. The User name is used to log in from a command line; if you install a graphical environment, then your graphical login manager uses the Full name . Select the Make this user administrator check box if the user requires administrative rights (the installation program adds the user to the wheel group ). An administrator user can use the sudo command to perform tasks that are only available to root using the user password, instead of the root password. This may be more convenient, but it can also cause a security risk. Select the Require a password to use this account check box. If you give administrator privileges to a user, ensure the account is password protected. Never give a user administrator privileges without assigning a password to the account. Type a password into the Password field. Type the same password into the Confirm password field. Click Done to apply the changes and return to the Installation Summary window. 17.3.3. Editing advanced user settings This procedure describes how to edit the default settings for the user account in the Advanced User Configuration dialog box. Procedure On the Create User window, click Advanced . Edit the details in the Home directory field, if required. The field is populated by default with /home/ username . In the User and Groups IDs section you can: Select the Specify a user ID manually check box and use + or - to enter the required value. The default value is 1000. User IDs (UIDs) 0-999 are reserved by the system so they cannot be assigned to a user. Select the Specify a group ID manually check box and use + or - to enter the required value. The default group name is the same as the user name, and the default Group ID (GID) is 1000. GIDs 0-999 are reserved by the system so they can not be assigned to a user group. Specify additional groups as a comma-separated list in the Group Membership field. Groups that do not already exist are created; you can specify custom GIDs for additional groups in parentheses. If you do not specify a custom GID for a new group, the new group receives a GID automatically. The user account created always has one default group membership (the user's default group with an ID set in the Specify a group ID manually field). Click Save Changes to apply the updates and return to the Create User window. 17.4. Configuring manual partitioning You can use manual partitioning to configure your disk partitions and mount points and define the file system that Red Hat Enterprise Linux is installed on. Before installation, you should consider whether you want to use partitioned or unpartitioned disk devices. For more information about the advantages and disadvantages to using partitioning on LUNs, either directly or with LVM, see the Red Hat Knowledgebase solution advantages and disadvantages to using partitioning on LUNs . You have different partitioning and storage options available, including Standard Partitions , LVM , and LVM thin provisioning . These options provide various benefits and configurations for managing your system's storage effectively. Standard partition A standard partition contains a file system or swap space. 
Standard partitions are most commonly used for /boot and the BIOS Boot and EFI System partitions . You can use the LVM logical volumes in most other uses. LVM Choosing LVM (or Logical Volume Management) as the device type creates an LVM logical volume. LVM improves performance when using physical disks, and it allows for advanced setups such as using multiple physical disks for one mount point, and setting up software RAID for increased performance, reliability, or both. LVM thin provisioning Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. You can dynamically expand the pool when needed for cost-effective allocation of storage space. An installation of Red Hat Enterprise Linux requires a minimum of one partition but uses at least the following partitions or volumes: / , /home , /boot , and swap . You can also create additional partitions and volumes as you require. To prevent data loss it is recommended that you back up your data before proceeding. If you are upgrading or creating a dual-boot system, you should back up any data you want to keep on your storage devices. 17.4.1. Recommended partitioning scheme Create separate file systems at the following mount points. However, if required, you can also create the file systems at /usr , /var , and /tmp mount points. /boot / (root) /home swap /boot/efi PReP This partition scheme is recommended for bare metal deployments and it does not apply to virtual and cloud deployments. /boot partition - recommended size at least 1 GiB The partition mounted on /boot contains the operating system kernel, which allows your system to boot Red Hat Enterprise Linux 8, along with files used during the bootstrap process. Due to the limitations of most firmwares, create a small partition to hold these. In most scenarios, a 1 GiB boot partition is adequate. Unlike other mount points, using an LVM volume for /boot is not possible - /boot must be located on a separate disk partition. If you have a RAID card, be aware that some BIOS types do not support booting from the RAID card. In such a case, the /boot partition must be created on a partition outside of the RAID array, such as on a separate disk. Warning Normally, the /boot partition is created automatically by the installation program. However, if the / (root) partition is larger than 2 TiB and (U)EFI is used for booting, you need to create a separate /boot partition that is smaller than 2 TiB to boot the machine successfully. Ensure the /boot partition is located within the first 2 TB of the disk while manual partitioning. Placing the /boot partition beyond the 2 TB boundary might result in a successful installation, but the system fails to boot because BIOS cannot read the /boot partition beyond this limit. root - recommended size of 10 GiB This is where " / ", or the root directory, is located. The root directory is the top-level of the directory structure. By default, all files are written to this file system unless a different file system is mounted in the path being written to, for example, /boot or /home . While a 5 GiB root file system allows you to install a minimal installation, it is recommended to allocate at least 10 GiB so that you can install as many package groups as you want. Do not confuse the / directory with the /root directory. The /root directory is the home directory of the root user. 
The /root directory is sometimes referred to as slash root to distinguish it from the root directory. /home - recommended size at least 1 GiB To store user data separately from system data, create a dedicated file system for the /home directory. Base the file system size on the amount of data that is stored locally, number of users, and so on. You can upgrade or reinstall Red Hat Enterprise Linux 8 without erasing user data files. If you select automatic partitioning, it is recommended to have at least 55 GiB of disk space available for the installation, to ensure that the /home file system is created. swap partition - recommended size at least 1 GiB Swap file systems support virtual memory; data is written to a swap file system when there is not enough RAM to store the data your system is processing. Swap size is a function of system memory workload, not total system memory and therefore is not equal to the total system memory size. It is important to analyze what applications a system will be running and the load those applications will serve in order to determine the system memory workload. Application providers and developers can provide guidance. When the system runs out of swap space, the kernel terminates processes as the system RAM memory is exhausted. Configuring too much swap space results in storage devices being allocated but idle and is a poor use of resources. Too much swap space can also hide memory leaks. The maximum size for a swap partition and other additional information can be found in the mkswap(8) manual page. The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and if you want sufficient memory for your system to hibernate. If you let the installation program partition your system automatically, the swap partition size is established using these guidelines. Automatic partitioning setup assumes hibernation is not in use. The maximum size of the swap partition is limited to 10 percent of the total size of the disk, and the installation program cannot create swap partitions more than 1TiB. To set up enough swap space to allow for hibernation, or if you want to set the swap partition size to more than 10 percent of the system's storage space, or more than 1TiB, you must edit the partitioning layout manually. Table 17.1. Recommended system swap space Amount of RAM in the system Recommended swap space Recommended swap space if allowing for hibernation Less than 2 GiB 2 times the amount of RAM 3 times the amount of RAM 2 GiB - 8 GiB Equal to the amount of RAM 2 times the amount of RAM 8 GiB - 64 GiB 4 GiB to 0.5 times the amount of RAM 1.5 times the amount of RAM More than 64 GiB Workload dependent (at least 4GiB) Hibernation not recommended /boot/efi partition - recommended size of 200 MiB UEFI-based AMD64, Intel 64, and 64-bit ARM require a 200 MiB EFI system partition. The recommended minimum size is 200 MiB, the default size is 600 MiB, and the maximum size is 600 MiB. BIOS systems do not require an EFI system partition. At the border between each range, for example, a system with 2 GiB, 8 GiB, or 64 GiB of system RAM, discretion can be exercised with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space can lead to better performance. Distributing swap space over multiple storage devices - particularly on systems with fast drives, controllers and interfaces - also improves swap space performance. 
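As a worked example of the sizing guidance above, the following hedged shell sketch maps the amount of installed RAM to the recommended swap size from Table 17.1, assuming hibernation is not used. It is illustrative only; the installation program applies these guidelines automatically during automatic partitioning.

```bash
#!/bin/bash
# Print the recommended swap size (no hibernation) based on Table 17.1.
ram_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ram_gib=$(( (ram_kib + 1048575) / 1048576 ))   # round up to whole GiB

if   [ "$ram_gib" -lt 2 ];  then echo "Recommended swap: $((ram_gib * 2)) GiB (2 times RAM)"
elif [ "$ram_gib" -le 8 ];  then echo "Recommended swap: ${ram_gib} GiB (equal to RAM)"
elif [ "$ram_gib" -le 64 ]; then echo "Recommended swap: 4 GiB to $((ram_gib / 2)) GiB"
else                             echo "Recommended swap: workload dependent (at least 4 GiB)"
fi
```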
Many systems have more partitions and volumes than the minimum required. Choose partitions based on your particular system needs. If you are unsure about configuring partitions, accept the automatic default partition layout provided by the installation program. Note Only assign storage capacity to those partitions you require immediately. You can allocate free space at any time, to meet needs as they occur. PReP boot partition - recommended size of 4 to 8 MiB When installing Red Hat Enterprise Linux on IBM Power System servers, the first partition of the disk should include a PReP boot partition. This contains the GRUB boot loader, which allows other IBM Power Systems servers to boot Red Hat Enterprise Linux. 17.4.2. Supported hardware storage It is important to understand how storage technologies are configured and how support for them may have changed between major versions of Red Hat Enterprise Linux. Hardware RAID Any RAID functions provided by the mainboard of your computer, or attached controller cards, need to be configured before you begin the installation process. Each active RAID array appears as one drive within Red Hat Enterprise Linux. Software RAID On systems with more than one disk, you can use the Red Hat Enterprise Linux installation program to operate several of the drives as a Linux software RAID array. With a software RAID array, RAID functions are controlled by the operating system rather than the dedicated hardware. Note When a pre-existing RAID array's member devices are all unpartitioned disks/drives, the installation program treats the array as a disk and there is no method to remove the array. USB Disks You can connect and configure external USB storage after installation. Most devices are recognized by the kernel, but some devices may not be recognized. If it is not a requirement to configure these disks during installation, disconnect them to avoid potential problems. NVDIMM devices To use a Non-Volatile Dual In-line Memory Module (NVDIMM) device as storage, the following conditions must be satisfied: Version of Red Hat Enterprise Linux is 7.6 or later. The architecture of the system is Intel 64 or AMD64. The device is configured to sector mode. Anaconda can reconfigure NVDIMM devices to this mode. The device must be supported by the nd_pmem driver. Booting from an NVDIMM device is possible under the following additional conditions: The system uses UEFI. The device must be supported by firmware available on the system, or by a UEFI driver. The UEFI driver may be loaded from an option ROM of the device itself. The device must be made available under a namespace. To take advantage of the high performance of NVDIMM devices during booting, place the /boot and /boot/efi directories on the device. Note The Execute-in-place (XIP) feature of NVDIMM devices is not supported during booting and the kernel is loaded into conventional memory. Considerations for Intel BIOS RAID Sets Red Hat Enterprise Linux uses mdraid for installing on Intel BIOS RAID sets. These sets are automatically detected during the boot process and their device node paths can change across several booting processes. Replace device node paths (such as /dev/sda ) with file system labels or device UUIDs. You can find the file system labels and device UUIDs using the blkid command. 17.4.3. Starting manual partitioning You can partition the disks based on your requirements by using manual partitioning. Prerequisites The Installation Summary screen is open. 
All disks are available to the installation program. Procedure Select disks for installation: Click Installation Destination to open the Installation Destination window. Select the disks that you require for installation by clicking the corresponding icon. A selected disk has a check-mark displayed on it. Under Storage Configuration , select the Custom radio-button. Optional: To enable storage encryption with LUKS, select the Encrypt my data check box. Click Done . If you selected to encrypt the storage, a dialog box for entering a disk encryption passphrase opens. Type in the LUKS passphrase: Enter the passphrase in the two text fields. To switch keyboard layout, use the keyboard icon. Warning In the dialog box for entering the passphrase, you cannot change the keyboard layout. Select the English keyboard layout to enter the passphrase in the installation program. Click Save Passphrase . The Manual Partitioning window opens. Detected mount points are listed in the left-hand pane. The mount points are organized by detected operating system installations. As a result, some file systems may be displayed multiple times if a partition is shared among several installations. Select the mount points in the left pane; the options that can be customized are displayed in the right pane. Optional: If your system contains existing file systems, ensure that enough space is available for the installation. To remove any partitions, select them in the list and click the - button. The dialog has a check box that you can use to remove all other partitions used by the system to which the deleted partition belongs. Optional: If there are no existing partitions and you want to create a set of partitions as a starting point, select your preferred partitioning scheme from the left pane (default for Red Hat Enterprise Linux is LVM) and click the Click here to create them automatically link. Note A /boot partition, a / (root) volume, and a swap volume proportional to the size of the available storage are created and listed in the left pane. These are the file systems for a typical installation, but you can add additional file systems and mount points. Click Done to confirm any changes and return to the Installation Summary window. 17.4.4. Supported file systems When configuring manual partitioning, you can optimize performance, ensure compatibility, and effectively manage disk space by utilizing the various file systems and partition types available in Red Hat Enterprise Linux. xfs XFS is a highly scalable, high-performance file system that supports file systems up to 16 exabytes (approximately 16 million terabytes), files up to 8 exabytes (approximately 8 million terabytes), and directory structures containing tens of millions of entries. XFS also supports metadata journaling, which facilitates quicker crash recovery. The maximum supported size of a single XFS file system is 500 TB. XFS is the default file system on Red Hat Enterprise Linux. The XFS filesystem cannot be shrunk to get free space. ext4 The ext4 file system is based on the ext3 file system and features a number of improvements. These include support for larger file systems and larger files, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling. The maximum supported size of a single ext4 file system is 50 TB. ext3 The ext3 file system is based on the ext2 file system and has one main advantage - journaling. 
Using a journaling file system reduces the time spent recovering a file system after it terminates unexpectedly, as there is no need to check the file system for metadata consistency by running the fsck utility every time. ext2 An ext2 file system supports standard Unix file types, including regular files, directories, or symbolic links. It provides the ability to assign long file names, up to 255 characters. swap Swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing. vfat The VFAT file system is a Linux file system that is compatible with Microsoft Windows long file names on the FAT file system. Note Support for the VFAT file system is not available for Linux system partitions. For example, / , /var , /usr and so on. BIOS Boot A very small partition required for booting from a device with a GUID partition table (GPT) on BIOS systems and UEFI systems in BIOS compatibility mode. EFI System Partition A small partition required for booting a device with a GUID partition table (GPT) on a UEFI system. PReP This small boot partition is located on the first partition of the disk. The PReP boot partition contains the GRUB2 boot loader, which allows other IBM Power Systems servers to boot Red Hat Enterprise Linux. 17.4.5. Adding a mount point file system You can add multiple mount point file systems. You can use any of the file systems and partition types available, such as XFS, ext4, ext3, ext2, swap, VFAT, and specific partitions like BIOS Boot, EFI System Partition, and PReP to effectively configure your system's storage. Prerequisites You have planned your partitions. Ensure you haven't specified mount points at paths with symbolic links, such as /var/mail , /usr/tmp , /lib , /sbin , /lib64 , and /bin . The payload, including RPM packages, depends on creating symbolic links to specific directories. Procedure Click + to create a new mount point file system. The Add a New Mount Point dialog opens. Select one of the preset paths from the Mount Point drop-down menu or type your own; for example, select / for the root partition or /boot for the boot partition. Enter the size of the file system in to the Desired Capacity field; for example, 2GiB . If you do not specify a value in Desired Capacity , or if you specify a size bigger than available space, then all remaining free space is used. Click Add mount point to create the partition and return to the Manual Partitioning window. 17.4.6. Configuring storage for a mount point file system You can set the partitioning scheme for each mount point that was created manually. The available options are Standard Partition , LVM , and LVM Thin Provisioning . Btfrs support has been removed in Red Hat Enterprise Linux 8. Note The /boot partition is always located on a standard partition, regardless of the value selected. Procedure To change the devices that a single non-LVM mount point should be located on, select the required mount point from the left-hand pane. Under the Device(s) heading, click Modify . The Configure Mount Point dialog opens. Select one or more devices and click Select to confirm your selection and return to the Manual Partitioning window. Click Update Settings to apply the changes. In the lower left-hand side of the Manual Partitioning window, click the storage device selected link to open the Selected Disks dialog and review disk information. 
Optional: Click the Rescan button (circular arrow button) to refresh all local disks and partitions; this is only required after performing advanced partition configuration outside the installation program. Clicking the Rescan Disks button resets all configuration changes made in the installation program. 17.4.7. Customizing a mount point file system You can customize a partition or volume if you want to set specific settings. If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex as these directories contain critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system is unable to boot, or hangs with a Device is busy error when powering off or rebooting. This limitation only applies to /usr or /var , not to directories below them. For example, a separate partition for /var/www works successfully. Procedure From the left pane, select the mount point. Figure 17.1. Customizing Partitions From the right-hand pane, you can customize the following options: Enter the file system mount point into the Mount Point field. For example, if a file system is the root file system, enter / ; enter /boot for the /boot file system, and so on. For a swap file system, do not set the mount point as setting the file system type to swap is sufficient. Enter the size of the file system in the Desired Capacity field. You can use common size units such as KiB or GiB. The default is MiB if you do not set any other unit. Select the device type that you require from the drop-down Device Type menu: Standard Partition , LVM , or LVM Thin Provisioning . Note RAID is available only if two or more disks are selected for partitioning. If you choose RAID , you can also set the RAID Level . Similarly, if you select LVM , you can specify the Volume Group . Select the Encrypt check box to encrypt the partition or volume. You must set a password later in the installation program. The LUKS Version drop-down menu is displayed. Select the LUKS version that you require from the drop-down menu. Select the appropriate file system type for this partition or volume from the File system drop-down menu. Note Support for the VFAT file system is not available for Linux system partitions. For example, / , /var , /usr , and so on. Select the Reformat check box to format an existing partition, or clear the Reformat check box to retain your data. The newly-created partitions and volumes must be reformatted, and the check box cannot be cleared. Type a label for the partition in the Label field. Use labels to easily recognize and address individual partitions. Type a name in the Name field. The standard partitions are named automatically when they are created and you cannot edit the names of standard partitions. For example, you cannot edit the /boot name sda1 . Click Update Settings to apply your changes and if required, select another partition to customize. Changes are not applied until you click Begin Installation from the Installation Summary window. Optional: Click Reset All to discard your partition changes. Click Done when you have created and customized all file systems and mount points. If you choose to encrypt a file system, you are prompted to create a passphrase. A Summary of Changes dialog box opens, displaying a summary of all storage actions for the installation program. Click Accept Changes to apply the changes and return to the Installation Summary window. 17.4.8. 
Preserving the /home directory In a Red Hat Enterprise Linux 8 graphical installation, you can preserve the /home directory that was used on your RHEL 7 system. Preserving /home is only possible if the /home directory is located on a separate /home partition on your RHEL 7 system. Preserving the /home directory that includes various configuration settings, makes it possible that the GNOME Shell environment on the new Red Hat Enterprise Linux 8 system is set in the same way as it was on your RHEL 7 system. Note that this applies only for users on Red Hat Enterprise Linux 8 with the same user name and ID as on the RHEL 7 system. Prerequisites You have RHEL 7 installed on your computer. The /home directory is located on a separate /home partition on your RHEL 7 system. The Red Hat Enterprise Linux 8 Installation Summary window is open. Procedure Click Installation Destination to open the Installation Destination window. Under Storage Configuration , select the Custom radio button. Click Done . Click Done , the Manual Partitioning window opens. Choose the /home partition, fill in /home under Mount Point: and clear the Reformat check box. Figure 17.2. Ensuring that /home is not formatted Optional: You can also customize various aspects of the /home partition required for your Red Hat Enterprise Linux 8 system as described in Customizing a mount point file system . However, to preserve /home from your RHEL 7 system, it is necessary to clear the Reformat check box. After you customized all partitions according to your requirements, click Done . The Summary of changes dialog box opens. Verify that the Summary of changes dialog box does not show any change for /home . This means that the /home partition is preserved. Click Accept Changes to apply the changes, and return to the Installation Summary window. 17.4.9. Creating a software RAID during the installation Redundant Arrays of Independent Disks (RAID) devices are constructed from multiple storage devices that are arranged to provide increased performance and, in some configurations, greater fault tolerance. A RAID device is created in one step and disks are added or removed as necessary. You can configure one RAID partition for each physical disk in your system, so that the number of disks available to the installation program determines the levels of RAID device available. For example, if your system has two disks, you cannot create a RAID 10 device, as it requires a minimum of three separate disks. To optimize your system's storage performance and reliability, RHEL supports software RAID 0 , RAID 1 , RAID 4 , RAID 5 , RAID 6 , and RAID 10 types with LVM and LVM Thin Provisioning to set up storage on the installed system. Note On 64-bit IBM Z, the storage subsystem uses RAID transparently. You do not have to configure software RAID manually. Prerequisites You have selected two or more disks for installation before RAID configuration options are visible. Depending on the RAID type you want to create, at least two disks are required. You have created a mount point. By configuring a mount point, you can configure the RAID device. You have selected the Custom radio button on the Installation Destination window. Procedure From the left pane of the Manual Partitioning window, select the required partition. Under the Device(s) section, click Modify . The Configure Mount Point dialog box opens. Select the disks that you want to include in the RAID device and click Select . Click the Device Type drop-down menu and select RAID . 
Click the File System drop-down menu and select your preferred file system type. Click the RAID Level drop-down menu and select your preferred level of RAID. Click Update Settings to save your changes. Click Done to apply the settings to return to the Installation Summary window. Additional resources Creating a RAID LV with DM integrity Managing RAID 17.4.10. Creating an LVM logical volume Logical Volume Manager (LVM) presents a simple logical view of underlying physical storage space, such as disks or LUNs. Partitions on physical storage are represented as physical volumes that you can group together into volume groups. You can divide each volume group into multiple logical volumes, each of which is analogous to a standard disk partition. Therefore, LVM logical volumes function as partitions that can span multiple physical disks. Important LVM configuration is available only in the graphical installation program. During text-mode installation, LVM configuration is not available. To create an LVM configuration, press Ctrl + Alt + F2 to use a shell prompt in a different virtual console. You can run vgcreate and lvm commands in this shell. To return to the text-mode installation, press Ctrl + Alt + F1 . Procedure From the Manual Partitioning window, create a new mount point by using any of the following options: Use the Click here to create them automatically option or click the + button. Select Mount Point from the drop-down list or enter manually. Enter the size of the file system in to the Desired Capacity field; for example, 70 GiB for / , 1 GiB for /boot . Note: Skip this step to use the existing mount point. Select the mount point. Select LVM in the drop-down menu. The Volume Group drop-down menu is displayed with the newly-created volume group name. Note You cannot specify the size of the volume group's physical extents in the configuration dialog. The size is always set to the default value of 4 MiB. If you want to create a volume group with different physical extents, you must create it manually by switching to an interactive shell and using the vgcreate command, or use a Kickstart file with the volgroup --pesize= size command. For more information about Kickstart, see the Automatically installing RHEL . Click Done to return to the Installation Summary window. Additional resources Configuring and managing logical volumes 17.4.11. Configuring an LVM logical volume You can configure a newly-created LVM logical volume based on your requirements. Warning Placing the /boot partition on an LVM volume is not supported. Procedure From the Manual Partitioning window, create a mount point by using any of the following options: Use the Click here to create them automatically option or click the + button. Select Mount Point from the drop-down list or enter manually. Enter the size of the file system in to the Desired Capacity field; for example, 70 GiB for / , 1 GiB for /boot . Note: Skip this step to use the existing mount point. Select the mount point. Click the Device Type drop-down menu and select LVM . The Volume Group drop-down menu is displayed with the newly-created volume group name. Click Modify to configure the newly-created volume group. The Configure Volume Group dialog box opens. Note You cannot specify the size of the volume group's physical extents in the configuration dialog. The size is always set to the default value of 4 MiB. 
If you want to create a volume group with different physical extents, you must create it manually by switching to an interactive shell and using the vgcreate command, or use a Kickstart file with the volgroup --pesize= size command. For more information, see the Automatically installing RHEL document. Optional: From the RAID Level drop-down menu, select the RAID level that you require. The available RAID levels are the same as with actual RAID devices. Select the Encrypt check box to mark the volume group for encryption. From the Size policy drop-down menu, select any of the following size policies for the volume group: The available policy options are: Automatic The size of the volume group is set automatically so that it is large enough to contain the configured logical volumes. This is optimal if you do not need free space within the volume group. As large as possible The volume group is created with maximum size, regardless of the size of the configured logical volumes it contains. This is optimal if you plan to keep most of your data on LVM and later need to increase the size of some existing logical volumes, or if you need to create additional logical volumes within this group. Fixed You can set an exact size of the volume group. Any configured logical volumes must then fit within this fixed size. This is useful if you know exactly how large you need the volume group to be. Click Save to apply the settings and return to the Manual Partitioning window. Click Update Settings to save your changes. Click Done to return to the Installation Summary window. 17.4.12. Advice on partitions There is no best way to partition every system; the optimal setup depends on how you plan to use the system being installed. However, the following tips may help you find the optimal layout for your needs: Create partitions that have specific requirements first, for example, if a particular partition must be on a specific disk. Consider encrypting any partitions and volumes which might contain sensitive data. Encryption prevents unauthorized people from accessing the data on the partitions, even if they have access to the physical storage device. In most cases, you should at least encrypt the /home partition, which contains user data. In some cases, creating separate mount points for directories other than / , /boot and /home may be useful; for example, on a server running a MySQL database, having a separate mount point for /var/lib/mysql allows you to preserve the database during a re-installation without having to restore it from backup afterward. However, having unnecessary separate mount points will make storage administration more difficult. Some special restrictions apply to certain directories with regards to which partitioning layouts can be placed. Notably, the /boot directory must always be on a physical partition (not on an LVM volume). If you are new to Linux, consider reviewing the Linux Filesystem Hierarchy Standard for information about various system directories and their contents. Each kernel requires approximately: 60MiB (initrd 34MiB, 11MiB vmlinuz, and 5MiB System.map) For rescue mode: 100MiB (initrd 76MiB, 11MiB vmlinuz, and 5MiB System map) When kdump is enabled in system it will take approximately another 40MiB (another initrd with 33MiB) The default partition size of 1 GiB for /boot should suffice for most common use cases. However, increase the size of this partition if you are planning on retaining multiple kernel releases or errata kernels. 
The /var directory holds content for a number of applications, including the Apache web server, and is used by the YUM package manager to temporarily store downloaded package updates. Make sure that the partition or volume containing /var has at least 5 GiB. The /usr directory holds the majority of software on a typical Red Hat Enterprise Linux installation. The partition or volume containing this directory should therefore be at least 5 GiB for minimal installations, and at least 10 GiB for installations with a graphical environment. If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex because these directories contain boot-critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system may either be unable to boot, or it may hang with a Device is busy error when powering off or rebooting. This limitation only applies to /usr or /var , not to directories under them. For example, a separate partition for /var/www works without issues. Important Some security policies require the separation of /usr and /var , even though it makes administration more complex. Consider leaving a portion of the space in an LVM volume group unallocated. This unallocated space gives you flexibility if your space requirements change but you do not wish to remove data from other volumes. You can also select the LVM Thin Provisioning device type for the partition to have the unused space handled automatically by the volume. The size of an XFS file system cannot be reduced - if you need to make a partition or volume with this file system smaller, you must back up your data, destroy the file system, and create a new, smaller one in its place. Therefore, if you plan to alter your partitioning layout later, you should use the ext4 file system instead. Use Logical Volume Manager (LVM) if you anticipate expanding your storage by adding more disks or expanding virtual machine disks after the installation. With LVM, you can create physical volumes on the new drives, and then assign them to any volume group and logical volume as you see fit - for example, you can easily expand your system's /home (or any other directory residing on a logical volume). Creating a BIOS Boot partition or an EFI System Partition may be necessary, depending on your system's firmware, boot drive size, and boot drive disk label. Note that you cannot create a BIOS Boot or EFI System Partition in graphical installation if your system does not require one - in that case, they are hidden from the menu. If you need to make any changes to your storage configuration after the installation, Red Hat Enterprise Linux repositories offer several different tools which can help you do this. If you prefer a command-line tool, try system-storage-manager . Additional resources How to use dm-crypt on IBM Z, LinuxONE and with the PAES cipher 17.5. Selecting the base environment and additional software Use the Software Selection window to select the software packages that you require. The packages are organized by Base Environment and Additional Software. Base Environment contains predefined packages. You can select only one base environment, for example, Server with GUI (default), Server, Minimal Install, Workstation, Custom operating system, Virtualization Host. The availability is dependent on the installation ISO image that is used as the installation source. 
Additional Software for Selected Environment contains additional software packages for the base environment. You can select multiple software packages. Use a predefined environment and additional software to customize your system. However, in a standard installation, you cannot select individual packages to install. To view the packages contained in a specific environment, see the repository /repodata/*-comps- repository . architecture .xml file on your installation source media (DVD, CD, USB). The XML file contains details of the packages installed as part of a base environment. Available environments are marked by the <environment> tag, and additional software packages are marked by the <group> tag. If you are unsure about which packages to install, select the Minimal Install base environment. Minimal install installs a basic version of Red Hat Enterprise Linux with only a minimal amount of additional software. After the system finishes installing and you log in for the first time, you can use the YUM package manager to install additional software. For more information about YUM package manager, see the Configuring basic system settings document. Note Use the yum group list command from any RHEL 8 system to view the list of packages being installed on the system as a part of software selection. For more information, see Configuring basic system settings . If you need to control which packages are installed, you can use a Kickstart file and define the packages in the %packages section. Prerequisites You have configured the installation source. The installation program has downloaded package metadata. The Installation Summary window is open. Procedure From the Installation Summary window, click Software Selection . The Software Selection window opens. From the Base Environment pane, select a base environment. You can select only one base environment, for example, Server with GUI (default), Server, Minimal Install, Workstation, Custom Operating System, Virtualization Host. By default, the Server with GUI base environment is selected. Figure 17.3. Red Hat Enterprise Linux Software Selection From the Additional Software for Selected Environment pane, select one or more options. Click Done to apply the settings and return to graphical installations. 17.6. Optional: Configuring the network and host name Use the Network and Host name window to configure network interfaces. Options that you select here are available both during the installation for tasks such as downloading packages from a remote location, and on the installed system. Follow the steps in this procedure to configure your network and host name. Procedure From the Installation Summary window, click Network and Host Name . From the list in the left-hand pane, select an interface. The details are displayed in the right-hand pane. Toggle the ON/OFF switch to enable or disable the selected interface. You cannot add or remove interfaces manually. Click + to add a virtual network interface, which can be either: Team, Bond, Bridge, or VLAN. Click - to remove a virtual interface. Click Configure to change settings such as IP addresses, DNS servers, or routing configuration for an existing interface (both virtual and physical). Type a host name for your system in the Host Name field. The host name can either be a fully qualified domain name (FQDN) in the format hostname.domainname , or a short host name without the domain. 
Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this system, specify only the short host name. Host names can only contain alphanumeric characters and - or . . Host name should be equal to or less than 64 characters. Host names cannot start or end with - and . . To be compliant with DNS, each part of a FQDN should be equal to or less than 63 characters and the FQDN total length, including dots, should not exceed 255 characters. The value localhost means that no specific static host name for the target system is configured, and the actual host name of the installed system is configured during the processing of the network configuration, for example, by NetworkManager using DHCP or DNS. When using static IP and host name configuration, it depends on the planned system use case whether to use a short name or FQDN. Red Hat Identity Management configures FQDN during provisioning but some 3rd party software products may require a short name. In either case, to ensure availability of both forms in all situations, add an entry for the host in /etc/hosts in the format IP FQDN short-alias . Click Apply to apply the host name to the installer environment. Alternatively, in the Network and Hostname window, you can choose the Wireless option. Click Select network in the right-hand pane to select your wifi connection, enter the password if required, and click Done . Additional resources For more information about network device naming standards, see Configuring and managing networking . 17.6.1. Adding a virtual network interface You can add a virtual network interface. Procedure From the Network & Host name window, click the + button to add a virtual network interface. The Add a device dialog opens. Select one of the four available types of virtual interfaces: Bond : NIC ( Network Interface Controller ) Bonding, a method to bind multiple physical network interfaces together into a single bonded channel. Bridge : Represents NIC Bridging, a method to connect multiple separate networks into one aggregate network. Team : NIC Teaming, a new implementation to aggregate links, designed to provide a small kernel driver to implement the fast handling of packet flows, and various applications to do everything else in user space. Vlan ( Virtual LAN ): A method to create multiple distinct broadcast domains which are mutually isolated. Select the interface type and click Add . An editing interface dialog box opens, allowing you to edit any available settings for your chosen interface type. For more information, see Editing network interface . Click Save to confirm the virtual interface settings and return to the Network & Host name window. Optional: To change the settings of a virtual interface, select the interface and click Configure . 17.6.2. Editing network interface configuration You can edit the configuration of a typical wired connection used during installation. Configuration of other types of networks is broadly similar, although the specific configuration parameters might be different. Note On 64-bit IBM Z, you cannot add a new connection as the network subchannels need to be grouped and set online beforehand, and this is currently done only in the booting phase. Procedure To configure a network connection manually, select the interface from the Network and Host name window and click Configure . An editing dialog specific to the selected interface opens. 
The options present depend on the connection type - the available options are slightly different depending on whether the connection type is a physical interface (wired or wireless network interface controller) or a virtual interface (Bond, Bridge, Team, or Vlan) that was previously configured in Adding a virtual interface . 17.6.3. Enabling or Disabling the Interface Connection You can enable or disable specific interface connections. Procedure Click the General tab. Select the Connect automatically with priority check box to enable connection by default. Keep the default priority setting at 0 . Optional: Enable or disable all users on the system from connecting to this network by using the All users may connect to this network option. If you disable this option, only root will be able to connect to this network. Important When enabled on a wired connection, the system automatically connects during startup or reboot. On a wireless connection, the interface attempts to connect to any known wireless networks in range. For further information about NetworkManager, including the nm-connection-editor tool, see the Configuring and managing networking document. Click Save to apply the changes and return to the Network and Host name window. It is not possible to only allow a specific user other than root to use this interface, as no other users are created at this point during the installation. If you need a connection for a different user, you must configure it after the installation. 17.6.4. Setting up Static IPv4 or IPv6 Settings By default, both IPv4 and IPv6 are set to automatic configuration depending on current network settings. This means that addresses such as the local IP address, DNS address, and other settings are detected automatically when the interface connects to a network. In many cases, this is sufficient, but you can also provide static configuration in the IPv4 Settings and IPv6 Settings tabs. Complete the following steps to configure IPv4 or IPv6 settings: Procedure To set static network configuration, navigate to one of the IPv Settings tabs and from the Method drop-down menu, select a method other than Automatic , for example, Manual . The Addresses pane is enabled. Optional: In the IPv6 Settings tab, you can also set the method to Ignore to disable IPv6 on this interface. Click Add and enter your address settings. Type the IP addresses in the Additional DNS servers field; it accepts one or more IP addresses of DNS servers, for example, 10.0.0.1,10.0.0.8 . Select the Require IPv X addressing for this connection to complete check box. Selecting this option in the IPv4 Settings or IPv6 Settings tabs allow this connection only if IPv4 or IPv6 was successful. If this option remains disabled for both IPv4 and IPv6, the interface is able to connect if configuration succeeds on either IP protocol. Click Save to apply the changes and return to the Network & Host name window. 17.6.5. Configuring Routes You can control the access of specific connections by configuring routes. Procedure In the IPv4 Settings and IPv6 Settings tabs, click Routes to configure routing settings for a specific IP protocol on an interface. An editing routes dialog specific to the interface opens. Click Add to add a route. Select the Ignore automatically obtained routes check box to configure at least one static route and to disable all routes not specifically configured. Select the Use this connection only for resources on its network check box to prevent the connection from becoming the default route. 
This option can be selected even if you did not configure any static routes. This route is used only to access certain resources, such as intranet pages that require a local or VPN connection. Another (default) route is used for publicly available resources. Unlike the additional routes configured, this setting is transferred to the installed system. This option is useful only when you configure more than one interface. Click OK to save your settings and return to the editing routes dialog that is specific to the interface. Click Save to apply the settings and return to the Network and Host Name window. 17.7. Optional: Configuring the keyboard layout You can configure the keyboard layout from the Installation Summary screen. Important If you use a layout that cannot accept Latin characters, such as Russian , add the English (United States) layout and configure a keyboard combination to switch between the two layouts. If you select a layout that does not have Latin characters, you might be unable to enter a valid root password and user credentials later in the installation process. This might prevent you from completing the installation. Procedure From the Installation Summary window, click Keyboard . Click + to open the Add a Keyboard Layout window to change to a different layout. Select a layout by browsing the list or use the Search field. Select the required layout and click Add . The new layout appears under the default layout. Click Options to optionally configure a keyboard switch that you can use to cycle between available layouts. The Layout Switching Options window opens. To configure key combinations for switching, select one or more key combinations and click OK to confirm your selection. Optional: When you select a layout, click the Keyboard button to open a new dialog box displaying a visual representation of the selected layout. Click Done to apply the settings and return to graphical installations. 17.8. Optional: Configuring the language support You can change the language settings from the Installation Summary screen. Procedure From the Installation Summary window, click Language Support . The Language Support window opens. The left pane lists the available language groups. If at least one language from a group is configured, a check mark is displayed and the supported language is highlighted. From the left pane, click a group to select additional languages, and from the right pane, select regional options. Repeat this process for all the languages that you want to configure. Optional: Search the language group by typing in the text box, if required. Click Done to apply the settings and return to graphical installations. 17.9. Optional: Configuring the date and time-related settings You can configure the date and time-related settings from the Installation Summary screen. Procedure From the Installation Summary window, click Time & Date . The Time & Date window opens. The list of cities and regions come from the Time Zone Database ( tzdata ) public domain that is maintained by the Internet Assigned Numbers Authority (IANA). Red Hat can not add cities or regions to this database. You can find more information at the IANA official website . From the Region drop-down menu, select a region. Select Etc as your region to configure a time zone relative to Greenwich Mean Time (GMT) without setting your location to a specific region. From the City drop-down menu, select the city, or the city closest to your location in the same time zone. 
Toggle the Network Time switch to enable or disable network time synchronization using the Network Time Protocol (NTP). Enabling the Network Time switch keeps your system time correct as long as the system can access the internet. By default, one NTP pool is configured. Optional: Use the gear wheel button to the Network Time switch to add a new NTP, or disable or remove the default options. Click Done to apply the settings and return to graphical installations. Optional: Disable the network time synchronization to activate controls at the bottom of the page to set time and date manually. 17.10. Optional: Subscribing the system and activating Red Hat Insights Red Hat Insights is a Software-as-a-Service (SaaS) offering that provides continuous, in-depth analysis of registered Red Hat-based systems to proactively identify threats to security, performance and stability across physical, virtual and cloud environments, and container deployments. By registering your RHEL system in Red Hat Insights, you gain access to predictive analytics, security alerts, and performance optimization tools, enabling you to maintain a secure, efficient, and stable IT environment. You can register to Red Hat by using either your Red Hat account or your activation key details. You can connect your system to Red hat Insights by using the Connect to Red Hat option. Procedure From the Installation Summary screen, under Software , click Connect to Red Hat . Select Account or Activation Key . If you select Account , enter your Red Hat Customer Portal username and password details. If you select Activation Key , enter your organization ID and activation key. You can enter more than one activation key, separated by a comma, as long as the activation keys are registered to your subscription. Select the Set System Purpose check box. If the account has Simple content access mode enabled, setting the system purpose values is still important for accurate reporting of consumption in the subscription services. If your account is in the entitlement mode, system purpose enables the entitlement server to determine and automatically attach the most appropriate subscription to satisfy the intended use of the Red Hat Enterprise Linux 8 system. Select the required Role , SLA , and Usage from the corresponding drop-down lists. The Connect to Red Hat Insights check box is enabled by default. Clear the check box if you do not want to connect to Red Hat Insights. Optional: Expand Options . Select the Use HTTP proxy check box if your network environment only allows external Internet access or access to content servers through an HTTP proxy. Clear the Use HTTP proxy check box if an HTTP proxy is not used. If you are running Satellite Server or performing internal testing, select the Custom Server URL and Custom base URL check boxes and enter the required details. Important The Custom Server URL field does not require the HTTP protocol, for example nameofhost.com . However, the Custom base URL field requires the HTTP protocol. To change the Custom base URL after registration, you must unregister, provide the new details, and then re-register. Click Register to register the system. When the system is successfully registered and subscriptions are attached, the Connect to Red Hat window displays the attached subscription details. Depending on the amount of subscriptions, the registration and attachment process might take up to a minute to complete. Click Done to return to the Installation Summary window. 
A Registered message is displayed under Connect to Red Hat . Additional resources About Red Hat Insights 17.11. Optional: Using network-based repositories for the installation You can configure an installation source from either auto-detected installation media, Red Hat CDN, or the network. When the Installation Summary window first opens, the installation program attempts to configure an installation source based on the type of media that was used to boot the system. The full Red Hat Enterprise Linux Server DVD configures the source as local media. Prerequisites You have downloaded the full installation DVD ISO or minimal installation Boot ISO image from the Product Downloads page. You have created bootable installation media. The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Source . The Installation Source window opens. Review the Auto-detected installation media section to verify the details. This option is selected by default if you started the installation program from media containing an installation source, for example, a DVD. Click Verify to check the media integrity. Review the Additional repositories section and note that the AppStream check box is selected by default. The BaseOS and AppStream repositories are installed as part of the full installation image. Do not disable the AppStream repository check box if you want a full Red Hat Enterprise Linux 8 installation. Optional: Select the Red Hat CDN option to register your system, attach RHEL subscriptions, and install RHEL from the Red Hat Content Delivery Network (CDN). Optional: Select the On the network option to download and install packages from a network location instead of local media. This option is available only when a network connection is active. See Configuring network and host name options for information about how to configure network connections in the GUI. Note If you do not want to download and install additional repositories from a network location, proceed to Configuring software selection . Select the On the network drop-down menu to specify the protocol for downloading packages. This setting depends on the server that you want to use. Type the server address (without the protocol) into the address field. If you choose NFS, a second input field opens where you can specify custom NFS mount options . This field accepts options listed in the nfs(5) man page on your system. When selecting an NFS installation source, specify the address with a colon ( : ) character separating the host name from the path. For example, server.example.com:/path/to/directory . The following steps are optional and are only required if you use a proxy for network access. Click Proxy setup to configure a proxy for an HTTP or HTTPS source. Select the Enable HTTP proxy check box and type the URL into the Proxy Host field. Select the Use Authentication check box if the proxy server requires authentication. Type in your user name and password. Click OK to finish the configuration and exit the Proxy Setup... dialog box. Note If your HTTP or HTTPS URL refers to a repository mirror, select the required option from the URL type drop-down list. All environments and additional software packages are available for selection when you finish configuring the sources. Click + to add a repository. Click - to delete a repository. Click the arrow icon to revert the current entries to the setting when you opened the Installation Source window. 
To activate or deactivate a repository, click the check box in the Enabled column for each entry in the list. You can name and configure your additional repository in the same way as the primary repository on the network. Click Done to apply the settings and return to the Installation Summary window. 17.12. Optional: Configuring Kdump kernel crash-dumping mechanism Kdump is a kernel crash-dumping mechanism. In the event of a system crash, Kdump captures the contents of the system memory at the moment of failure. This captured memory can be analyzed to find the cause of the crash. If Kdump is enabled, it must have a small portion of the system's memory (RAM) reserved to itself. This reserved memory is not accessible to the main kernel. Procedure From the Installation Summary window, click Kdump . The Kdump window opens. Select the Enable kdump check box. Select either the Automatic or Manual memory reservation setting. If you select Manual , enter the amount of memory (in megabytes) that you want to reserve in the Memory to be reserved field using the + and - buttons. The Usable System Memory readout below the reservation input field shows how much memory is accessible to your main system after reserving the amount of RAM that you select. Click Done to apply the settings and return to graphical installations. The amount of memory that you reserve is determined by your system architecture (AMD64 and Intel 64 have different requirements than IBM Power) as well as the total amount of system memory. In most cases, automatic reservation is satisfactory. Additional settings, such as the location where kernel crash dumps will be saved, can only be configured after the installation using either the system-config-kdump graphical interface, or manually in the /etc/kdump.conf configuration file. 17.13. Optional: Selecting a security profile You can apply security policy during your Red Hat Enterprise Linux 8 installation and configure it to use on your system before the first boot. 17.13.1. About security policy The Red Hat Enterprise Linux includes OpenSCAP suite to enable automated configuration of the system in alignment with a particular security policy. The policy is implemented using the Security Content Automation Protocol (SCAP) standard. The packages are available in the AppStream repository. However, by default, the installation and post-installation process does not enforce any policies and therefore does not involve any checks unless specifically configured. Applying a security policy is not a mandatory feature of the installation program. If you apply a security policy to the system, it is installed using restrictions defined in the profile that you selected. The openscap-scanner and scap-security-guide packages are added to your package selection, providing a preinstalled tool for compliance and vulnerability scanning. When you select a security policy, the Anaconda GUI installer requires the configuration to adhere to the policy's requirements. There might be conflicting package selections, as well as separate partitions defined. Only after all the requirements are met, you can start the installation. At the end of the installation process, the selected OpenSCAP security policy automatically hardens the system and scans it to verify compliance, saving the scan results to the /root/openscap_data directory on the installed system. By default, the installer uses the content of the scap-security-guide package bundled in the installation image. 
You can also load external content from an HTTP, HTTPS, or FTP server. 17.13.2. Configuring a security profile You can configure a security policy from the Installation Summary window. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Security Profile . The Security Profile window opens. To enable security policies on the system, toggle the Apply security policy switch to ON . Select one of the profiles listed in the top pane. Click Select profile . Profile changes that you must apply before installation appear in the bottom pane. Click Change content to use a custom profile. A separate window opens allowing you to enter a URL for valid security content. Click Fetch to retrieve the URL. You can load custom profiles from an HTTP , HTTPS , or FTP server. Use the full address of the content including the protocol, such as http:// . A network connection must be active before you can load a custom profile. The installation program detects the content type automatically. Click Use SCAP Security Guide to return to the Security Profile window. Click Done to apply the settings and return to the Installation Summary window. 17.13.3. Profiles not compatible with Server with GUI Certain security profiles provided as part of the SCAP Security Guide are not compatible with the extended package set included in the Server with GUI base environment. Therefore, do not select Server with GUI when installing systems compliant with one of the following profiles: Table 17.2. Profiles not compatible with Server with GUI Profile name Profile ID Justification Notes CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui The nfs-utils package is part of the Server with GUI package set, but the policy requires its removal. Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp The nfs-utils package is part of the Server with GUI package set, but the policy requires its removal. DISA STIG for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. To install a RHEL system as a Server with GUI aligned with DISA STIG in RHEL version 8.4 and later, you can use the DISA STIG with GUI profile. 17.13.4. Deploying baseline-compliant RHEL systems using Kickstart You can deploy RHEL systems that are aligned with a specific baseline. This example uses Protection Profile for General Purpose Operating System (OSPP). Prerequisites The scap-security-guide package is installed on your RHEL 8 system. Procedure Open the /usr/share/scap-security-guide/kickstart/ssg-rhel8-ospp-ks.cfg Kickstart file in an editor of your choice. 
Update the partitioning scheme to fit your configuration requirements. For OSPP compliance, the separate partitions for /boot , /home , /var , /tmp , /var/log , /var/tmp , and /var/log/audit must be preserved, and you can only change the size of the partitions. Start a Kickstart installation as described in Performing an automated installation using Kickstart . Important Passwords in Kickstart files are not checked for OSPP requirements. Verification To check the current status of the system after installation is complete, reboot the system and start a new scan: Additional resources OSCAP Anaconda Add-on Kickstart commands and options reference: %addon org_fedora_oscap 17.13.5. Additional resources scap-security-guide(8) - The manual page for the scap-security-guide project contains information about SCAP security profiles, including examples on how to utilize the provided benchmarks using the OpenSCAP utility. Red Hat Enterprise Linux security compliance information is available in the Security hardening document. | [
"oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/customizing-the-system-in-the-installer_rhel-installer |
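The partitioning advice and compliance verification described in this chapter can also be checked from a shell after the installation finishes. The following sketch is illustrative only: the device names /dev/sdb and /dev/sdc, the volume group name vg_data, and the 16 MiB extent size are placeholders, and the compliance scan assumes that the scap-security-guide package is present, as it is when a security profile was applied during installation.

# Review the partition layout and mount points produced by the installer.
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
findmnt --real

# Prefer file system labels or UUIDs over device node paths such as /dev/sda,
# which can change across boots on Intel BIOS RAID sets.
blkid

# Example of creating a volume group with a non-default physical extent size;
# the graphical installer always uses the 4 MiB default.
pvcreate /dev/sdb /dev/sdc
vgcreate --physicalextentsize 16M vg_data /dev/sdb /dev/sdc
lvcreate --name lv_data --extents 100%FREE vg_data
mkfs.xfs /dev/vg_data/lv_data

# Repeat the OSPP compliance scan from the verification step above.
oscap xccdf eval --profile ospp \
    --report eval_postinstall_report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

Creating the volume group manually in this way is the same route the installer suggests when a non-default physical extent size is required, whether from an interactive shell during installation or on the installed system.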
Chapter 26. The web console | Chapter 26. The web console The following chapter contains the most notable changes to the web console between RHEL 8 and RHEL 9. 26.1. Changes to the RHEL web console Remote root login disabled on new installations of RHEL 9.2 and later For security reasons, on new installations of RHEL 9.2 and later, you cannot connect to the web console from a remote machine as the root user. To enable remote root login: As the root user, open the /etc/cockpit/disallowed-users file in a text editor. Remove the root user line from the file. Save your changes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_the-web-console_considerations-in-adopting-rhel-9
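The steps above can also be performed non-interactively. This is a minimal sketch rather than part of the official procedure: it assumes the default /etc/cockpit/disallowed-users layout on RHEL 9.2 or later, that you run it as root, and that you accept the security trade-off of allowing remote root logins to the web console.

# Keep a backup of the list of users that are blocked from web console login.
cp /etc/cockpit/disallowed-users /etc/cockpit/disallowed-users.bak

# Delete the line that contains only "root"; all other entries stay in place.
sed -i '/^root$/d' /etc/cockpit/disallowed-users

# Verify the result: no output from grep means root is no longer blocked.
grep -x root /etc/cockpit/disallowed-users || echo "root may now log in to the web console"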
Chapter 8. Designing Synchronization | Chapter 8. Designing Synchronization An important factor to consider while conducting the site survey for an existing site ( Section 2.3, "Performing a Site Survey" ) is to include the structure and data types of Active Directory directory services. Through Windows Sync, an existing Windows directory service can be synchronized and integrated with the Directory Server, including creating, modifying, and deleting Windows accounts on the Directory Server or, oppositely, the Directory Server accounts on Windows. This provides an efficient and effective way to maintain directory information integrity across directory services. 8.1. Windows Synchronization Overview The synchronization process is analogous to the replication process: it is enabled by a plug-in and configured and initiated through a synchronization agreement, and a record of directory changes is maintained and updates are sent according to that log. There are two parts to the complete Windows Synchronization process: User and Group Sync. As with multi-supplier replication, user and group entries are synchronized through a plug-in, which is enabled by default. The same changelog that is used for multi-supplier replication is also used to send updates from the Directory Server to the Windows synchronization peer server as an LDAP operation. The server also performs LDAP search operations against its Windows server to synchronize changes made to Windows entries to the corresponding Directory Server entry. Password Sync. This application captures password changes for Windows users and relays those changes back to the Directory Server over LDAPS. It must be installed on the Active Directory machine. Figure 8.1. The Sync Process 8.1.1. Synchronization Agreements Synchronization is configured and controlled by one or more synchronization agreements . These are similar in purpose to replication agreements and contain a similar set of information, including the host name and port number for the Windows server and the subtrees being synchronized. The Directory Server connects to its peer Windows server using LDAP or LDAP over TLS to both send and receive updates. A single Windows subtree is synchronized with a single Directory Server subtree, and vice versa. Unlike replication, which connects databases , synchronization is between suffixes , parts of the directory tree structure. Therefore, when designing the directory tree, consider the Windows subtrees that should be synchronized with the Directory Server, and design or add corresponding Directory Server subtrees. The synchronized Windows and Directory Server suffixes are both specified in the synchronization agreement. All entries within the respective subtrees are available for synchronization, including entries that are not immediate children of the specified suffix. Note Any descendant container entries need to be created separately on the Windows server by an administrator; Windows Sync does not create container entries. 8.1.2. Changelogs The Directory Server maintains a changelog , a database that records modifications that have occurred. The changelog is used by Windows Sync to coordinate and send changes made to the Windows synchronization peer server. Changes to entries in the Windows server are found by using Active Directory's Dirsync search feature. Because there is no changelog on the Active Directory side, the Dirsync search is issued, by default, periodically every five minutes. 
Using Dirsync ensures that only those entries that have changed since the previous search are retrieved. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_synchronization
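When deciding which Windows subtree should pair with which Directory Server subtree, it can help to see the pairing concretely. The following sketch is purely illustrative - the host names, bind DNs, suffixes, and the user bjensen are placeholders - and it simply spot-checks that the same person is visible under both synchronized subtrees once an agreement is in place.

# Look up the entry in the Directory Server subtree named in the sync agreement.
ldapsearch -H ldap://ds.example.com -D "cn=Directory Manager" -W \
    -b "ou=People,dc=example,dc=com" "(uid=bjensen)" cn mail

# Look up the corresponding entry in the synchronized Active Directory subtree.
ldapsearch -H ldaps://ad.example.com -D "cn=Administrator,cn=Users,dc=example,dc=com" -W \
    -b "cn=Users,dc=example,dc=com" "(sAMAccountName=bjensen)" cn mail

Because synchronization connects suffixes rather than databases, both searches are scoped to the specific subtrees named in the synchronization agreement.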
9.7. Designing Access Control | 9.7. Designing Access Control After deciding on the authentication schemes to use to establish the identity of directory clients, decide how to use those schemes to protect the information contained in the directory. Access control can specify that certain clients have access to particular information, while other clients do not. Access control is defined using one or more access control lists (ACLs). The directory's ACLs consist of a series of one or more access control information (ACI) statements that either allow or deny permissions (such as read, write, search, and compare) to specified entries and their attributes. Using the ACL, permissions can be set at any level of the directory tree: The entire directory. A particular subtree of the directory. Specific entries in the directory. A specific set of entry attributes. Any entry that matches a given LDAP search filter. In addition, permissions can be set for a specific user, for all users belonging to a specific group, or for all users of the directory. Lastly, access can be defined for a network location such as an IP address (IPv4 or IPv6) or a DNS name. 9.7.1. About the ACI Format When designing the security policy, it is helpful to understand how ACIs are represented in the directory. It is also helpful to understand what permissions can be set in the directory. This section gives a brief overview of the ACI mechanism. For a complete description of the ACI format, see the Red Hat Directory Server Administration Guide . Directory ACIs use the following general form: target permission bind_rule The ACI variables are defined below: target . Specifies the entry (usually a subtree) that the ACI targets, the attribute it targets, or both. The target identifies the directory element that the ACI applies to. An ACI can target only one entry, but it can target multiple attributes. In addition, the target can contain an LDAP search filter. Permissions can be set for widely scattered entries that contain common attribute values. permission . Identifies the actual permission being set by this ACI. The permission variable states that the ACI is allowing or denying a specific type of directory access, such as read or search, to the specified target. bind rule . Identifies the bind DN or network location to which the permission applies. The bind rule may also specify an LDAP filter, and if that filter is evaluated to be true for the binding client application, then the ACI applies to the client application. ACIs can therefore be expressed as follows: "For the directory object target, allow or deny permission if bind_rule is true." permission and bind_rule are set as a pair, and there can be multiple permission - bind_rule pairs for every target. Multiple access controls can be effectively set for any given target. For example: A permission can be set to allow anyone binding as Babs Jensen to write to Babs Jensen's telephone number. The bind rule in this permission is the part that states "if you bind as Babs Jensen." The target is Babs Jensen's phone number, and the permission is write access. 9.7.1.1. Targets Decide which entry is targeted by every ACI created in the directory. Targeting a directory branch point entry includes that branch point and all of its child entries in the scope of the permission. If a target entry is not explicitly defined for the ACI, then the ACI is targeted to the directory entry that contains the ACI statement. Set the targetattr parameter to target one or more attributes. 
If the targetattr parameter is not set, no attributes are targeted. For further details, see the corresponding section in the Red Hat Directory Server Administration Guide . For every ACI, only one entry or only those entries that match a single LDAP search filter can be targeted. In addition to targeting entries, it is possible to target attributes on the entry; this applies the permission to only a subset of attribute values. Target sets of attributes by explicitly naming those attributes that are targeted or by explicitly naming the attributes that are not targeted by the ACI. Excluding attributes in the target sets a permission for all but a few attributes allowed by an object class structure. For further details, see the corresponding section in the Red Hat Directory Server Administration Guide . 9.7.1.2. Permissions Permissions can either allow or deny access. In general, avoid denying permissions (for the reasons explained in Section 9.7.2.2, "Allowing or Denying Access" ). Permissions can be any operation performed on the directory service:
Read . Indicates whether directory data may be read.
Write . Indicates whether directory data may be changed or created. This permission also allows directory data to be deleted but not the entry itself. To delete an entire entry, the user must have delete permissions.
Search . Indicates whether the directory data can be searched. This differs from the read permission in that read allows directory data to be viewed if it is returned as part of a search operation. For example, if searching for common names is allowed as well as read permission for a person's room number, then the room number can be returned as part of the common name search, but the room number itself cannot be used as the subject of a search. Use this combination to prevent people from searching the directory to see who sits in a particular room.
Compare . Indicates whether the data may be used in comparison operations. The compare permission implies the ability to search, but actual directory information is not returned as a result of the search. Instead, a simple Boolean value is returned which indicates whether the compared values match. This is used to match userPassword attribute values during directory authentication.
Self-write . Used only for group management. This permission enables a user to add to or delete themselves from a group.
Add . Indicates whether child entries can be created. This permission enables a user to create child entries beneath the targeted entry.
Delete . Indicates whether an entry can be deleted. This permission enables a user to delete the targeted entry.
Proxy . Indicates that the user can use any other DN, except Directory Manager, to access the directory with the rights of this DN.
9.7.1.3. Bind Rules The bind rule usually indicates the bind DN subject to the permission. It can also specify bind attributes such as time of day or IP address. Bind rules easily express that the ACI applies only to a user's own entry. This allows users to update their own entries without running the risk of a user updating another user's entry. Bind rules indicate that the ACI is applicable in specific situations: Only if the bind operation is arriving from a specific IP address (IPv4 or IPv6) or DNS host name. This is often used to force all directory updates to occur from a given machine or network domain. If the person binds anonymously.
Setting a permission for anonymous bind also means that the permission applies to anyone who binds to the directory as well. For anyone who successfully binds to the directory. This allows general access while preventing anonymous access. Only if the client has bound as the immediate parent of the entry. Only if the entry as which the person has bound meets a specific LDAP search criteria. The Directory Server provides several keywords to more easily express these kinds of access: Parent . If the bind DN is the immediate parent entry, then the bind rule is true. This means that specific permissions can be granted that allow a directory branch point to manage its immediate child entries. Self . If the bind DN is the same as the entry requesting access, then the bind rule is true. Specific permission can be granted to allow individuals to update their own entries. All . The bind rule is true for anyone who has successfully bound to the directory. Anyone . The bind rule is true for everyone. This keyword is used to allow or deny anonymous access. 9.7.2. Setting Permissions By default, all users are denied access rights of any kind, with the exception of the Directory Manager. Consequently, some ACIs must be set for the directory for users to be able to access the directory. For information about how to set ACIs in the directory, see the Red Hat Directory Server Administration Guide . 9.7.2.1. The Precedence Rule When a user attempts any kind of access to a directory entry, Directory Server examines the access control set in the directory. To determine access, Directory Server applies the precedence rule . This rule states that when two conflicting permissions exist, the permission that denies access always takes precedence over the permission that grants access. For example, if write permission is denied at the directory's root level, and that permission is applied to everyone accessing the directory, then no user can write to the directory regardless of any other permissions that may allow write access. To allow a specific user write permissions to the directory, the scope of the original deny-for-write has to be set so that it does not include that user. Then, there must be additional allow-for-write permission for the user in question. 9.7.2.2. Allowing or Denying Access Access to the directory tree can be explicitly allowed or denied, but be careful of explicitly denying access to the directory. Because of the precedence rule, if the directory finds rules explicitly forbidding access, the directory forbids access regardless of any conflicting permissions that may grant access. Limit the scope of allow access rules to include only the smallest possible subset of users or client applications. For example, permissions can be set that allow users to write to any attribute on their directory entry, but then deny all users except members of the Directory Administrators group the privilege of writing to the uid attribute. Alternatively, write two access rules that allow write access in the following ways: Create one rule that allows write privileges to every attribute except the uid attribute. This rule should apply to everyone. Create one rule that allows write privileges to the uid attribute. This rule should apply only to members of the Directory Administrators group. Providing only allow privileges avoids the need to set an explicit deny privilege. 9.7.2.3. 
When to Deny Access It is rarely necessary to set an explicit deny privilege, but there are a few circumstances where it is useful: There is a large directory tree with a complex ACL spread across it. For security reasons, it may be necessary to suddenly deny access to a particular user, group, or physical location. Rather than spending the time to carefully examine the existing ACL to understand how to appropriately restrict the allow permissions, temporarily set the explicit deny privilege until there is time to do the analysis. If the ACL has become this complex, then, in the long run, the deny ACI only adds to the administrative overhead. As soon as possible, rework the ACL to avoid the explicit deny privilege and then simplify the overall access control scheme. Access control should be based on a day of the week or an hour of the day. For example, all writing activities can be denied from Sunday at 11:00 p.m. (2300) to Monday at 1:00 a.m. (0100). From an administrative point of view, it may be easier to manage an ACI that explicitly restricts time-based access of this kind than to search through the directory for all the allow-for-write ACIs and restrict their scopes in this time frame. Privileges should be restricted when delegating directory administration authority to multiple people. To allow a person or group of people to manage some part of the directory tree, without allowing them to modify some aspect of the tree, use an explicit deny privilege. For example, to make sure that Mail Administrators do not allow write access to the common name attribute, then set an ACI that explicitly denies write access to the common name attribute. 9.7.2.4. Where to Place Access Control Rules Access control rules can be placed on any entry in the directory. Often, administrators place access control rules on entries with the object classes domainComponent , country , organization , organizationalUnit , inetOrgPerson , or group . Organize rules into groups as much as possible in order to simplify ACL administration. Rules generally apply to their target entry and to all of that entry's children. Consequently, it is best to place access control rules on root points in the directory or on directory branch points, rather than scatter them across individual leaf (such as person) entries. 9.7.2.5. Using Filtered Access Control Rules One of the more powerful features of the Directory Server ACI model is the ability to use LDAP search filters to set access control. Use LDAP search filters to set access to any directory entry that matches a defined set of criteria. For example, allow read access for any entry that contains an organizationalUnit attribute that is set to Marketing. Filtered access control rules allow predefined levels of access. Suppose the directory contains home address and telephone number information. Some people want to publish this information, while others want to be unlisted. There are several ways to address that: Create an attribute on every user's directory entry called publishHomeContactInfo . Set an access control rule that grants read access to the homePhone and homePostalAddress attributes only for entries whose publishHomeContactInfo attribute is set to true (meaning enabled). Use an LDAP search filter to express the target for this rule. Allow the directory users to change the value of their own publishHomeContactInfo attribute to either true or false . In this way, the directory user can decide whether this information is publicly available. 
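As a sketch of the publishHomeContactInfo scenario above (the suffix and the ACL name are illustrative assumptions, and the attribute itself is the custom attribute this section proposes):
ldapmodify -x -D "cn=Directory Manager" -W <<'EOF'
dn: ou=People,dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr = "homePhone || homePostalAddress")(targetfilter = "(publishHomeContactInfo=TRUE)")(version 3.0; acl "Read published home contact info"; allow (read, search, compare) userdn = "ldap:///all";)
EOF
Only entries whose publishHomeContactInfo value matches the filter expose those two attributes to authenticated users; everyone else remains unlisted.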
For more information about using LDAP search filters and on using LDAP search filters with ACIs, see the Red Hat Directory Server Administration Guide . 9.7.3. Viewing ACIs: Get Effective Rights It can be necessary to view access controls set on an entry to grant fine-grained access control or for efficient entry management. Get effective rights is an extended ldapsearch which returns the access control permissions set on each attribute within an entry, and allows an LDAP client to determine what operations the server's access control configuration allows a user to perform. The access control information is divided into two groups of access: rights for an entry and rights for an attribute. "Rights for an entry" means the rights, such as modify or delete, that are limited to that specific entry. "Rights for an attribute" means the access right to every instance of that attribute throughout the directory. This kind of detailed access control may be necessary in the following types of situations: An administrator can use the get effective rights command for minute access control, such as allowing certain groups or users access to entries and restricting others. For example, members of the QA Managers group may have the right to search and read attributes such as title and salary , but only HR Group members have the rights to modify or delete them. A user can use the get effective rights option to determine what attributes they can view or modify on their personal entry. For example, a user should have access to attributes such as homePostalAddress and cn , but may only have read access to title and salary . An ldapsearch executed using the -E switch returns the access controls on a particular entry as part of the normal search results. The following search shows the rights that user Ted Morris has to his personal entry: In this example, Ted Morris has the right to add, view, delete, or rename the DN on his own entry, as shown by the results in entryLevelRights . He can read, search, compare, self-modify, or self-delete the location ( l ) attribute but has only self-write and self-delete rights to his password, as shown in the attributeLevelRights result. By default, effective rights information is not returned for attributes in an entry that do not have a value or which do not exist in the entry. For example, if the userPassword value is removed, then a future effective rights search on the above entry would not return any effective rights for userPassword , even though self-write and self-delete rights could be allowed. Similarly, if the street attribute were added with read, compare, and search rights, then street: rsc would appear in the attributeLevelRights results. It is possible to return rights for attributes which are not normally included in the search results, like non-existent attributes or operational attributes. Using an asterisk ( * ) returns the rights for all possible attributes for an entry, including non-existent attributes. Using the plus sign ( + ) returns operational attributes for the entry, which are not normally returned in an ldapsearch. For example: The asterisk ( * ) and the plus sign ( + ) can be used together to return every attribute for the entry. 9.7.4. Using ACIs: Some Hints and Tricks Keep these tips in mind when implementing the security policy. They can help to lower the administrative burden of managing the directory security model and improve the directory's performance characteristics. Minimize the number of ACIs in the directory.
Although the Directory Server can evaluate over 50,000 ACIs, it is difficult to manage a large number of ACI statements. A large number of ACIs makes it hard for human administrators to immediately determine the directory object available to particular clients. Directory Server minimizes the number of ACIs in the directory by using macros. Macros are placeholders that are used to represent a DN, or a portion of a DN, in an ACI. Use the macro to represent a DN in the target portion of the ACI or in the bind rule portion, or both. For more information on macro ACIs, see the "Managing Access Control" chapter in the Red Hat Directory Server Administration Guide . Balance allow and deny permissions. Although the default rule is to deny access to any user who has not been specifically granted access, it may be better to reduce the number of ACIs by using one ACI to allow access close to the root of the tree, and a small number of deny ACIs close to the leaf entries. This scenario can avoid the use of multiple allow ACIs close to the leaf entries. Identify the smallest set of attributes on any given ACI. When allowing or denying access to a subset of attributes on an object, determine whether the smallest list is the set of attributes that are allowed or the set of attributes that are denied. Then express the ACI so that it only requires managing the smallest list. For example, the person object class contains a large number of attributes. To allow a user to update only one or two of these attributes, write the ACI so that it allows write access for only those few attributes. However, to allow a user to update all but one or two attributes, create the ACI so that it allows write access for everything but a few named attributes. Use LDAP search filters cautiously. Search filters do not directly name the object for which you are managing access. Consequently their use can produce unexpected results. This is especially true as the directory becomes more complex. Before using search filters in ACIs, run an ldapsearch operation using the same filter to make clear what the results of the changes mean to the directory. Do not duplicate ACIs in differing parts of the directory tree. Guard against overlapping ACIs. For example, if there is an ACI at the directory root point that allows a group write access to the commonName and givenName attributes, and another ACI that allows the same group write access for only the commonName attribute, then consider reworking the ACIs so that only one control grants the write access for the group. As the directory grows more complex, the risk of accidentally overlapping ACIs quickly increases. By avoiding ACI overlap, security management becomes easier while potentially reducing the total number of ACIs contained in the directory. Name ACIs. While naming ACIs is optional, giving each ACI a short, meaningful name helps with managing the security model. Group ACIs as closely together as possible within the directory. Try to limit ACI placement to the directory root point and to major directory branch points. Grouping ACIs helps to manage the total list of ACIs, as well as helping keep the total number of ACIs in the directory to a minimum. Avoid using double negatives, such as deny write if the bind DN is not equal to cn=Joe . Although this syntax is perfectly acceptable for the server, it is confusing for a human administrator. 9.7.5. Applying ACIs to the Root DN (Directory Manager) Normally, access control rules do not apply to the Directory Manager user. 
The Directory Manager is defined in the dse.ldif file, not in the regular user database, and so ACI targets do not include that user. It also makes sense from a maintenance perspective. The Directory Manager requires a high level of access in order to perform maintenance tasks and to respond to incidents. Still, because of the power of the Directory Manager user, a certain level of access control may be advisable to prevent unauthorized access or attacks from being performed as the root user. The RootDN Access Control Plug-in sets certain access control rules specific to the Directory Manager user: Time-based access controls for time ranges, such as 8 a.m. to 5 p.m. (0800 to 1700). Day-of-week access controls, so access is only allowed on explicitly defined days IP address rules, where only specified IP addresses, domains, or subnets are explicitly allowed or denied Host access rules, where only specified host names, domain names, or subdomains are explicitly allowed or denied As with other access control rules, deny rules supersede allow rules. Important Make sure that the Directory Manager always has the appropriate level of access allowed. The Directory Manager may need to perform maintenance operations in off-hours (when user load is light) or to respond to failures. In that case, setting stringent time or day-based access control rules could prevent the Directory Manager from being able to adequately manage the directory. Root DN access control rules are disabled by default. The RootDN Access Control Plug-in must be enabled, and then the appropriate access control rules can be set. Note There is only one access control rule set for the Directory Manager, in the plug-in entry, and it applies to all access to the entire directory. | [
"target (permission bind_rule)(permission bind_rule)",
"ldapsearch -x -p 389 -h server.example.com -D \"uid=tmorris,ou=people,dc=example,dc=com\" -W -b \"uid=tmorris,ou=people,dc=example,dc=com\" -E !1.3.6.1.4.1.42.2.27.9.5.2:dn:uid=tmorris,ou=people,dc=example,dc=com \"(objectClass=*)\" version: 1 dn: uid=tmorris,ou=People,dc=example,dc=com givenName: Ted sn: Morris ou: Accounting ou: People l: Santa Clara manager: uid=dmiller,ou=People,dc=example,dc=com roomNumber: 4117 mail: [email protected] facsimileTelephoneNumber: +1 408 555 5409 objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson uid: tmorris cn: Ted Morris userPassword: {SSHA}bz0uCmHZM5b357zwrCUCJs1IOHtMD6yqPyhxBA== entryLevelRights: vadn attributeLevelRights: givenName:rsc, sn:rsc, ou:rsc, l:rscow, manager:rsc, roomNumber:rscwo, mail:rscwo, facsimileTelephoneNumber:rscwo, objectClass:rsc, uid:rsc, cn:rsc, userPassword:wo",
"ldapsearch -x -E !1.3.6.1.4.1.42.2.27.9.5.2:dn:uid=scarter,ou=people,dc=example,dc=com \"(objectclass=*)\" \"*\"",
"ldapsearch -x -E !1.3.6.1.4.1.42.2.27.9.5.2:dn:uid=scarter,ou=people,dc=example,dc=com \"(objectclass=*)\" \"+\""
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_a_Secure_Directory-Designing_Access_Control |
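Following on from the RootDN Access Control Plug-in described in the section above, a hedged configuration sketch is shown here; the plug-in DN and the rootdn-* attribute names are given from memory and should be verified against the Administration Guide for your version:
ldapmodify -x -D "cn=Directory Manager" -W <<'EOF'
dn: cn=RootDN Access Control,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
-
replace: rootdn-open-time
rootdn-open-time: 0600
-
replace: rootdn-close-time
rootdn-close-time: 2300
-
replace: rootdn-allow-ip
rootdn-allow-ip: 192.0.2.*
EOF
# Restart the instance so the plug-in change takes effect (instance name is a placeholder):
systemctl restart dirsrv@<instance_name>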
Chapter 12. Preparing for a director-deployed Ceph Storage upgrade | Chapter 12. Preparing for a director-deployed Ceph Storage upgrade If your deployment uses a director-deployed Red Hat Ceph Storage cluster, you must complete the procedures included in this section. Important RHOSP 16.2 is supported on RHEL 8.4. However, hosts that are mapped to the Ceph Storage role update to the latest major RHEL release. For more information, see Red Hat Ceph Storage: Supported configurations . Note If you are upgrading with external Ceph deployments, you must skip the procedures included in this section and continue to Chapter 13, Preparing for upgrading with external Ceph deployments . The upgrade process maintains the use of Red Hat Ceph Storage 3 containerized services during the upgrade to Red Hat OpenStack Platform 16.2. After you complete the Red Hat OpenStack Platform 16.2 upgrade, you upgrade the Ceph Storage services to Red Hat Ceph Storage 4. You cannot provision new shares with the Shared File Systems service (manila) until you complete both the Red Hat OpenStack Platform 16.2 upgrade and the Ceph Storage services upgrade to Red Hat Ceph Storage 4. 12.1. Understanding the Ceph Storage node upgrade process at a high level The director-deployed Ceph Storage nodes continue to use Red Hat Ceph Storage 3 containers during the overcloud upgrade process. To understand how Ceph Storage nodes and services are impacted during the upgrade process, read the following summaries for each aspect of the Ceph Storage upgrade process. ceph-ansible ceph-ansible is a collection of roles and playbooks that director uses to install, maintain, and upgrade Ceph Storage services. When you upgraded the undercloud, you ran certain commands that ensured ceph-ansible remained at the latest version 3 collection after the transition to Red Hat Enterprise Linux 8.4. Version 3 of ceph-ansible keeps the containerized Ceph Storage services on version 3 through the duration of the overcloud upgrade. After you complete the upgrade, you enable the Red Hat Ceph Storage Tools 4 for RHEL 8 repository and update ceph-ansible to version 4. Migration to Podman During the overcloud upgrade, you must run the openstack overcloud external-upgrade run --tags ceph_systemd command to change the systemd services that control Ceph Storage containerized services to use Podman instead of Docker. You run this command before performing the operating system upgrade on any node that contains Ceph Storage containerized services. After you change the systemd services to use Podman on a node, you perform the operating system upgrade and the OpenStack Platform service upgrade. The Ceph Storage containers on that node will run again after the OpenStack Platform service upgrade. Ceph Storage operating system upgrade You follow the same workflow on Ceph Storage nodes as you do on overcloud nodes in general. When you run the openstack overcloud upgrade run --tags system_upgrade command against a Ceph Storage node, director runs Leapp on the Ceph Storage node and upgrades the operating system to Red Hat Enterprise Linux 8.4.
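A hedged sketch of that per-node ordering, run from the undercloud as the stack user; the node name and the --limit option shown here are placeholders, and the exact invocations for your environment are given in the upgrade procedures of this guide:
# 1. Switch the Ceph systemd units on the node from Docker to Podman.
openstack overcloud external-upgrade run --tags ceph_systemd
# 2. Run Leapp to upgrade the operating system on that Ceph Storage node.
openstack overcloud upgrade run --tags system_upgrade --limit <ceph_storage_node>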
You then run the untagged openstack overcloud upgrade run command against the Ceph Storage node, which runs the following containers: Red Hat Ceph Storage 3 containerized services Red Hat OpenStack Platform 16.2 containerized services Upgrading to Red Hat Ceph Storage 4 After you complete the Leapp upgrade and Red Hat OpenStack Platform upgrade, the Ceph Storage containerized services will still use version 3 containers. At this point, you must upgrade ceph-ansible to version 4 and then run the openstack overcloud external-upgrade run --tags ceph command that performs an upgrade of all Red Hat Ceph Storage services on all nodes to version 4. Summary of the Ceph Storage workflow The following list is a high level workflow for the Red Hat Ceph Storage upgrade. This workflow is integrated into the general Red Hat OpenStack Platform workflow and you run upgrade framework commands on the undercloud to perform the operations in this workflow. Upgrade the undercloud but retain version 3 of ceph-ansible Start the overcloud upgrade Perform the following tasks for each node that hosts Ceph Storage containerized services: Migrate the Ceph Storage containerized services to Podman Upgrade the operating system Upgrade the OpenStack Platform services, which relaunches Ceph Storage version 3 containerized services Complete the overcloud upgrade Upgrade ceph-ansible to version 4 on the undercloud Upgrade to Red Hat Ceph Storage 4 on the overcloud Note This list does not capture all steps in the complete Red Hat OpenStack Platform 16.2 upgrade process but focuses only on the aspects relevant to Red Hat Ceph Storage to describe what occurs to Ceph Storage services during the upgrade process. 12.2. Checking your ceph-ansible version During the undercloud upgrade, you retained the Ceph Storage 3 version of the ceph-ansible package. This helps maintain the compatibility of the Ceph Storage 3 containers on your Ceph Storage nodes. Verify that this package remains on your undercloud. Procedure Log in to the undercloud as the stack user. Run the dnf command to check the version of the ceph-ansible package: The command output shows version 3 of the ceph-ansible package: Important If the ceph-ansible package is missing or not a version 3 package, download the latest version 3 package from the Red Hat Package Browser and manually install the package on your undercloud. Note that the ceph-ansible version 3 package is only available from Red Hat Enterprise Linux 7 repositories and is not available in Red Hat Enterprise Linux 8 repositories. ceph-ansible version 3 is not supported on Red Hat Enterprise Linux 8 outside the context of the Red Hat OpenStack Platform framework for upgrades. 12.3. Setting the ceph-ansible repository The Red Hat OpenStack Platform 16.2 validation framework tests that ceph-ansible is installed correctly before director upgrades the overcloud to Red Hat Ceph Storage 4. The framework uses the CephAnsibleRepo parameter to check that you installed ceph-ansible from the correct repository. Director disables the test after you run the openstack overcloud upgrade prepare command and this test remains disabled through the duration of the Red Hat OpenStack Platform 16.2 overcloud upgrade. Director re-enables this test after running the openstack overcloud upgrade converge command. However, to prepare for this validation, you must set the CephAnsibleRepo parameter to the Red Hat Ceph Storage Tools 4 for RHEL 8 repository. Procedure Log in to the undercloud as the stack user. 
Edit the environment file that contains your overcloud Ceph Storage configuration. This file is usually named ceph-config.yaml and you can find it in your templates directory: Add the CephAnsibleRepo parameter to the parameter_defaults section: CephAnsibleRepo sets the repository that includes ceph-ansible . The validation framework uses this parameter to check that you have installed ceph-ansible on the undercloud. Save the ceph-config.yaml file. 12.4. Checking Ceph cluster status before an upgrade Before you can proceed with the overcloud upgrade, you must verify that the Ceph cluster is functioning as expected. Procedure Log in to the node that is running the ceph-mon service. This node is usually a Controller node or a standalone Ceph Monitor node. Enter the following command to view the status of the Ceph cluster: Confirm that the health status of the cluster is HEALTH_OK and that all of the OSDs are up . | [
"sudo dnf info ceph-ansible",
"Installed Packages Name : ceph-ansible Version : 3.xx.xx.xx",
"vi /home/stack/templates/ceph-config.yaml",
"parameter_defaults: CephAnsibleRepo: rhceph-4-tools-for-rhel-8-x86_64-rpms",
"docker exec ceph-mon-USDHOSTNAME ceph -s"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/preparing-for-a-director-deployed-ceph-storage-upgrade_preparing-overcloud |
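After the overcloud upgrade in this chapter is complete, the move to Red Hat Ceph Storage 4 summarized above reduces to roughly the following undercloud steps; this is a hedged outline rather than the full documented procedure:
# Enable the Red Hat Ceph Storage Tools 4 for RHEL 8 repository and update ceph-ansible to version 4.
sudo subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
sudo dnf update -y ceph-ansible
# Upgrade the Ceph Storage services on all overcloud nodes to Red Hat Ceph Storage 4.
openstack overcloud external-upgrade run --tags ceph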
17.8. Managing DNS Settings | 17.8. Managing DNS Settings The DNS tab allows you to configure the system's hostname, domain, name servers, and search domain. Name servers are used to look up other hosts on the network. If the DNS server names are retrieved from DHCP or PPPoE (or retrieved from the ISP), do not add primary, secondary, or tertiary DNS servers. If the hostname is retrieved dynamically from DHCP or PPPoE (or retrieved from the ISP), do not change it. Figure 17.14. DNS Configuration Note The name servers section does not configure the system to be a name server. Instead, it configures which name servers to use when resolving IP addresses to hostnames and vice-versa. Warning If the hostname is changed and system-config-network is started on the local host, you may not be able to start another X11 application. As such, you may have to re-login to a new desktop session. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-network-config-dns |
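For reference, the search domain and name servers configured in the section above are ultimately written to /etc/resolv.conf (and the hostname to /etc/sysconfig/network). A hand-written sketch with placeholder values follows; on connections managed by DHCP or PPPoE these lines are generated automatically and should not be edited by hand:
cat > /etc/resolv.conf <<'EOF'
search example.com
nameserver 192.0.2.1
nameserver 192.0.2.2
EOF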
Chapter 1. Template APIs | Chapter 1. Template APIs 1.1. BrokerTemplateInstance [template.openshift.io/v1] Description BrokerTemplateInstance holds the service broker-related state associated with a TemplateInstance. BrokerTemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. PodTemplate [v1] Description PodTemplate describes a template for creating copies of a predefined pod. Type object 1.3. Template [template.openshift.io/v1] Description Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. TemplateInstance [template.openshift.io/v1] Description TemplateInstance requests and records the instantiation of a Template. TemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/template_apis/template-apis |
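To make the Template object described above concrete, the following hypothetical sketch stores a template and instantiates it with oc; the template name, parameter, and embedded ConfigMap are illustrative only:
cat <<'EOF' | oc apply -f -
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-template
parameters:
- name: APP_NAME
  value: example
objects:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ${APP_NAME}-config
  data:
    greeting: hello
EOF
# Substitute the parameter and create the resulting objects.
oc process example-template -p APP_NAME=demo | oc apply -f -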
Chapter 4. Basic configuration options of Shenandoah garbage collector | Chapter 4. Basic configuration options of Shenandoah garbage collector Shenandoah garbage collector (GC) has the following basic configuration options: -Xlog:gc Print the individual GC timing. -Xlog:gc+ergo Print the heuristics decisions, which might shed light on outliers, if any. -Xlog:gc+stats Print the summary table on Shenandoah internal timings at the end of the run. It is best to run this with logging enabled. This summary table conveys important information about GC performance. Heuristics logs are useful to figure out GC outliers. -XX:+AlwaysPreTouch Commits heap pages into memory and helps to reduce latency hiccups. -Xms and -Xmx Making the heap non-resizeable with -Xms = -Xmx reduces difficulties with heap management. Along with AlwaysPreTouch , -Xms = -Xmx commits all memory on startup, which avoids difficulties when memory is finally used. -Xms also defines the low boundary for memory uncommit, so with -Xms = -Xmx all memory stays committed. If you want to configure Shenandoah for a lower footprint, then setting lower -Xms is recommended. You need to decide how low to set it to balance the commit/uncommit overhead versus memory footprint. In many cases, you can set -Xms arbitrarily low. -XX:+UseLargePages Enables hugetlbfs Linux support. -XX:+UseTransparentHugePages Enables huge pages transparently. With transparent huge pages, it is recommended to set /sys/kernel/mm/transparent_hugepage/enabled and /sys/kernel/mm/transparent_hugepage/defrag to madvise . When running with AlwaysPreTouch , it will also pay the defrag tool costs upfront at startup. -XX:+UseNUMA While Shenandoah does not support NUMA explicitly yet, it is a good idea to enable NUMA interleaving on multi-socket hosts. Coupled with AlwaysPreTouch , it provides better performance than the default out-of-the-box configuration. -XX:-UseBiasedLocking There is a tradeoff between uncontended (biased) locking throughput, and the safepoints the JVM does to enable and disable them. For latency-oriented workloads, turn biased locking off. -XX:+DisableExplicitGC Invoking System.gc() from user code forces Shenandoah to perform an additional GC cycle. It usually does not harm, as -XX:+ExplicitGCInvokesConcurrent gets enabled by default, which means the concurrent GC cycle would be invoked, not the STW Full GC. Revised on 2024-05-10 09:08:44 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk/shenandoah-gc-basic-configuration
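Putting the options above together, a typical latency-oriented invocation might look like the following sketch; the heap size and application JAR are placeholders, and depending on the JDK build you may also need to unlock experimental VM options before selecting Shenandoah:
java -XX:+UseShenandoahGC \
     -Xms8g -Xmx8g \
     -XX:+AlwaysPreTouch \
     -XX:+UseNUMA \
     -XX:-UseBiasedLocking \
     -XX:+DisableExplicitGC \
     -Xlog:gc -Xlog:gc+ergo -Xlog:gc+stats \
     -jar myapp.jar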
Migration Toolkit for Containers | Migration Toolkit for Containers OpenShift Container Platform 4.9 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team | [
"status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: \"True\" type: ExtendedPVAnalysisFailed",
"sudo podman login registry.redhat.io",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi8 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv` AZURE_CLIENT_ID=`az ad sp list --display-name \"velero\" --query '[0].appId' -o tsv`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')",
"sudo podman login registry.redhat.io",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc",
"registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator",
"containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')",
"sudo podman login registry.redhat.io",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"oc replace --force -f operator.yml",
"oc scale -n openshift-migration --replicas=0 deployment/migration-operator",
"oc scale -n openshift-migration --replicas=1 deployment/migration-operator",
"oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"spec: indirectImageMigration: true indirectVolumeMigration: true",
"oc replace -f migplan.yaml -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"oc get pv",
"oc get pods --all-namespaces | egrep -v 'Running | Completed'",
"oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'",
"oc get csr -A | grep pending -i",
"oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>",
"oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'",
"oc sa get-token migration-controller -n openshift-migration",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ",
"oc create route passthrough --service=docker-registry --port=5000 -n default",
"oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry",
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe cluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11",
"oc -n openshift-migration get pods | grep log",
"oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump",
"tar -xvzf must-gather/metrics/prom_data.tar.gz",
"make prometheus-run",
"Started Prometheus on http://localhost:9090",
"make prometheus-cleanup",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"oc get migmigration <migmigration> -o yaml",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>",
"Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>",
"time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"oc get migmigration -n openshift-migration",
"NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s",
"oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration",
"name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>",
"apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0",
"apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15",
"oc describe migmigration <pod> -n openshift-migration",
"Some or all transfer pods are not running for more than 10 mins on destination cluster",
"oc get namespace <namespace> -o yaml 1",
"oc edit namespace <namespace>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"",
"echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2",
"oc logs <Velero_Pod> -n openshift-migration",
"level=error msg=\"Error checking repository for stale locks\" error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1",
"spec: restic_timeout: 1h 1",
"status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2",
"oc describe <registry-example-migration-rvwcm> -n openshift-migration",
"status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration",
"oc describe <migration-example-rvwcm-98t49>",
"completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>",
"oc logs -f <restic-nr2v5>",
"backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration",
"spec: restic_supplemental_groups: <group_id> 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF",
"oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1",
"oc scale deployment <deployment> --replicas=<premigration_replicas>",
"apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"",
"oc get pod -n <namespace>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/migration_toolkit_for_containers/index |
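The MigMigration status shown above exposes a phase field and a pipeline array that report migration progress. A small helper can poll those fields instead of repeatedly running oc describe. The following is a minimal sketch rather than part of the product: it assumes the oc client is installed and already logged in to the cluster, and the default migration name used here is only the placeholder taken from the example output.

#!/usr/bin/env python3
# Minimal sketch: poll a MigMigration resource and print its phase and pipeline steps.
# Assumes `oc` is installed and logged in; the default resource name is a placeholder.
import json
import subprocess
import sys
import time

def get_migmigration(name, namespace="openshift-migration"):
    # Fetch the MigMigration CR as JSON using oc.
    out = subprocess.run(
        ["oc", "get", "migmigration", name, "-n", namespace, "-o", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)

def main():
    name = sys.argv[1] if len(sys.argv) > 1 else "88435fe0-c9f8-11e9-85e6-5d593ce65e10"
    while True:
        cr = get_migmigration(name)
        status = cr.get("status", {})
        print("phase:", status.get("phase", "<none>"))
        for step in status.get("pipeline", []):
            print("  step:", step.get("name"), "-", step.get("message", ""))
        if status.get("phase") == "Completed":
            break
        time.sleep(10)

if __name__ == "__main__":
    main()

Run it with the MigMigration name as the first argument; it exits once the phase reports Completed.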
Chapter 7. Configuring Soft-iWARP | Chapter 7. Configuring Soft-iWARP Remote Direct Memory Access (RDMA) uses several libraries and protocols, such as iWARP and Soft-iWARP, over Ethernet to improve performance and provide an aided programming interface. Important The Soft-iWARP feature is deprecated and will be removed in RHEL 10. Soft-iWARP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . 7.1. Overview of iWARP and Soft-iWARP Remote direct memory access (RDMA) uses iWARP over Ethernet for converged and low-latency data transmission over TCP. By using standard Ethernet switches and the TCP/IP stack, iWARP routes traffic across the IP subnets to utilize the existing infrastructure efficiently. In Red Hat Enterprise Linux, multiple providers implement iWARP for their hardware network interface cards, for example, cxgb4 , irdma , and qedr . Soft-iWARP (siw) is a software-based iWARP kernel driver and user library for Linux. It is a software-based RDMA device that provides a programming interface to RDMA hardware when attached to network interface cards. It provides an easy way to test and validate the RDMA environment. 7.2. Configuring Soft-iWARP Soft-iWARP (siw) implements the iWARP Remote direct memory access (RDMA) transport over the Linux TCP/IP network stack. It enables a system with a standard Ethernet adapter to interoperate with an iWARP adapter, with another system running the Soft-iWARP driver, or with a host whose hardware supports iWARP. Important The Soft-iWARP feature is deprecated and will be removed in RHEL 10. The Soft-iWARP feature is provided as a Technology Preview only. Technology Preview features are not supported with Red Hat production Service Level Agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These previews provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. To configure Soft-iWARP, you can use this procedure in a script to run automatically when the system boots. Prerequisites An Ethernet adapter is installed Procedure Install the iproute , libibverbs , libibverbs-utils , and infiniband-diags packages: Display the RDMA links: Load the siw kernel module: Add a new siw device named siw0 that uses the enp0s1 interface: Verification View the state of all RDMA links: List the available RDMA devices: You can use the ibv_devinfo utility to display a detailed status: | [
"dnf install iproute libibverbs libibverbs-utils infiniband-diags",
"rdma link show",
"modprobe siw",
"rdma link add siw0 type siw netdev enp0s1",
"rdma link show link siw0/1 state ACTIVE physical_state LINK_UP netdev enp0s1",
"ibv_devices device node GUID ------ ---------------- siw0 0250b6fffea19d61",
"ibv_devinfo siw0 hca_id: siw0 transport: iWARP (1) fw_ver: 0.0.0 node_guid: 0250:b6ff:fea1:9d61 sys_image_guid: 0250:b6ff:fea1:9d61 vendor_id: 0x626d74 vendor_part_id: 1 hw_ver: 0x0 phys_port_cnt: 1 port: 1 state: PORT_ACTIVE (4) max_mtu: 1024 (3) active_mtu: 1024 (3) sm_lid: 0 port_lid: 0 port_lmc: 0x00 link_layer: Ethernet"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_infiniband_and_rdma_networks/configuring-soft-iwarp_configuring-infiniband-and-rdma-networks |
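The procedure above notes that the siw link is not persistent, so the configuration is typically wrapped in a script that runs when the system boots. The following is a minimal sketch under that assumption; the interface name enp0s1 and link name siw0 are the example values from the procedure, and wiring the script into a systemd unit or similar boot mechanism is left out.

#!/usr/bin/env python3
# Minimal sketch: load the siw module and re-create the Soft-iWARP link at boot.
# The interface and link names below are the example values from the procedure.
import subprocess

IFACE = "enp0s1"   # Ethernet interface to attach Soft-iWARP to (assumption)
LINK = "siw0"      # name of the RDMA link to create (assumption)

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True)

def main():
    # Load the Soft-iWARP kernel module; modprobe is idempotent.
    run(["modprobe", "siw"])
    # Only add the link if it is not already present in the rdma link output.
    links = run(["rdma", "link", "show"]).stdout
    if LINK not in links:
        run(["rdma", "link", "add", LINK, "type", "siw", "netdev", IFACE])
    # Show the resulting state for logging purposes.
    print(run(["rdma", "link", "show"]).stdout)

if __name__ == "__main__":
    main()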
6.6. Additional Resources | 6.6. Additional Resources For more information about users and groups, and tools to manage them, refer to the following resources. 6.6.1. Installed Documentation Related man pages - There are a number of man pages for the various applications and configuration files involved with managing users and groups. Some of the more important man pages have been listed here: User and Group Administrative Applications man chage - A command to modify password aging policies and account expiration. man gpasswd - A command to administer the /etc/group file. man groupadd - A command to add groups. man grpck - A command to verify the /etc/group file. man groupdel - A command to remove groups. man groupmod - A command to modify group membership. man pwck - A command to verify the /etc/passwd and /etc/shadow files. man pwconv - A tool to convert standard passwords to shadow passwords. man pwunconv - A tool to convert shadow passwords to standard passwords. man useradd - A command to add users. man userdel - A command to remove users. man usermod - A command to modify users. Configuration Files man 5 group - The file containing group information for the system. man 5 passwd - The file containing user information for the system. man 5 shadow - The file containing passwords and account expiration information for the system. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-users-groups-additional-resources |
Chapter 16. Controlling power management transitions | Chapter 16. Controlling power management transitions You can control power management transitions to improve latency. Prerequisites You have root permissions on the system. 16.1. Power saving states Modern processors actively transition to higher power saving states (C-states) from lower states. Unfortunately, transitioning from a high power saving state back to a running state can consume more time than is optimal for a real-time application. To prevent these transitions, an application can use the Power Management Quality of Service (PM QoS) interface. With the PM QoS interface, the system can emulate the behavior of the idle=poll and processor.max_cstate=1 parameters, but with a more fine-grained control of power saving states. idle=poll prevents the processor from entering the idle state. processor.max_cstate=1 prevents the processor from entering deeper C-states (energy-saving modes). When an application holds the /dev/cpu_dma_latency file open, the PM QoS interface prevents the processor from entering deep sleep states, which cause unexpected latencies when they are being exited. When the file is closed, the system returns to a power-saving state. 16.2. Configuring power management states You can control power management transitions by configuring power management states with one of the following ways: Write a value to the /dev/cpu_dma_latency file to change the maximum response time for processes in microseconds and hold the file descriptor open until low latency is required. Reference the /dev/cpu_dma_latency file in an application or a script. Prerequisites You have administrator privileges. Procedure Specify latency tolerance by writing a 32-bit number that represents a maximum response time in microseconds in /dev/cpu_dma_latency and keep the file descriptor open through the low-latency operation. A value of 0 disables C-state completely. For example: Note The Power Management Quality of Service interface ( pm_qos ) interface is only active while it has an open file descriptor. Therefore, any script or program you use to access /dev/cpu_dma_latency must hold the file open until power-state transitions are allowed. | [
"import os import signal import sys if not os.path.exists('/dev/cpu_dma_latency'): print(\"no PM QOS interface on this system!\") sys.exit(1) try: fd = os.open('/dev/cpu_dma_latency', os.O_WRONLY) os.write(fd, b'\\0\\0\\0\\0') print(\"Press ^C to close /dev/cpu_dma_latency and exit\") signal.pause() except KeyboardInterrupt: print(\"closing /dev/cpu_dma_latency\") os.close(fd) sys.exit(0)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_controlling-power-management-transitions_optimizing-rhel9-for-real-time-for-low-latency-operation |
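The example above writes four zero bytes to /dev/cpu_dma_latency, which disables C-states entirely. To request a specific latency tolerance instead, the same interface accepts a 32-bit integer giving the maximum acceptable wakeup latency in microseconds. The sketch below is a variation on the documented example, not an official tool; the 10 microsecond value is arbitrary and only illustrates how to pack the value.

#!/usr/bin/env python3
# Minimal sketch: hold /dev/cpu_dma_latency open with a specific latency tolerance.
# The 10 microsecond value is an arbitrary example; adjust it for your workload.
import os
import signal
import struct
import sys

LATENCY_US = 10  # maximum tolerated wakeup latency in microseconds (example value)

if not os.path.exists('/dev/cpu_dma_latency'):
    print("no PM QOS interface on this system!")
    sys.exit(1)

fd = os.open('/dev/cpu_dma_latency', os.O_WRONLY)
try:
    # The interface expects a native 32-bit integer when written as binary.
    os.write(fd, struct.pack('=i', LATENCY_US))
    print("Holding /dev/cpu_dma_latency open, press ^C to release")
    signal.pause()
except KeyboardInterrupt:
    pass
finally:
    os.close(fd)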
16.12. Assign Storage Devices | 16.12. Assign Storage Devices If you selected more than one storage device on the storage devices selection screen (refer to Section 16.8, "Storage Devices" ), anaconda asks you to select which of these devices should be available for installation of the operating system, and which should only be attached to the file system for data storage. If you selected only one storage device, anaconda does not present you with this screen. During installation, the devices that you identify here as being for data storage only are mounted as part of the file system, but are not partitioned or formatted. Figure 16.33. Assign storage devices The screen is split into two panes. The left pane contains a list of devices to be used for data storage only. The right pane contains a list of devices that are to be available for installation of the operating system. Each list contains information about the devices to help you to identify them. A small drop-down menu marked with an icon is located to the right of the column headings. This menu allows you to select the types of data presented on each device. Reducing or expanding the amount of information presented might help you to identify particular devices. Move a device from one list to the other by clicking on the device, then clicking either the button labeled with a left-pointing arrow to move it to the list of data storage devices or the button labeled with a right-pointing arrow to move it to the list of devices available for installation of the operating system. The list of devices available as installation targets also includes a radio button beside each device. Use this radio button to specify the device that you want to use as the boot device for the system. Important If any storage device contains a boot loader that will chain load the Red Hat Enterprise Linux boot loader, include that storage device among the Install Target Devices . Storage devices that you identify as Install Target Devices remain visible to anaconda during boot loader configuration. Storage devices that you identify as Install Target Devices on this screen are not automatically erased by the installation process unless you selected the Use All Space option on the partitioning screen (refer to Section 16.15, "Disk Partitioning Setup" ). When you have finished identifying devices to be used for installation, click to continue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/assign_storage_devices-ppc |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/proc_providing-feedback-on-red-hat-documentation_configuring-authentication-and-authorization-in-rhel |
Chapter 24. Configuring resources to remain stopped on clean node shutdown | Chapter 24. Configuring resources to remain stopped on clean node shutdown When a cluster node shuts down, Pacemaker's default response is to stop all resources running on that node and recover them elsewhere, even if the shutdown is a clean shutdown. You can configure Pacemaker so that when a node shuts down cleanly, the resources attached to the node will be locked to the node and unable to start elsewhere until they start again when the node that has shut down rejoins the cluster. This allows you to power down nodes during maintenance windows when service outages are acceptable without causing that node's resources to fail over to other nodes in the cluster. 24.1. Cluster properties to configure resources to remain stopped on clean node shutdown The ability to prevent resources from failing over on a clean node shutdown is implemented by means of the following cluster properties. shutdown-lock When this cluster property is set to the default value of false , the cluster will recover resources that are active on nodes being cleanly shut down. When this property is set to true , resources that are active on the nodes being cleanly shut down are unable to start elsewhere until they start on the node again after it rejoins the cluster. The shutdown-lock property will work for either cluster nodes or remote nodes, but not guest nodes. If shutdown-lock is set to true , you can remove the lock on one cluster resource when a node is down so that the resource can start elsewhere by performing a manual refresh on the node with the following command. pcs resource refresh resource node= nodename Note that once the resources are unlocked, the cluster is free to move the resources elsewhere. You can control the likelihood of this occurring by using stickiness values or location preferences for the resource. Note A manual refresh will work with remote nodes only if you first run the following commands: Run the systemctl stop pacemaker_remote command on the remote node to stop the node. Run the pcs resource disable remote-connection-resource command. You can then perform a manual refresh on the remote node. shutdown-lock-limit When this cluster property is set to a time other than the default value of 0, resources will be available for recovery on other nodes if the node does not rejoin within the specified time since the shutdown was initiated. Note The shutdown-lock-limit property will work with remote nodes only if you first run the following commands: Run the systemctl stop pacemaker_remote command on the remote node to stop the node. Run the pcs resource disable remote-connection-resource command. After you run these commands, the resources that had been running on the remote node will be available for recovery on other nodes when the amount of time specified as the shutdown-lock-limit has passed. 24.2. Setting the shutdown-lock cluster property The following example sets the shutdown-lock cluster property to true in an example cluster and shows the effect this has when the node is shut down and started again. This example cluster consists of three nodes: z1.example.com , z2.example.com , and z3.example.com . Procedure Set the shutdown-lock property to true and verify its value. In this example the shutdown-lock-limit property maintains its default value of 0. Check the status of the cluster. In this example, resources third and fifth are running on z1.example.com .
Shut down z1.example.com , which will stop the resources that are running on that node. Running the pcs status command shows that node z1.example.com is offline and that the resources that had been running on z1.example.com are LOCKED while the node is down. Start cluster services again on z1.example.com so that it rejoins the cluster. Locked resources should get started on that node, although once they start they will not necessarily remain on the same node. In this example, resources third and fifth are recovered on node z1.example.com . | [
"pcs property set shutdown-lock=true pcs property list --all | grep shutdown-lock shutdown-lock: true shutdown-lock-limit: 0",
"pcs status Full List of Resources: * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Started z1.example.com * fourth (ocf::pacemaker:Dummy): Started z2.example.com * fifth (ocf::pacemaker:Dummy): Started z1.example.com",
"pcs cluster stop z1.example.com Stopping Cluster (pacemaker) Stopping Cluster (corosync)",
"pcs status Node List: * Online: [ z2.example.com z3.example.com ] * OFFLINE: [ z1.example.com ] Full List of Resources: * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Stopped z1.example.com (LOCKED) * fourth (ocf::pacemaker:Dummy): Started z3.example.com * fifth (ocf::pacemaker:Dummy): Stopped z1.example.com (LOCKED)",
"pcs cluster start z1.example.com Starting Cluster",
"pcs status Node List: * Online: [ z1.example.com z2.example.com z3.example.com ] Full List of Resources: .. * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Started z1.example.com * fourth (ocf::pacemaker:Dummy): Started z3.example.com * fifth (ocf::pacemaker:Dummy): Started z1.example.com"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_configuring-resources-to-remain-stopped-configuring-and-managing-high-availability-clusters |
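With shutdown-lock in effect, the pcs status output above marks affected resources as LOCKED. The following helper is only a sketch of how you might list those resources and, if appropriate, issue the manual refresh described earlier; it assumes pcs is installed on the node where it runs and that you accept that a refreshed resource may then be recovered elsewhere.

#!/usr/bin/env python3
# Minimal sketch: list resources reported as LOCKED by pcs status and optionally
# refresh one of them so the cluster may recover it elsewhere.
# Assumes pcs is installed and the script runs with sufficient privileges.
import subprocess
import sys

def locked_lines():
    out = subprocess.run(["pcs", "status"], check=True, capture_output=True, text=True)
    return [line.strip() for line in out.stdout.splitlines() if "LOCKED" in line]

def refresh(resource, node):
    # Equivalent to: pcs resource refresh <resource> node=<node>
    subprocess.run(["pcs", "resource", "refresh", resource, f"node={node}"], check=True)

if __name__ == "__main__":
    for line in locked_lines():
        print(line)
    if len(sys.argv) == 3:
        refresh(sys.argv[1], sys.argv[2])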
12.25. OData Translator | 12.25. OData Translator 12.25.1. OData Translator The OData translator exposes the OData V2 and V3 data sources. This translator implements a simple connection for web services in the same way as the Web Services translator. The OData translator is implemented by the org.teiid.translator.odata.ODataExecutionFactory class and known by the translator type name odata . Note Open Data Protocol (OData) is a Web protocol for querying and updating data that provides a way to unlock your data and free it from silos that exist in applications today. OData does this by applying and building upon Web technologies such as HTTP, Atom Publishing Protocol (AtomPub) and JSON to provide access to information from a variety of applications, services, and stores. OData is being used to expose and access information from a variety of sources including, but not limited to, relational databases, file systems, content management systems and traditional Web sites. Using this specification from OASIS group, and with the help from framework OData4J , JBoss Data Virtualization maps OData entities into relational schema. JBoss Data Virtualization supports reading of CSDL (Conceptual Schema Definition Language) from the OData endpoint provided and converts the OData schema into relational schema. The below table shows the mapping selections in OData Translator from CSDL document. OData Mapped to Relational Entity EntitySet Table FunctionImport Procedure AssociationSet Foreign Keys on the Table* ComplexType ignored** * A Many to Many association will result in a link table that can not be selected from, but can be used for join purposes. ** When used in Functions, an implicit table is exposed. When used to define a embedded table, all the columns will be in-lined. All CRUD operations will be appropriately mapped to the resulting entity based on the SQL submitted to the OData translator. Note The resource adapter for this translator is provided by configuring the webservice data source in the JBoss EAP instance. See the Red Hat JBoss Data Virtualization Administration and Configuration Guide for more configuration information. Using this specification from OASIS group, with the help from the Olingo framework, Teiid maps OData V4 CSDL (Conceptual Schema Definition Language) document from the OData endpoint provided and converts the OData metadata into Teiid's relational schema. The below table shows the mapping selections in OData V4 Translator from CSDL document 12.25.2. OData Translator: Execution Properties Table 12.21. Execution Properties Name Description Default DatabaseTimeZone The time zone of the database. Used when fetching date, time, or timestamp values The system default time zone 12.25.3. OData Translator: Importer Properties Table 12.22. Importer Properties Name Description Default schemaNamespace Namespace of the schema to import null entityContainer Entity Container Name to import default container Example importer settings to only import tables and views from NetflixCatalog: 12.25.4. OData Translator: Usage Usage of an OData source is similar to a JDBC translator. The metadata import is supported through the translator, once the metadata is imported from source system and exposed in relational terms, then this source can be queried as if the EntitySets and Function Imports were local to the JBoss Data Virtualization system. Table 12.23. Execution Properties Property Description Default DatabaseTimeZone The time zone of the database. 
Used when fetching date, time, or timestamp values The system default time zone SupportsOdataCount Supports USDcount True SupportsOdataFilter Supports USDfilter True SupportsOdataOrderBy Supports USDorderby True SupportsOdataSkip Supports USDskip True SupportsOdataTop Supports USDtop True Table 12.24. Importer Properties Property Description Default schemaNamespace Namespace of the schema to import Null entityContainer Entity Container Name to import Default container Here are some importer settings to import tables and views only from NetflixCatalog: Note Sometimes it is possible that the OData server you are querying does not fully implement all OData specification features. If your OData implementation does not support a certain feature, then turn off the corresponding capability using "execution Properties", so that Teiid will not push down invalid queries to the translator. For example, to turn off USDfilter, add the following to your vdb.xml and then use "odata-override" as the translator name on your source model: Note Native or direct query execution is not supported through the OData translator. However, users can use the Web Services translator's invokehttp method directly to issue a REST-based call and parse the results using SQLXML. Note Teiid can not only consume OData-based data sources, but it can also expose any data source as an OData-based web service. For more information see OData Support. | [
"<property name=\"importer.schemaNamespace\" value=\"System.Data.Objects\"/> <property name=\"importer.schemaPattern\" value=\"NetflixCatalog\"/>",
"<property name=\"importer.schemaNamespace\" value=\"System.Data.Objects\"/> <property name=\"importer.schemaPattern\" value=\"NetflixCatalog\"/>",
"<translator name=\"odata-override\" type=\"odata\"> <property name=\"SupportsOdataFilter\" value=\"false\"/> </translator>"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-odata_translator |
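The execution properties above (SupportsOdataFilter, SupportsOdataTop, and so on) govern which parts of a SQL query the translator pushes down to the source as OData system query options such as $filter, $orderby, $skip, and $top. The snippet below only illustrates what such a pushed-down request looks like as a plain HTTP URL; the service root and entity set are placeholders, and this is not how the translator itself is invoked.

#!/usr/bin/env python3
# Minimal sketch: the kind of OData request a filtered, ordered, paged query maps to.
# The service root and entity set below are placeholders, not real endpoints.
from urllib.parse import quote

SERVICE_ROOT = "http://example.com/odata.svc"  # placeholder OData endpoint
ENTITY_SET = "Titles"                          # placeholder entity set name

options = [
    ("$filter", "Rating gt 3"),  # WHERE predicate pushed down as $filter
    ("$orderby", "Name"),        # ORDER BY pushed down as $orderby
    ("$skip", "20"),             # offset pushed down as $skip
    ("$top", "10"),              # row limit pushed down as $top
]
query = "&".join(f"{name}={quote(value)}" for name, value in options)
url = f"{SERVICE_ROOT}/{ENTITY_SET}?{query}"
print("GET", url)
# Fetching the document would then be a plain HTTP GET, for example with urllib.request.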
11.3. Special Considerations | 11.3. Special Considerations This section enumerates several issues and factors to consider for specific storage configurations. Separate Partitions for /home, /opt, /usr/local If it is likely that you will upgrade your system in the future, place /home , /opt , and /usr/local on a separate device. This will allow you to reformat the devices/file systems containing the operating system while preserving your user and application data. DASD and zFCP Devices on IBM System Z On the IBM System Z platform, DASD and zFCP devices are configured via the Channel Command Word (CCW) mechanism. CCW paths must be explicitly added to the system and then brought online. For DASD devices, this simply means listing the device numbers (or device number ranges) as the DASD= parameter at the boot command line or in a CMS configuration file. For zFCP devices, you must list the device number, logical unit number (LUN), and world wide port name (WWPN). Once the zFCP device is initialized, it is mapped to a CCW path. The FCP_x= lines on the boot command line (or in a CMS configuration file) allow you to specify this information for the installer. Encrypting Block Devices Using LUKS Formatting a block device for encryption using LUKS/ dm-crypt will destroy any existing formatting on that device. As such, you should decide which devices to encrypt (if any) before the new system's storage configuration is activated as part of the installation process. Stale BIOS RAID Metadata Moving a disk from a system configured for firmware RAID without removing the RAID metadata from the disk can prevent Anaconda from correctly detecting the disk. Warning Removing/deleting RAID metadata from disk could potentially destroy any stored data. Red Hat recommends that you back up your data before proceeding. To delete RAID metadata from the disk, use the following command: dmraid -r -E / device / For more information about managing RAID devices, refer to man dmraid and Chapter 17, Redundant Array of Independent Disks (RAID) . iSCSI Detection and Configuration For plug and play detection of iSCSI drives, configure them in the firmware of an iBFT boot-capable network interface card (NIC). CHAP authentication of iSCSI targets is supported during installation. However, iSNS discovery is not supported during installation. FCoE Detection and Configuration For plug and play detection of fibre-channel over ethernet (FCoE) drives, configure them in the firmware of an EDD boot-capable NIC. DASD Direct-access storage devices (DASD) cannot be added/configured during installation. Such devices are specified in the CMS configuration file. Block Devices with DIF/DIX Enabled DIF/DIX is a hardware checksum feature provided by certain SCSI host bus adapters and block devices. When DIF/DIX is enabled, errors will occur if the block device is used as a general-purpose block device. Buffered I/O or mmap(2) -based I/O will not work reliably, as there are no interlocks in the buffered write path to prevent buffered data from being overwritten after the DIF/DIX checksum has been calculated. This will cause the I/O to later fail with a checksum error. This problem is common to all block device (or file system-based) buffered I/O or mmap(2) I/O, so it is not possible to work around these errors caused by overwrites. As such, block devices with DIF/DIX enabled should only be used with applications that use O_DIRECT . Such applications should use the raw block device.
Alternatively, it is also safe to use the XFS file system on a DIF/DIX enabled block device, as long as only O_DIRECT I/O is issued through the file system. XFS is the only file system that does not fall back to buffered I/O when doing certain allocation operations. The responsibility for ensuring that the I/O data does not change after the DIF/DIX checksum has been computed always lies with the application, so only applications designed for use with O_DIRECT I/O and DIF/DIX hardware should use DIF/DIX. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/installcfg-special |
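Because DIF/DIX-enabled block devices behave predictably only with O_DIRECT I/O, applications typically open the raw device with the O_DIRECT flag and use buffers aligned to the logical block size. The sketch below illustrates that pattern only; the device path is a placeholder and the 4096-byte alignment is an assumption about the device's logical block size.

#!/usr/bin/env python3
# Minimal sketch: read from a raw block device using O_DIRECT with an aligned buffer.
# The device path is a placeholder; O_DIRECT requires the buffer, offset, and length
# to be aligned to the device's logical block size (4096 bytes assumed here).
import mmap
import os

DEVICE = "/dev/sdX"      # placeholder; replace with the DIF/DIX-capable device
BLOCK_SIZE = 4096        # assumed logical block size

fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
try:
    buf = mmap.mmap(-1, BLOCK_SIZE)   # anonymous mapping is page-aligned
    nread = os.readv(fd, [buf])       # direct read that bypasses the page cache
    print(f"read {nread} bytes from {DEVICE}")
finally:
    os.close(fd)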
Deploying and managing OpenShift Data Foundation using Red Hat OpenStack Platform | Deploying and managing OpenShift Data Foundation using Red Hat OpenStack Platform Red Hat OpenShift Data Foundation 4.18 Instructions on deploying and managing OpenShift Data Foundation on Red Hat OpenStack Platform Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). Important Deploying and managing OpenShift Data Foundation on Red Hat OpenStack Platform is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/index |
Updating clusters | Updating clusters OpenShift Container Platform 4.16 Updating OpenShift Container Platform clusters Red Hat OpenShift Documentation Team | [
"oc adm upgrade --include-not-recommended",
"Cluster version is 4.13.40 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.13, candidate-4.14, eus-4.14, fast-4.13, fast-4.14, stable-4.13, stable-4.14) Recommended updates: VERSION IMAGE 4.14.27 quay.io/openshift-release-dev/ocp-release@sha256:4d30b359aa6600a89ed49ce6a9a5fdab54092bcb821a25480fdfbc47e66af9ec 4.14.26 quay.io/openshift-release-dev/ocp-release@sha256:4fe7d4ccf4d967a309f83118f1a380a656a733d7fcee1dbaf4d51752a6372890 4.14.25 quay.io/openshift-release-dev/ocp-release@sha256:a0ef946ef8ae75aef726af1d9bbaad278559ad8cab2c1ed1088928a0087990b6 4.14.24 quay.io/openshift-release-dev/ocp-release@sha256:0a34eac4b834e67f1bca94493c237e307be2c0eae7b8956d4d8ef1c0c462c7b0 4.14.23 quay.io/openshift-release-dev/ocp-release@sha256:f8465817382128ec7c0bc676174bad0fb43204c353e49c146ddd83a5b3d58d92 4.13.42 quay.io/openshift-release-dev/ocp-release@sha256:dcf5c3ad7384f8bee3c275da8f886b0bc9aea7611d166d695d0cf0fff40a0b55 4.13.41 quay.io/openshift-release-dev/ocp-release@sha256:dbb8aa0cf53dc5ac663514e259ad2768d8c82fd1fe7181a4cfb484e3ffdbd3ba Updates with known issues: Version: 4.14.22 Image: quay.io/openshift-release-dev/ocp-release@sha256:7093fa606debe63820671cc92a1384e14d0b70058d4b4719d666571e1fc62190 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 18.061ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 Version: 4.14.21 Image: quay.io/openshift-release-dev/ocp-release@sha256:6e3fba19a1453e61f8846c6b0ad3abf41436a3550092cbfd364ad4ce194582b7 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 33.991ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689",
"oc get clusterversion version -o json | jq '.status.availableUpdates'",
"[ { \"channels\": [ \"candidate-4.11\", \"candidate-4.12\", \"fast-4.11\", \"fast-4.12\" ], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:3213\", \"version\": \"4.11.41\" }, ]",
"oc get clusterversion version -o json | jq '.status.conditionalUpdates'",
"[ { \"conditions\": [ { \"lastTransitionTime\": \"2023-05-30T16:28:59Z\", \"message\": \"The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136\", \"reason\": \"PatchesOlderRelease\", \"status\": \"False\", \"type\": \"Recommended\" } ], \"release\": { \"channels\": [...], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:1733\", \"version\": \"4.11.36\" }, \"risks\": [...] }, ]",
"oc adm release extract <release image>",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata",
"0000_<runlevel>_<component>_<manifest-name>.yaml",
"0000_03_config-operator_01_proxy.crd.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"oc adm upgrade channel <channel>",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb",
"Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time)",
"Cluster update time = 60 + (6 x 5) = 90 minutes",
"Cluster update time = 60 + (3 x 5) = 75 minutes",
"oc get apirequestcounts",
"NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H flowschemas.v1beta2.flowcontrol.apiserver.k8s.io 1.29 0 3 prioritylevelconfigurations.v1beta2.flowcontrol.apiserver.k8s.io 1.29 0 1",
"oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!=\"\")]}{.status.removedInRelease}{\"\\t\"}{.metadata.name}{\"\\n\"}{end}'",
"1.29 flowschemas.v1beta2.flowcontrol.apiserver.k8s.io 1.29 prioritylevelconfigurations.v1beta2.flowcontrol.apiserver.k8s.io",
"oc get apirequestcounts <resource>.<version>.<group> -o yaml",
"oc get apirequestcounts flowschemas.v1beta2.flowcontrol.apiserver.k8s.io -o yaml",
"oc get apirequestcounts flowschemas.v1beta2.flowcontrol.apiserver.k8s.io -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{\",\"}{.username}{\",\"}{.userAgent}{\"\\n\"}{end}' | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT",
"VERBS USERNAME USERAGENT create system:admin oc/4.13.0 (linux/amd64) list get system:serviceaccount:myns:default oc/4.16.0 (linux/amd64) watch system:serviceaccount:myns:webhook webhook/v1.0.0 (linux/amd64)",
"oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.15-kube-1.29-api-removals-in-4.16\":\"true\"}}' --type=merge",
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"oc get secret <secret_name> -n=kube-system",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc adm upgrade",
"Recommended updates: VERSION IMAGE 4.16.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032",
"RELEASE_IMAGE=<update_pull_spec>",
"quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --to=<path_to_directory_for_credentials_requests> 2",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1",
"oc create namespace <component_namespace>",
"RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"ccoctl aws create-all \\ 1 --name=<name> \\ 2 --region=<aws_region> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> \\ 5 --create-private-s3-bucket 6",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> 5",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"ccoctl azure create-managed-identities --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" \\ 4 --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 5 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\" 6",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/iam.securityReviewer - roles/iam.roleViewer skipServiceCheck: true secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"oc edit cloudcredential cluster",
"metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>",
"RUN depmod -b /opt USD{KERNEL_VERSION}",
"quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863",
"apiVersion: kmm.sigs.x-k8s.io/v1beta2 kind: PreflightValidationOCP metadata: name: preflight spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 pushBuiltImage: true",
"oc get machinehealthcheck -n openshift-machine-api",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-",
"oc adm upgrade",
"Cluster version is 4.13.10 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.13 (available channels: candidate-4.13, candidate-4.14, fast-4.13, stable-4.13) Recommended updates: VERSION IMAGE 4.13.14 quay.io/openshift-release-dev/ocp-release@sha256:406fcc160c097f61080412afcfa7fd65284ac8741ac7ad5b480e304aba73674b 4.13.13 quay.io/openshift-release-dev/ocp-release@sha256:d62495768e335c79a215ba56771ff5ae97e3cbb2bf49ed8fb3f6cefabcdc0f17 4.13.12 quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55 4.13.11 quay.io/openshift-release-dev/ocp-release@sha256:e1c2377fdae1d063aaddc753b99acf25972b6997ab9a0b7e80cfef627b9ef3dd",
"oc adm upgrade channel <channel>",
"oc adm upgrade channel stable-4.16",
"oc adm upgrade --to-latest=true 1",
"oc adm upgrade --to=<version> 1",
"oc adm upgrade",
"oc adm upgrade",
"Cluster version is <version> Upstream is unset, so the cluster will use an appropriate default. Channel: stable-<version> (available channels: candidate-<version>, eus-<version>, fast-<version>, stable-<version>) No updates available. You may force an update to a specific release image, but doing so might not be supported and might result in downtime or data loss.",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.29.4 ip-10-0-170-223.ec2.internal Ready master 82m v1.29.4 ip-10-0-179-95.ec2.internal Ready worker 70m v1.29.4 ip-10-0-182-134.ec2.internal Ready worker 70m v1.29.4 ip-10-0-211-16.ec2.internal Ready master 82m v1.29.4 ip-10-0-250-100.ec2.internal Ready worker 69m v1.29.4",
"export OC_ENABLE_CMD_UPGRADE_STATUS=true",
"oc adm upgrade status",
"= Control Plane = Assessment: Progressing Target Version: 4.14.1 (from 4.14.0) Completion: 97% Duration: 54m Operator Status: 32 Healthy, 1 Unavailable Control Plane Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-53-40.us-east-2.compute.internal Progressing Draining 4.14.0 +10m ip-10-0-30-217.us-east-2.compute.internal Outdated Pending 4.14.0 ? ip-10-0-92-180.us-east-2.compute.internal Outdated Pending 4.14.0 ? = Worker Upgrade = = Worker Pool = Worker Pool: worker Assessment: Progressing Completion: 0% Worker Status: 3 Total, 2 Available, 1 Progressing, 3 Outdated, 1 Draining, 0 Excluded, 0 Degraded Worker Pool Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-4-159.us-east-2.compute.internal Progressing Draining 4.14.0 +10m ip-10-0-20-162.us-east-2.compute.internal Outdated Pending 4.14.0 ? ip-10-0-99-40.us-east-2.compute.internal Outdated Pending 4.14.0 ? = Worker Pool = Worker Pool: infra Assessment: Progressing Completion: 0% Worker Status: 1 Total, 0 Available, 1 Progressing, 1 Outdated, 1 Draining, 0 Excluded, 0 Degraded Worker Pool Node NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-4-159-infra.us-east-2.compute.internal Progressing Draining 4.14.0 +10m = Update Health = SINCE LEVEL IMPACT MESSAGE 14m4s Info None Update is proceeding well",
"oc adm upgrade --include-not-recommended",
"oc adm upgrade --allow-not-recommended --to <version> <.>",
"oc patch clusterversion/version --patch '{\"spec\":{\"upstream\":\"<update-server-url>\"}}' --type=merge",
"clusterversion.config.openshift.io/version patched",
"spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False",
"oc adm upgrade channel eus-<4.y+2>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc adm upgrade --to-latest",
"Updating to latest version <4.y+1.z>",
"oc adm upgrade",
"Cluster version is <4.y+1.z>",
"oc adm upgrade --to-latest",
"oc adm upgrade",
"Cluster version is <4.y+2.z>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":false}}'",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False",
"oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}' nodes",
"ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm",
"oc label node <node_name> node-role.kubernetes.io/<custom_label>=",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary=",
"node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] 2 } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: \"\" 3",
"oc create -f <file_name>",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf] } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf: \"\"",
"oc create -f machineConfigPool.yaml",
"machineconfigpool.machineconfiguration.openshift.io/worker-perf created",
"oc label node worker-a node-role.kubernetes.io/worker-perf=''",
"oc label node worker-b node-role.kubernetes.io/worker-perf=''",
"oc label node worker-c node-role.kubernetes.io/worker-perf=''",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker-perf name: 06-kdump-enable-worker-perf spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"oc create -f new-machineconfig.yaml",
"oc label node worker-a node-role.kubernetes.io/worker-perf-canary=''",
"oc label node worker-a node-role.kubernetes.io/worker-perf-",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf-canary spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf,worker-perf-canary] 1 } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf-canary: \"\"",
"oc create -f machineConfigPool-Canary.yaml",
"machineconfigpool.machineconfiguration.openshift.io/worker-perf-canary created",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2bf1379b39e22bae858ea1a3ff54b2ac True False False 3 3 3 0 5d16h worker rendered-worker-b9576d51e030413cfab12eb5b9841f34 True False False 0 0 0 0 5d16h worker-perf rendered-worker-perf-b98a1f62485fa702c4329d17d9364f6a True False False 2 2 2 0 56m worker-perf-canary rendered-worker-perf-canary-b98a1f62485fa702c4329d17d9364f6a True False False 1 1 1 0 44m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-a Ready worker,worker-perf-canary 5d15h v1.27.13+e709aa5 worker-b Ready worker,worker-perf 5d15h v1.27.13+e709aa5 worker-c Ready worker,worker-perf 5d15h v1.27.13+e709aa5",
"systemctl status kdump.service",
"NAME STATUS ROLES AGE VERSION kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; preset: disabled) Active: active (exited) since Tue 2024-09-03 12:44:43 UTC; 10s ago Process: 4151139 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS) Main PID: 4151139 (code=exited, status=0/SUCCESS)",
"cat /proc/cmdline",
"crashkernel=512M",
"oc label node worker-a node-role.kubernetes.io/worker-perf=''",
"oc label node worker-a node-role.kubernetes.io/worker-perf-canary-",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc get machineconfigpools",
"oc label node <node_name> node-role.kubernetes.io/<custom_label>-",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary-",
"node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m",
"oc delete mcp <mcp_name>",
"--- Trivial example forcing an operator to acknowledge the start of an upgrade file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: \"Compute machine upgrade of {{ inventory_hostname }} is about to start\" - name: require the user agree to start an upgrade pause: prompt: \"Press Enter to start the compute machine update\"",
"[all:vars] openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml",
"systemctl disable --now firewalld.service",
"subscription-manager repos --disable=rhocp-4.15-for-rhel-8-x86_64-rpms --enable=rhocp-4.16-for-rhel-8-x86_64-rpms",
"yum swap ansible ansible-core",
"yum update openshift-ansible openshift-clients",
"subscription-manager repos --disable=rhocp-4.15-for-rhel-8-x86_64-rpms --enable=rhocp-4.16-for-rhel-8-x86_64-rpms",
"[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1",
"oc get node",
"NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.29.4 mycluster-control-plane-1 Ready master 145m v1.29.4 mycluster-control-plane-2 Ready master 145m v1.29.4 mycluster-rhel8-0 Ready worker 98m v1.29.4 mycluster-rhel8-1 Ready worker 98m v1.29.4 mycluster-rhel8-2 Ready worker 98m v1.29.4 mycluster-rhel8-3 Ready worker 98m v1.29.4",
"yum update",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"mkdir -p <directory_name>",
"cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"export OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1",
"oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature",
"oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 1",
"oc create -f <filename>.yaml",
"oc create -f update-service-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group namespace: openshift-update-service spec: targetNamespaces: - openshift-update-service",
"oc -n openshift-update-service create -f <filename>.yaml",
"oc -n openshift-update-service create -f update-service-operator-group.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription namespace: openshift-update-service spec: channel: v1 installPlanApproval: \"Automatic\" source: \"redhat-operators\" 1 sourceNamespace: \"openshift-marketplace\" name: \"cincinnati-operator\"",
"oc create -f <filename>.yaml",
"oc -n openshift-update-service create -f update-service-subscription.yaml",
"oc -n openshift-update-service get clusterserviceversions",
"NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded",
"FROM registry.access.redhat.com/ubi9/ubi:latest RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD [\"/bin/bash\", \"-c\" ,\"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data\"]",
"podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest",
"podman push registry.example.com/openshift/graph-data:latest",
"NAMESPACE=openshift-update-service",
"NAME=service",
"RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images",
"GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest",
"oc -n \"USD{NAMESPACE}\" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF",
"while sleep 1; do POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"; SCHEME=\"USD{POLICY_ENGINE_GRAPH_URI%%:*}\"; if test \"USD{SCHEME}\" = http -o \"USD{SCHEME}\" = https; then break; fi; done",
"while sleep 10; do HTTP_CODE=\"USD(curl --header Accept:application/json --output /dev/stderr --write-out \"%{http_code}\" \"USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6\")\"; if test \"USD{HTTP_CODE}\" -eq 200; then break; fi; echo \"USD{HTTP_CODE}\"; done",
"NAMESPACE=openshift-update-service",
"NAME=service",
"POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"",
"PATCH=\"{\\\"spec\\\":{\\\"upstream\\\":\\\"USD{POLICY_ENGINE_GRAPH_URI}\\\"}}\"",
"oc patch clusterversion version -p USDPATCH --type merge",
"oc get machinehealthcheck -n openshift-machine-api",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-",
"oc adm release info -o 'jsonpath={.digest}{\"\\n\"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE}",
"sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d",
"oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest>",
"skopeo copy --all docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal",
"apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.29.4 ip-10-0-138-148.ec2.internal Ready master 11m v1.29.4 ip-10-0-139-122.ec2.internal Ready master 11m v1.29.4 ip-10-0-147-35.ec2.internal Ready worker 7m v1.29.4 ip-10-0-153-12.ec2.internal Ready worker 7m v1.29.4 ip-10-0-154-10.ec2.internal Ready master 11m v1.29.4",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/redhat\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf",
"oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>",
"oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files",
"wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml",
"oc create -f <path_to_the_directory>/<file-name>.yaml",
"oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry",
"oc apply -f imageContentSourcePolicy.yaml",
"oc get ImageContentSourcePolicy -o yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.openshift.io/v1alpha1\",\"kind\":\"ImageContentSourcePolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redhat-operator-index\"},\"spec\":{\"repositoryDigestMirrors\":[{\"mirrors\":[\"local.registry:5000\"],\"source\":\"registry.redhat.io\"}]}}",
"oc get updateservice -n openshift-update-service",
"NAME AGE service 6s",
"oc delete updateservice service -n openshift-update-service",
"updateservice.updateservice.operator.openshift.io \"service\" deleted",
"oc project openshift-update-service",
"Now using project \"openshift-update-service\" on server \"https://example.com:6443\".",
"oc get operatorgroup",
"NAME AGE openshift-update-service-fprx2 4m41s",
"oc delete operatorgroup openshift-update-service-fprx2",
"operatorgroup.operators.coreos.com \"openshift-update-service-fprx2\" deleted",
"oc get subscription",
"NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1",
"oc get subscription update-service-operator -o yaml | grep \" currentCSV\"",
"currentCSV: update-service-operator.v0.0.1",
"oc delete subscription update-service-operator",
"subscription.operators.coreos.com \"update-service-operator\" deleted",
"oc delete clusterserviceversion update-service-operator.v0.0.1",
"clusterserviceversion.operators.coreos.com \"update-service-operator.v0.0.1\" deleted",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.29.4 control-plane-node-1 Ready master 75m v1.29.4 control-plane-node-2 Ready master 75m v1.29.4",
"oc adm cordon <control_plane_node>",
"oc wait --for=condition=Ready node/<control_plane_node>",
"oc adm uncordon <control_plane_node>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.29.4 compute-node-1 Ready worker 30m v1.29.4 compute-node-2 Ready worker 30m v1.29.4",
"oc adm cordon <compute_node>",
"oc adm drain <compute_node> [--pod-selector=<pod_selector>]",
"oc wait --for=condition=Ready node/<compute_node>",
"oc adm uncordon <compute_node>",
"oc get clusterversion/version -o=jsonpath=\"{.status.conditions[?(.type=='RetrievedUpdates')].status}\"",
"oc adm upgrade",
"oc adm upgrade channel <channel>",
"oc adm upgrade --to-multi-arch",
"oc adm upgrade",
"apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data: mode: 420 overwrite: true path: USD{PATH} 1",
"oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 Update: At latest version",
"Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 systemd: units: - name: bootupctl-update.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml",
"oc apply -f ./99-worker-bootupctl-update.yaml",
"oc describe clusterversions/version",
"Desired: Channels: candidate-4.13 candidate-4.14 fast-4.13 fast-4.14 stable-4.13 Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4 URL: https://access.redhat.com/errata/RHSA-2023:6130 Version: 4.13.19 History: Completion Time: 2023-11-07T20:26:04Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4 Started Time: 2023-11-07T19:11:36Z State: Completed Verified: true Version: 4.13.19 Completion Time: 2023-10-04T18:53:29Z Image: quay.io/openshift-release-dev/ocp-release@sha256:eac141144d2ecd6cf27d24efe9209358ba516da22becc5f0abc199d25a9cfcec Started Time: 2023-10-04T17:26:31Z State: Completed Verified: true Version: 4.13.13 Completion Time: 2023-09-26T14:21:43Z Image: quay.io/openshift-release-dev/ocp-release@sha256:371328736411972e9640a9b24a07be0af16880863e1c1ab8b013f9984b4ef727 Started Time: 2023-09-26T14:02:33Z State: Completed Verified: false Version: 4.13.12 Observed Generation: 4 Version Hash: CMLl3sLq-EA= Events: <none>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/updating_clusters/index |
Chapter 2. Ceph Dashboard installation and access | Chapter 2. Ceph Dashboard installation and access As a system administrator, you can access the dashboard with the credentials provided on bootstrapping the cluster. Cephadm installs the dashboard by default. The following is an example of the dashboard URL: Note Update the browser and clear the cookies prior to accessing the dashboard URL. The following are the Cephadm bootstrap options that are available for the Ceph dashboard configurations: [--initial-dashboard-user INITIAL_DASHBOARD_USER ] - Use this option while bootstrapping to set initial-dashboard-user. [--initial-dashboard-password INITIAL_DASHBOARD_PASSWORD ] - Use this option while bootstrapping to set initial-dashboard-password. [--ssl-dashboard-port SSL_DASHBOARD_PORT ] - Use this option while bootstrapping to set a custom dashboard port other than the default 8443. [--dashboard-key DASHBOARD_KEY ] - Use this option while bootstrapping to set a custom key for SSL. [--dashboard-crt DASHBOARD_CRT ] - Use this option while bootstrapping to set a custom certificate for SSL. [--skip-dashboard] - Use this option while bootstrapping to deploy Ceph without the dashboard. [--dashboard-password-noupdate] - Use this option while bootstrapping if you used the above two options and don't want to reset the password at the first-time login. [--allow-fqdn-hostname] - Use this option while bootstrapping to allow hostnames that are fully qualified. [--skip-prepare-host] - Use this option while bootstrapping to skip preparing the host. Note To avoid connectivity issues with dashboard-related external URLs, use the fully qualified domain names (FQDN) for hostnames, for example, host01.ceph.redhat.com . Note Open the Grafana URL directly in the client internet browser and accept the security exception to see the graphs on the Ceph dashboard. Reload the browser to view the changes. Example Note While bootstrapping the storage cluster using cephadm , you can use the --image option for either custom container images or local container images. Note You have to change the password the first time you log into the dashboard with the credentials provided on bootstrapping only if the --dashboard-password-noupdate option is not used while bootstrapping. You can find the Ceph dashboard credentials in the /var/log/ceph/cephadm.log file. Search with the "Ceph Dashboard is now available at" string. This section covers the following tasks: Network port requirements for Ceph dashboard . Accessing the Ceph dashboard . Expanding the cluster on the Ceph dashboard . Upgrading a cluster . Toggling Ceph dashboard features . Understanding the landing page of the Ceph dashboard . Enabling Red Hat Ceph Storage Dashboard manually . Changing the dashboard password using the Ceph dashboard . Changing the Ceph dashboard password using the command line interface . Setting admin user password for Grafana . Creating an admin account for syncing users to the Ceph dashboard . Syncing users to the Ceph dashboard using the Red Hat Single Sign-On . Enabling single sign-on for the Ceph dashboard . Disabling single sign-on for the Ceph dashboard . 2.1. Network port requirements for Ceph Dashboard The Ceph dashboard components use certain TCP network ports which must be accessible. By default, the network ports are automatically opened in firewalld during installation of Red Hat Ceph Storage. Table 2.1.
TCP Port Requirements Port Use Originating Host Destination Host 8443 The dashboard web interface IP addresses that need access to Ceph Dashboard UI and the host under Grafana server, since the AlertManager service can also initiate connections to the Dashboard for reporting alerts. The Ceph Manager hosts. 3000 Grafana IP addresses that need access to Grafana Dashboard UI and all Ceph Manager hosts and Grafana server. The host or hosts running Grafana server. 2049 NFS-Ganesha IP addresses that need access to NFS. The IP addresses that provide NFS services. 9095 Default Prometheus server for basic Prometheus graphs IP addresses that need access to Prometheus UI and all Ceph Manager hosts and Grafana server or Hosts running Prometheus. The host or hosts running Prometheus. 9093 Prometheus Alertmanager IP addresses that need access to Alertmanager Web UI and all Ceph Manager hosts and Grafana server or Hosts running Prometheus. All Ceph Manager hosts and the host under Grafana server. 9094 Prometheus Alertmanager for configuring a highly available cluster made from multiple instances All Ceph Manager hosts and the host under Grafana server. Prometheus Alertmanager High Availability (peer daemon sync), so both src and dst should be hosts running Prometheus Alertmanager. 9100 The Prometheus node-exporter daemon Hosts running Prometheus that need to view Node Exporter metrics Web UI and All Ceph Manager hosts and Grafana server or Hosts running Prometheus. All storage cluster hosts, including MONs, OSDS, Grafana server host. 9283 Ceph Manager Prometheus exporter module Hosts running Prometheus that need access to Ceph Exporter metrics Web UI and Grafana server. All Ceph Manager hosts. Additional Resources For more information, see the Red Hat Ceph Storage Installation Guide . For more information, see Using and configuring firewalls in Configuring and managing networking . 2.2. Accessing the Ceph dashboard You can access the Ceph dashboard to administer and monitor your Red Hat Ceph Storage cluster. Prerequisites Successful installation of Red Hat Ceph Storage Dashboard. NTP is synchronizing clocks properly. Procedure Enter the following URL in a web browser: Syntax Replace: HOST_NAME with the fully qualified domain name (FQDN) of the active manager host. PORT with port 8443 Example You can also get the URL of the dashboard by running the following command in the Cephadm shell: Example This command will show you all endpoints that are currently configured. Look for the dashboard key to obtain the URL for accessing the dashboard. On the login page, enter the username admin and the default password provided during bootstrapping. You have to change the password the first time you log in to the Red Hat Ceph Storage dashboard. After logging in, the dashboard default landing page is displayed, which provides details, a high-level overview of status, performance, inventory, and capacity metrics of the Red Hat Ceph Storage cluster. Figure 2.1. Ceph dashboard landing page Click the menu icon ( ) on the dashboard landing page to collapse or display the options in the vertical menu. Additional Resources For more information, see Changing the dashboard password using the Ceph dashboard in the Red Hat Ceph Storage Dashboard guide . 2.3. 
Expanding the cluster on the Ceph dashboard You can use the dashboard to expand the Red Hat Ceph Storage cluster for adding hosts, adding OSDs, and creating services such as Alertmanager, Cephadm-exporter, CephFS-mirror, Grafana, ingress, MDS, NFS, node-exporter, Prometheus, RBD-mirror, and Ceph Object Gateway. Once you bootstrap a new storage cluster, the Ceph Monitor and Ceph Manager daemons are created and the cluster is in HEALTH_WARN state. After creating all the services for the cluster on the dashboard, the health of the cluster changes from HEALTH_WARN to HEALTH_OK status. Prerequisites Bootstrapped storage cluster. See Bootstrapping a new storage cluster section in the Red Hat Ceph Storage Installation Guide for more details. At least cluster-manager role for the user on the Red Hat Ceph Storage Dashboard. See the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. Procedure Copy the admin key from the bootstrapped host to other hosts: Syntax Example Log in to the dashboard with the default credentials provided during bootstrap. Change the password and log in to the dashboard with the new password . On the landing page, click Expand Cluster . Note Clicking Expand Cluster opens a wizard taking you through the expansion steps. To skip and add hosts and services separately, click Skip . Figure 2.2. Expand cluster Add hosts. This needs to be done for each host in the storage cluster. In the Add Hosts step, click Add . Provide the hostname. This is same as the hostname that was provided while copying the key from the bootstrapped host. Note Add multiple hosts by using a comma-separated list of host names, a range expression, or a comma separated range expression. Optional: Provide the respective IP address of the host. Optional: Select the labels for the hosts on which the services are going to be created. Click the pencil icon to select or add new labels. Click Add Host . The new host is displayed in the Add Hosts pane. Click . Create OSDs: In the Create OSDs step, for Primary devices, Click Add . In the Primary Devices window, filter for the device and select the device. Click Add . Optional: In the Create OSDs window, if you have any shared devices such as WAL or DB devices, then add the devices. Optional: In the Features section, select Encryption to encrypt the features. Click . Create services: In the Create Services step, click Create . In the Create Service form: Select a service type. Provide the service ID. The ID is a unique name for the service. This ID is used in the service name, which is service_type.service_id . ... Optional: Select if the service is Unmanaged . + When Unmanaged services is selected, the orchestrator will not start or stop any daemon associated with this service. Placement and all other properties are ignored. Select if the placement is by hosts or label. Select the hosts. In the Count field, provide the number of daemons or services that need to be deployed. Click Create Service . The new service is displayed in the Create Services pane. In the Create Service window, Click . Review the cluster expansion details. Review the Cluster Resources , Hosts by Services , Host Details . To edit any parameters, click Back and follow the steps. Figure 2.3. Review cluster Click Expand Cluster . The Cluster expansion displayed notification is displayed and the cluster status changes to HEALTH_OK on the dashboard. Verification Log in to the cephadm shell: Example Run the ceph -s command. 
Example The health of the cluster is HEALTH_OK . Additional Resources See the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. See the Red Hat Ceph Storage Installation Guide for more details. 2.4. Upgrading a cluster Upgrade Ceph clusters using the dashboard. Cluster images are pulled automatically from registry.redhat.io . Optionally, use custom images for upgrade. Prerequisites Verify that your upgrade version path and operating system is supported before starting the upgrade process. For more information, see Compatibility Matrix for Red Hat Ceph Storage 8.0 . Before you begin, make sure that you have the following prerequisites in place: Important These items cannot be done through the dashboard and must be completed manually, through the command-line interface, before continuing to upgrade the cluster from the dashboard. Note For detailed information, see Upgrading the Red Hat Ceph Storage cluster in a disconnected environment and complete steps 1 through 7. The latest cephadm . Syntax The latest cephadm-ansible . Syntax The latest cephadm pre-flight playbook . Syntax Run the following Ceph commands to avoid alerts and rebalancing of the data during the cluster upgrade: Syntax Procedure View if cluster upgrades are available and upgrade as needed from Administration > Upgrade on the dashboard. Note If the dashboard displays the Not retrieving upgrades message, check if the registries were added to the container configuration files with the appropriate log in credentials to Podman or docker. Click Pause or Stop during the upgrade process, if needed. The upgrade progress is shown in the progress bar along with information messages during the upgrade. Note When stopping the upgrade, the upgrade is first paused and then prompts you to stop the upgrade. Optional. View cluster logs during the upgrade process from the Cluster logs section of the Upgrade page. Verify that the upgrade is completed successfully by confirming that the cluster status displays OK state. After verifying that the upgrade is complete, unset the noout , noscrub , and nodeep-scrub flags. Example 2.5. Toggling Ceph dashboard features You can customize the Red Hat Ceph Storage dashboard components by enabling or disabling features on demand. All features are enabled by default. When disabling a feature, the web-interface elements become hidden and the associated REST API end-points reject any further requests for that feature. Enabling and disabling dashboard features can be done from the command-line interface or the web interface. Available features: Ceph Block Devices: Image management, rbd Mirroring, mirroring Ceph File System, cephfs Ceph Object Gateway, rgw NFS Ganesha gateway, nfs Note By default, the Ceph Manager is collocated with the Ceph Monitor. Note You can disable multiple features at once. Important Once a feature is disabled, it can take up to 20 seconds to reflect the change in the web interface. Prerequisites Installation and configuration of the Red Hat Ceph Storage dashboard software. User access to the Ceph Manager host or the dashboard web interface. Root level access to the Ceph Manager host. Procedure To toggle the dashboard features from the dashboard web interface: On the dashboard landing page, go to Administration->Manager Modules and select the dashboard module. Click Edit . 
In the Edit Manager module form, you can enable or disable the dashboard features by selecting or clearing the check boxes to the different feature names. After the selections are made, click Update . To toggle the dashboard features from the command-line interface: Log in to the Cephadm shell: Example List the feature status: Example Disable a feature: This example disables the Ceph Object Gateway feature. Enable a feature: This example enables the Ceph Filesystem feature. 2.6. Understanding the landing page of the Ceph dashboard The landing page displays an overview of the entire Ceph cluster using navigation bars and individual panels. The menu bar provides the following options: Tasks and Notifications Provides task and notification messages. Help Provides links to the product and REST API documentation, details about the Red Hat Ceph Storage Dashboard, and a form to report an issue. Dashboard Settings Gives access to user management and telemetry configuration. User Use this menu to see log in status, to change a password, and to sign out of the dashboard. Figure 2.4. Menu bar The navigation menu can be opened or hidden by clicking the navigation menu icon . Dashboard The main dashboard displays specific information about the state of the cluster. The main dashboard can be accessed at any time by clicking Dashboard from the navigation menu. The dashboard landing page organizes the panes into different categories. Figure 2.5. Ceph dashboard landing page Details Displays specific cluster information and if telemetry is active or inactive. Inventory Displays the different parts of the cluster, how many are available, and their status. Link directly from Inventory to specific inventory items, where available. Hosts Displays the total number of hosts in the Ceph storage cluster. Monitors Displays the number of Ceph Monitors and the quorum status. Managers Displays the number and status of the Manager Daemons. OSDs Displays the total number of OSDs in the Ceph Storage cluster and the number that are up , and in . Pools Displays the number of storage pools in the Ceph cluster. PGs Displays the total number of placement groups (PGs). The PG states are divided into Working and Warning to simplify the display. Each one encompasses multiple states. + The Working state includes PGs with any of the following states: activating backfill_wait backfilling creating deep degraded forced_backfill forced_recovery peering peered recovering recovery_wait repair scrubbing snaptrim snaptrim_wait + The Warning state includes PGs with any of the following states: backfill_toofull backfill_unfound down incomplete inconsistent recovery_toofull recovery_unfound remapped snaptrim_error stale undersized Object Gateways Displays the number of Object Gateways in the Ceph storage cluster. Metadata Servers Displays the number and status of metadata servers for Ceph File Systems (CephFS). Status Displays the health of the cluster and host and daemon states. The current health status of the Ceph storage cluster is displayed. Danger and warning alerts are displayed directly on the landing page. Click View alerts for a full list of alerts. Capacity Displays storage usage metrics. This is displayed as a graph of used, warning, and danger. The numbers are in percentages and in GiB. Cluster Utilization The Cluster Utilization pane displays information related to data transfer speeds. Select the time range for the data output from the list. Select a range between the last 5 minutes to the last 24 hours. 
Used Capacity (RAW) Displays usage in GiB. IOPS Displays total I/O read and write operations per second. OSD Latencies Displays total applies and commits per millisecond. Client Throughput Displays total client read and write throughput in KiB per second. Recovery Throughput Displays the rate of cluster healing and balancing operations. For example, the status of any background data that may be moving due to a loss of disk is displayed. The information is displayed in bytes per second. Additional Resources For more information, see Monitoring the cluster on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide for more information. 2.7. Changing the dashboard password using the Ceph dashboard By default, the password for accessing dashboard is randomly generated by the system while bootstrapping the cluster. You have to change the password the first time you log in to the Red Hat Ceph Storage dashboard. You can change the password for the admin user using the dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Log in to the dashboard: Syntax Go to User->Change password on the menu bar. Enter the old password, for verification. In the New password field enter a new password. Passwords must contain a minimum of 8 characters and cannot be the same as the last one. In the Confirm password field, enter the new password again to confirm. Click Change Password . You will be logged out and redirected to the login screen. A notification appears confirming the password is changed. 2.8. Changing the Ceph dashboard password using the command line interface If you have forgotten your Ceph dashboard password, you can change the password using the command line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the host on which the dashboard is installed. Procedure Log into the Cephadm shell: Example Create the dashboard_password.yml file: Example Edit the file and add the new dashboard password: Example Reset the dashboard password: Syntax Example Verification Log in to the dashboard with your new password. 2.9. Setting admin user password for Grafana By default, cephadm does not create an admin user for Grafana. With the Ceph Orchestrator, you can create an admin user and set the password. With these credentials, you can log in to the storage cluster's Grafana URL with the given password for the admin user. Prerequisites A running Red Hat Ceph Storage cluster with the monitoring stack installed. Root-level access to the cephadm host. The dashboard module enabled. Procedure As a root user, create a grafana.yml file and provide the following details: Syntax Example Mount the grafana.yml file under a directory in the container: Example Note Every time you exit the shell, you have to mount the file in the container before deploying the daemon. Optional: Check if the dashboard Ceph Manager module is enabled: Example Optional: Enable the dashboard Ceph Manager module: Example Apply the specification using the orch command: Syntax Example Redeploy grafana service: Example This creates an admin user called admin with the given password and the user can log in to the Grafana URL with these credentials. Verification: Log in to Grafana with the credentials: Syntax Example 2.10. Enabling Red Hat Ceph Storage Dashboard manually If you have installed a Red Hat Ceph Storage cluster by using --skip-dashboard option during bootstrap, you can see that the dashboard URL and credentials are not available in the bootstrap output. 
You can enable the dashboard manually using the command-line interface. Although the monitoring stack components such as Prometheus, Grafana, Alertmanager, and node-exporter are deployed, they are disabled and you have to enable them manually. Prerequisite A running Red Hat Ceph Storage cluster installed with --skip-dashboard option during bootstrap. Root-level access to the host on which the dashboard needs to be enabled. Procedure Log into the Cephadm shell: Example Check the Ceph Manager services: Example You can see that the Dashboard URL is not configured. Enable the dashboard module: Example Create the self-signed certificate for the dashboard access: Example Note You can disable the certificate verification to avoid certification errors. Check the Ceph Manager services: Example Create the admin user and password to access the Red Hat Ceph Storage dashboard: Syntax Example Enable the monitoring stack. See the Enabling monitoring stack section in the Red Hat Ceph Storage Dashboard Guide for details. Additional Resources See the Deploying the monitoring stack using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 2.11. Using single sign-on with the dashboard The Ceph Dashboard supports external authentication of users with the choice of either the Security Assertion Markup Language (SAML) 2.0 protocol or with the OAuth2 Proxy ( oauth2-proxy ). Before using single sign-on ( SSO ) with the Ceph Dashboard, create the dashboard user accounts and assign any required roles. The Ceph Dashboard completes user authorization and then the existing Identity Provider ( IdP ) completes the authentication process. You can enable single sign-on using the SAML protocol or oauth2-proxy . Red Hat Ceph Storage supports dashboard SSO and Multi-Factor Authentication with RHSSO (Keycloak). OAuth2 SSO uses the oauth2-proxy service to work with the Ceph Management gateway ( mgmt-gateway ), providing unified access and improved user experience. Note The OAuth2 SSO, mgmt-gateway , and oauth2-proxy services are Technology Preview. For more information about the Ceph Management gateway and the OAuth2 Proxy service, see Using the Ceph Management gateway (mgmt-gateway) and Using the OAuth2 Proxy (oauth2-proxy) service. For more information about Red Hat build of Keycloack, see Red Hat build of Keycloak on the Red Hat Customer Portal . 2.11.1. Creating an admin account for syncing users to the Ceph dashboard You have to create an admin account to synchronize users to the Ceph dashboard. After creating the account, use Red Hat Single Sign-on (SSO) to synchronize users to the Ceph dashboard. See the Syncing users to the Ceph dashboard using Red Hat Single Sign-On section in the Red Hat Ceph Storage Dashboard Guide . Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level access to the dashboard. Users are added to the dashboard. Root-level access on all the hosts. Java OpenJDK installed. For more information, see the Installing a JRE on RHEL by using yum section of the Installing and using OpenJDK 8 for RHEL guide for OpenJDK on the Red Hat Customer Portal. Red hat Single Sign-On installed from a ZIP file. See the Installing RH-SSO from a ZIP File section of the Server Installation and Configuration Guide for Red Hat Single Sign-On on the Red Hat Customer Portal. Procedure Download the Red Hat Single Sign-On 7.4.0 Server on the system where Red Hat Ceph Storage is installed. 
Unzip the folder: Navigate to the standalone/configuration directory and open the standalone.xml for editing: From the bin directory of the newly created rhsso-7.4.0 folder, run the add-user-keycloak script to add the initial administrator user: Replace all instances of localhost and two instances of 127.0.0.1 with the IP address of the machine where Red Hat SSO is installed. Start the server. From the bin directory of rh-sso-7.4 folder, run the standalone boot script: Create the admin account in https: IP_ADDRESS :8080/auth with a username and password: Note You have to create an admin account only the first time that you log into the console. Log into the admin console with the credentials created. Additional Resources For adding roles for users on the dashboard, see the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. For creating users on the dashboard, see the Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . 2.11.2. Syncing users to the Ceph dashboard using Red Hat Single Sign-On You can use Red Hat Single Sign-on (SSO) with Lightweight Directory Access Protocol (LDAP) integration to synchronize users to the Red Hat Ceph Storage Dashboard. The users are added to specific realms in which they can access the dashboard through SSO without any additional requirements of a password. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level access to the dashboard. Users are added to the dashboard. See the Creating users on Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . Root-level access on all the hosts. Admin account created for syncing users. See the Creating an admin account for syncing users to the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . Procedure To create a realm, click the Master drop-down menu. In this realm, you can provide access to users and applications. In the Add Realm window, enter a case-sensitive realm name and set the parameter Enabled to ON and click Create : In the Realm Settings tab, set the following parameters and click Save : Enabled - ON User-Managed Access - ON Make a note of the link address of SAML 2.0 Identity Provider Metadata to paste in Client Settings . In the Clients tab, click Create : In the Add Client window, set the following parameters and click Save : Client ID - BASE_URL:8443/auth/saml2/metadata Example https://example.ceph.redhat.com:8443/auth/saml2/metadata Client Protocol - saml In the Client window, under Settings tab, set the following parameters: Table 2.2. Client Settings tab Name of the parameter Syntax Example Client ID BASE_URL:8443/auth/saml2/metadata https://example.ceph.redhat.com:8443/auth/saml2/metadata Enabled ON ON Client Protocol saml saml Include AuthnStatement ON ON Sign Documents ON ON Signature Algorithm RSA_SHA1 RSA_SHA1 SAML Signature Key Name KEY_ID KEY_ID Valid Redirect URLs BASE_URL:8443/* https://example.ceph.redhat.com:8443/* Base URL BASE_URL:8443 https://example.ceph.redhat.com:8443/ Master SAML Processing URL https://localhost:8080/auth/realms/ REALM_NAME /protocol/saml/descriptor https://localhost:8080/auth/realms/Ceph_LDAP/protocol/saml/descriptor Note Paste the link of SAML 2.0 Identity Provider Metadata from Realm Settings tab. Under Fine Grain SAML Endpoint Configuration, set the following parameters and click Save : Table 2.3. 
Fine Grain SAML configuration Name of the parameter Syntax Example Assertion Consumer Service POST Binding URL BASE_URL:8443/#/dashboard https://example.ceph.redhat.com:8443/#/dashboard Assertion Consumer Service Redirect Binding URL BASE_URL:8443/#/dashboard https://example.ceph.redhat.com:8443/#/dashboard Logout Service Redirect Binding URL BASE_URL:8443/ https://example.ceph.redhat.com:8443/ In the Clients window, Mappers tab, set the following parameters and click Save : Table 2.4. Client Mappers tab Name of the parameter Value Protocol saml Name username Mapper Property User Property Property username SAML Attribute name username In the Clients Scope tab, select role_list : In Mappers tab, select role list , set the Single Role Attribute to ON. Select User_Federation tab: In User Federation window, select ldap from the drop-down menu: In User_Federation window, Settings tab, set the following parameters and click Save : Table 2.5. User Federation Settings tab Name of the parameter Value Console Display Name rh-ldap Import Users ON Edit_Mode READ_ONLY Username LDAP attribute username RDN LDAP attribute username UUID LDAP attribute nsuniqueid User Object Classes inetOrgPerson, organizationalPerson, rhatPerson Connection URL Example: ldap://ldap.corp.redhat.com Click Test Connection . You will get a notification that the LDAP connection is successful. Users DN ou=users, dc=example, dc=com Bind Type simple Click Test authentication . You will get a notification that the LDAP authentication is successful. In Mappers tab, select first name row and edit the following parameter and Click Save : LDAP Attribute - givenName In User_Federation tab, Settings tab, Click Synchronize all users : You will get a notification that the sync of users is finished successfully. In the Users tab, search for the user added to the dashboard and click the Search icon: To view the user , click the specific row. You should see the federation link as the name provided for the User Federation . Important Do not add users manually as the users will not be synchronized by LDAP. If added manually, delete the user by clicking Delete . Note If Red Hat SSO is currently being used within your work environment, be sure to first enable SSO. For more information, see the Enabling Single Sign-On with SAML 2.0 for the Ceph Dashboard section in the Red Hat Ceph Storage Dashboard Guide . Verification Users added to the realm and the dashboard can access the Ceph dashboard with their email address and password. Example https://example.ceph.redhat.com:8443 Additional Resources For adding roles for users on the dashboard, see the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. 2.11.3. Enabling Single Sign-On with SAML 2.0 for the Ceph Dashboard The Ceph Dashboard supports external authentication of users with the Security Assertion Markup Language (SAML) 2.0 protocol. Before using single sign-on (SSO) with the Ceph dashboard, create the dashboard user accounts and assign the desired roles. The Ceph Dashboard performs authorization of the users and the authentication process is performed by an existing Identity Provider (IdP). You can enable single sign-on using the SAML protocol. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Dashboard. Root-level access to the Ceph Manager hosts. Procedure To configure SSO on Ceph Dashboard, run the following command: Syntax Example Replace CEPH_MGR_HOST with Ceph mgr host. 
For example, host01 CEPH_DASHBOARD_BASE_URL with the base URL where Ceph Dashboard is accessible. IDP_METADATA with the URL to remote or local path or content of the IdP metadata XML. The supported URL types are http, https, and file. Optional : IDP_USERNAME_ATTRIBUTE with the attribute used to get the username from the authentication response. Defaults to uid . Optional : IDP_ENTITY_ID with the IdP entity ID when more than one entity ID exists on the IdP metadata. Optional : SP_X_509_CERT with the file path of the certificate used by Ceph Dashboard for signing and encryption. Optional : SP_PRIVATE_KEY with the file path of the private key used by Ceph Dashboard for signing and encryption. Verify the current SAML 2.0 configuration: Syntax Example To enable SSO, run the following command: Syntax Example Open your dashboard URL. Example On the SSO page, enter the login credentials. SSO redirects to the dashboard web interface. Additional Resources To disable single sign-on, see Disabling Single Sign-on for the Ceph Dashboard in the Red Hat Ceph Storage Dashboard Guide . 2.11.4. Enabling OAuth2 single sign-on (Technology Preview) Enable OAuth2 single sign-on (SSO) for the Ceph Dashboard. OAuth2 SSO uses the oauth2-proxy service. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details. Prerequisites Before you begin, make sure that you have the following prerequisites in place: A running Red Hat Ceph Storage cluster. Installation of the Ceph Dashboard. Root-level access to the Ceph Manager hosts. An admin account with Red Hat Single Sign-On 7.6.0. For more information, see Creating an admin account with Red Hat Single Sign-On 7.6.0 . Enable the Ceph Management gateway ( mgmt-gateway ) service. For more information, see Enabling the Ceph Management gateway. Enable the OAuth2 Proxy service ( oauth2-proxy ). For more information, see Enabling the OAuth2 Proxy service. Procedure Enable Ceph Dashboard OAuth2 SSO access. Syntax Example Set the valid redirect URL. Syntax Note This URL must be the same redirect URL as configured in the OAuth2 Proxy service. Configure a valid user role. Note For the Administrator role, configure the IDP user with administrator or read-only access. Open your dashboard URL. Example On the SSO page, enter the login credentials. The SSO redirects to the dashboard web interface. Verification Check the SSO status at any time with the cephadm shell ceph dashboard sso status command. Example 2.11.5. Disabling Single Sign-On for the Ceph Dashboard You can disable SAML 2.0 and OAuth2 SSO for the Ceph Dashboard at any time. Prerequisites Before you begin, make sure that you have the following prerequisites in place: A running Red Hat Ceph Storage cluster. Installation of the Ceph Dashboard. Root-level access to the Ceph Manager hosts. Single sign-on enabled for the Ceph Dashboard. Procedure To view the status of SSO, run the following command: Syntax Example To disable SSO, run the following command: Syntax Example Additional Resources To enable single sign-on, see Enabling Single Sign-On with SAML 2.0 for the Ceph Dashboard in the Red Hat Ceph Storage Dashboard Guide . | [
"URL: https://host01:8443/ User: admin Password: zbiql951ar",
"cephadm bootstrap --mon-ip 127.0.0.1 --registry-json cephadm.txt --initial-dashboard-user admin --initial-dashboard-password zbiql951ar --dashboard-password-noupdate --allow-fqdn-hostname",
"https:// HOST_NAME : PORT",
"https://host01:8443",
"ceph mgr services",
"ssh-copy-id -f -i /etc/ceph/ceph.pub root@ HOST_NAME",
"ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03",
"cephadm shell",
"ceph -s",
"dnf udpate cephadm",
"dnf udpate cephadm-ansible",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom upgrade_ceph_packages=true\"",
"ceph health mute DAEMON_OLD_VERSION --sticky ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"cephadm shell",
"ceph dashboard feature status",
"ceph dashboard feature disable rgw",
"ceph dashboard feature enable cephfs",
"https:// HOST_NAME :8443",
"cephadm shell",
"touch dashboard_password.yml",
"vi dashboard_password.yml",
"ceph dashboard ac-user-set-password DASHBOARD_USERNAME -i PASSWORD_FILE",
"ceph dashboard ac-user-set-password admin -i dashboard_password.yml {\"username\": \"admin\", \"password\": \"USD2bUSD12USDi5RmvN1PolR61Fay0mPgt.GDpcga1QpYsaHUbJfoqaHd1rfFFx7XS\", \"roles\": [\"administrator\"], \"name\": null, \"email\": null, \"lastUpdate\": , \"enabled\": true, \"pwdExpirationDate\": null, \"pwdUpdateRequired\": false}",
"service_type: grafana spec: initial_admin_password: PASSWORD",
"service_type: grafana spec: initial_admin_password: mypassword",
"cephadm shell --mount grafana.yml:/var/lib/ceph/grafana.yml",
"ceph mgr module ls",
"ceph mgr module enable dashboard",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i /var/lib/ceph/grafana.yml",
"ceph orch redeploy grafana",
"https:// HOST_NAME : PORT",
"https://host01:3000/",
"cephadm shell",
"ceph mgr services { \"prometheus\": \"http://10.8.0.101:9283/\" }",
"ceph mgr module enable dashboard",
"ceph dashboard create-self-signed-cert",
"ceph mgr services { \"dashboard\": \"https://10.8.0.101:8443/\", \"prometheus\": \"http://10.8.0.101:9283/\" }",
"echo -n \" PASSWORD \" > PASSWORD_FILE ceph dashboard ac-user-create admin -i PASSWORD_FILE administrator",
"echo -n \"p@ssw0rd\" > password.txt ceph dashboard ac-user-create admin -i password.txt administrator",
"unzip rhsso-7.4.0.zip",
"cd standalone/configuration vi standalone.xml",
"./add-user-keycloak.sh -u admin",
"./standalone.sh",
"cephadm shell CEPH_MGR_HOST ceph dashboard sso setup saml2 CEPH_DASHBOARD_BASE_URL IDP_METADATA IDP_USERNAME_ATTRIBUTE IDP_ENTITY_ID SP_X_509_CERT SP_PRIVATE_KEY",
"cephadm shell host01 ceph dashboard sso setup saml2 https://dashboard_hostname.ceph.redhat.com:8443 idp-metadata.xml username https://10.70.59.125:8080/auth/realms/realm_name /home/certificate.txt /home/private-key.txt",
"cephadm shell CEPH_MGR_HOST ceph dashboard sso show saml2",
"cephadm shell host01 ceph dashboard sso show saml2",
"cephadm shell CEPH_MGR_HOST ceph dashboard sso enable saml2 SSO is \"enabled\" with \"SAML2\" protocol.",
"cephadm shell host01 ceph dashboard sso enable saml2",
"https://dashboard_hostname.ceph.redhat.com:8443",
"ceph dashboard sso enable oauth2",
"ceph dashboard sso enable oauth2 SSO is \"enabled\" with \"oauth2\" protocol.",
"https:// HOST_NAME | IP_ADDRESS /oauth2/callback",
"https://dashboard_hostname.ceph.redhat.com:8443",
"cephadm shell ceph dashboard sso status SSO is \"enabled\" with \"oauth2\" protocol.",
"cephadm shell CEPH_MGR_HOST ceph dashboard sso status",
"cephadm shell host01 ceph dashboard sso status SSO is \"enabled\" with \"SAML2\" protocol.",
"cephadm shell CEPH_MGR_HOST ceph dashboard sso disable SSO is \"disabled\".",
"cephadm shell host01 ceph dashboard sso disable"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/dashboard_guide/ceph-dashboard-installation-and-access |
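As a quick reference for the manual enablement flow described in this chapter, the individual steps can be run as one short shell session from the Cephadm shell. This is only a sketch: the password value and file name are placeholders, and the self-signed certificate is suitable for test environments rather than production.

cephadm shell
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
# create the dashboard admin user; the password file is a placeholder
echo -n "p@ssw0rd" > password.txt
ceph dashboard ac-user-create admin -i password.txt administrator
# confirm that the dashboard URL is now reported
ceph mgr services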
Chapter 5. Performing health checks on Red Hat Quay deployments | Chapter 5. Performing health checks on Red Hat Quay deployments Health check mechanisms are designed to assess the health and functionality of a system, service, or component. Health checks help ensure that everything is working correctly, and can be used to identify potential issues before they become critical problems. By monitoring the health of a system, Red Hat Quay administrators can address abnormalities or potential failures for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Performing health checks can also help reduce the likelihood of encountering troubleshooting scenarios. Health check mechanisms can play a role in diagnosing issues by providing valuable information about the system's current state. By comparing health check results with expected benchmarks or predefined thresholds, deviations or anomalies can be identified quicker. 5.1. Red Hat Quay health check endpoints Important Links contained herein to any external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. Red Hat Quay has several health check endpoints. The following table shows you the health check, a description, an endpoint, and an example output. Table 5.1. Health check endpoints Health check Description Endpoint Example output instance The instance endpoint acquires the entire status of the specific Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , disk_space , registry_gunicorn , service_key , and web_gunicorn. Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/instance or https://{quay-ip-endpoint}/health {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} endtoend The endtoend endpoint conducts checks on all services of your Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , redis , storage . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/endtoend {"data":{"services":{"auth":true,"database":true,"redis":true,"storage":true}},"status_code":200} warning The warning endpoint conducts a check on the warnings. Returns a dict with key-value pairs for the following: disk_space_warning . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/warning {"data":{"services":{"disk_space_warning":true}},"status_code":503} 5.2. Navigating to a Red Hat Quay health check endpoint Use the following procedure to navigate to the instance endpoint. This procedure can be repeated for endtoend and warning endpoints. 
Procedure On your web browser, navigate to https://{quay-ip-endpoint}/health/instance . You are taken to the health instance page, which returns information like the following: {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} For Red Hat Quay, "status_code": 200 means that the instance is healthy. Conversely, if you receive "status_code": 503 , there is an issue with your deployment. | [
"{\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/troubleshooting_red_hat_quay/health-check-quay |
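As a quick command-line check of the endpoints described above, the health API can be queried with curl. The hostname is a placeholder for your registry endpoint, and the -k flag is only needed when the deployment uses a self-signed certificate.

# overall instance health; a healthy deployment returns "status_code": 200
curl -k https://quay.example.com/health/instance
# end-to-end check of auth, database, redis, and storage
curl -k https://quay.example.com/health/endtoend
# warning-level checks such as disk_space_warning
curl -k https://quay.example.com/health/warning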
Chapter 13. Deploying a RHEL for Edge image in a network-based environment | Chapter 13. Deploying a RHEL for Edge image in a network-based environment You can deploy a RHEL for Edge image using the RHEL installer graphical user interface or a Kickstart file. The overall process for deploying a RHEL for Edge image depends on whether your deployment environment is network-based or non-network-based. Note To deploy the images on bare metal, use a Kickstart file. Network-based deployments Deploying a RHEL for Edge image in a network-based environment involves the following high-level steps: Extract the image file contents. Set up a web server Install the image 13.1. Extracting the RHEL for Edge image commit After you download the commit, extract the .tar file and note the ref name and the commit ID. The downloaded commit file consists of a .tar file with an OSTree repository. The OSTree repository has a commit and a compose.json file. The compose.json file has information metadata about the commit with information such as the "Ref", the reference ID and the commit ID. The commit ID has the RPM packages. To extract the package contents, perform the following the steps: Prerequisites Create a Kickstart file or use an existing one. Procedure Extract the downloaded image .tar file: Go to the directory where you have extracted the .tar file. It has a compose.json file and an OSTree directory. The compose.json file has the commit number and the OSTree directory has the RPM packages. Open the compose.json file and note the commit ID number. You need this number handy when you proceed to set up a web server. If you have the jq JSON processor installed, you can also retrieve the commit ID by using the jq tool: List the RPM packages in the commit. Use a Kickstart file to run the RHEL installer. Optionally, you can use any existing file or can create one by using the Kickstart Generator tool. In the Kickstart file, ensure that you include the details about how to provision the file system, create a user, and how to fetch and deploy the RHEL for Edge image. The RHEL installer uses this information during the installation process. The following is a Kickstart file example: The OStree-based installation uses the ostreesetup command to set up the configuration. It fetches the OSTree commit, by using the following flags: --nogpg - Disable GNU Privacy Guard (GPG) key verification. --osname - Management root for the operating system installation. --remote - Management root for the operating system installation --url - URL of the repository to install from. --ref - Name of the branch from the repository that the installation uses. --url=http://mirror.example.com/repo/ - is the address of the host system where you extracted the edge commit and served it over nginx . You can use the address to reach the host system from the guest computer. For example, if you extract the commit image in the /var/www/html directory and serve the commit over nginx on a computer whose hostname is www.example.com , the value of the --url parameter is http://www.example.com/repo . Note Use the http protocol to start a service to serve the commit, because https is not enabled on the Apache HTTP Server. Additional resources Downloading a RHEL for Edge image Creating Kickstart files 13.2. Setting up a web server to install RHEL for Edge images After you have extracted the RHEL for Edge image contents, set up a web server to provide the image commit details to the RHEL installer by using HTTP. 
The following example provides the steps to set up a web server by using a container. Prerequisites You have installed Podman on your system. See the Red Hat Knowledgebase solution How do I install Podman in RHEL . Procedure Create the nginx configuration file with the following instructions: Create a Dockerfile with the following instructions: Where, kickstart.ks is the name of the Kickstart file from the RHEL for Edge image. The Kickstart file includes directive information. To help you manage the images later, it is advisable to include the checks and settings for greenboot checks. For that, you can update the Kickstart file to include the following settings: Any HTTP service can host the OSTree repository, and the example, which uses a container, is just an option for how to do this. The Dockerfile performs the following tasks: Uses the latest Universal Base Image (UBI) Installs the web server (nginx) Adds the Kickstart file to the server Adds the RHEL for Edge image commit to the server Build a Docker container Run the container As a result, the server is set up and ready to start the RHEL Installer by using the commit.tar repository and the Kickstart file. 13.3. Performing an attended installation to an edge device by using Kickstart For an attended installation in a network-based environment, you can install the RHEL for Edge image to a device by using the RHEL Installer ISO, a Kickstart file, and a web server. The web server serves the RHEL for Edge Commit and the Kickstart file to boot the RHEL Installer ISO image. Prerequisites You have made the RHEL for Edge Commit available by running a web server. See Setting up a web server to install RHEL for Edge images . You have created a .qcow2 disk image to be used as the target of the attended installation. See Creating a virtual disk image by using qemu-img . Procedure Create a Kickstart file. The following is an example in which the ostreesetup directive instructs the Anaconda Installer to fetch and deploy the commit. Additionally, it creates a user and a password. Run the RHEL Anaconda Installer by using the libvirt virt-install utility to create a virtual machine (VM) with a RHEL operating system. Use the .qcow2 disk image as the target disk in the attended installation: On the installation screen: Figure 13.1. Red Hat Enterprise Linux boot menu Press the e key to add an additional kernel parameter: The kernel parameter specifies that you want to install RHEL by using the Kickstart file and not the RHEL image contained in the RHEL Installer. After adding the kernel parameters, press Ctrl + X to boot the RHEL installation by using the Kickstart file. The RHEL Installer starts, fetches the Kickstart file from the server (HTTP) endpoint and executes the commands, including the command to install the RHEL for Edge image commit from the HTTP endpoint. After the installation completes, the RHEL Installer prompts you for login details. Verification On the Login screen, enter your user account credentials and click Enter . Verify whether the RHEL for Edge image is successfully installed. $ rpm-ostree status The command output provides the image commit ID and shows that the installation is successful. Following is a sample output: Additional resources How to embed a Kickstart file into an ISO image (Red Hat Knowledgebase) Booting the installation 13.4. 
Performing an unattended installation to an edge device by using Kickstart For an unattended installation in a network-based environment, you can install the RHEL for Edge image to an Edge device by using a Kickstart file and a web server. The web server serves the RHEL for Edge Commit and the Kickstart file, and both artifacts are used to start the RHEL Installer ISO image. Prerequisites You have the qemu-img utility installed on your host system. You have created a .qcow2 disk image to install the commit you created. See Creating a system image with RHEL image builder in the CLI . You have a running web server. See Creating a RHEL for Edge Container image for non-network-based deployments . Procedure Run a RHEL for Edge Container image to start a web server. The server fetches the commit in the RHEL for Edge Container image and becomes available and running. Run the RHEL Anaconda Installer, passing the customized .qcow2 disk image, by using libvirt virt-install : On the installation screen: Figure 13.2. Red Hat Enterprise Linux boot menu Press the TAB key and add the Kickstart kernel argument: The kernel parameter specifies that you want to install RHEL by using the Kickstart file and not the RHEL image contained in the RHEL Installer. After adding the kernel parameters, press Ctrl + X to boot the RHEL installation by using the Kickstart file. The RHEL Installer starts, fetches the Kickstart file from the server (HTTP) endpoint, and executes the commands, including the command to install the RHEL for Edge image commit from the HTTP endpoint. After the installation completes, the RHEL Installer prompts you for login details. Verification On the Login screen, enter your user account credentials and click Enter . Verify whether the RHEL for Edge image is successfully installed. The command output provides the image commit ID and shows that the installation is successful. The following is a sample output: Additional resources How to embed a Kickstart file into an ISO image (Red Hat Knowledgebase) Booting the installation | [
"tar xvf <UUID> -commit.tar",
"jq '.[\"ostree-commit\"]' < compose.json",
"rpm-ostree db list rhel/9/x86_64/edge --repo=repo",
"lang en_US.UTF-8 keyboard us timezone Etc/UTC --isUtc text zerombr clearpart --all --initlabel autopart reboot user --name=core --group=wheel sshkey --username=core \"ssh-rsa AAAA3Nza... .\" rootpw --lock network --bootproto=dhcp ostreesetup --nogpg --osname=rhel --remote=edge --url=https://mirror.example.com/repo/ --ref=rhel/9/x86_64/edge",
"events { } http { server{ listen 8080; root /usr/share/nginx/html; } } pid /run/nginx.pid; daemon off;",
"FROM registry.access.redhat.com/ubi8/ubi RUN dnf -y install nginx && dnf clean all COPY kickstart.ks /usr/share/nginx/html/ COPY repo /usr/share/nginx/html/ COPY nginx /etc/nginx.conf EXPOSE 8080 CMD [\"/usr/sbin/nginx\", \"-c\", \"/etc/nginx.conf\"] ARG commit ADD USD{commit} /usr/share/nginx/html/",
"lang en_US.UTF-8 keyboard us timezone Etc/UTC --isUtc text zerombr clearpart --all --initlabel autopart reboot user --name=core --group=wheel sshkey --username=core \"ssh-rsa AAAA3Nza... .\" ostreesetup --nogpg --osname=rhel --remote=edge --url=https://mirror.example.com/repo/ --ref=rhel/9/x86_64/edge %post cat << EOF > /etc/greenboot/check/required.d/check-dns.sh #!/bin/bash DNS_SERVER=USD(grep nameserver /etc/resolv.conf | cut -f2 -d\" \") COUNT=0 check DNS server is available ping -c1 USDDNS_SERVER while [ USD? != '0' ] && [ USDCOUNT -lt 10 ]; do COUNT++ echo \"Checking for DNS: Attempt USDCOUNT .\" sleep 10 ping -c 1 USDDNS_SERVER done EOF %end",
"podman build -t name-of-container-image --build-arg commit= uuid -commit.tar .",
"podman run --rm -d -p port :8080 localhost/ name-of-container-image",
"lang en_US.UTF-8 keyboard us timezone UTC zerombr clearpart --all --initlabel autopart --type=plain --fstype=xfs --nohome reboot text network --bootproto=dhcp user --name=core --groups=wheel --password=edge services --enabled=ostree-remount ostreesetup --nogpg --url=http://edge_device_ip:port/repo/ --osname=rhel --remote=edge --ref=rhel/9/x86_64/edge",
"virt-install --name rhel-edge-test-1 --memory 2048 --vcpus 2 --disk path=prepared_disk_image.qcow2,format=qcow2,size=8 --os-variant rhel9 --cdrom /home/username/Downloads/rhel-9-x86_64-boot.iso",
"inst.ks=http://web-server_device_ip:port/kickstart.ks",
"rpm-ostree status",
"State: idle Deployments: * ostree://edge:rhel/9/x86_64/edge Timestamp: 2020-09-18T20:06:54Z Commit: 836e637095554e0b634a0a48ea05c75280519dd6576a392635e6fa7d4d5e96",
"virt-install --name rhel-edge-test-1 --memory 2048 --vcpus 2 --disk path=prepared_disk_image.qcow2,format=qcow2,size=8 --os-variant rhel9 --cdrom /home/username/Downloads/rhel-9-x86_64-boot.iso",
"inst.ks=http://web-server_device_ip:port/kickstart.ks",
"rpm-ostree status",
"State: idle Deployments: * ostree://edge:rhel/9/x86_64/edge Timestamp: 2020-09-18T20:06:54Z Commit: 836e637095554e0b634a0a48ea05c75280519dd6576a392635e6fa7d4d5e96"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/installing-rpm-ostree-images_composing-installing-managing-rhel-for-edge-images |
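Before booting the installer against the web server, it can help to confirm that the extracted commit is intact and actually being served. The following sketch assumes the repository was extracted into a local repo directory and published on port 8080, as in the container example above; adjust the host, port, and ref to match your environment.

# inspect the extracted OSTree repository: list refs and show the commit history
ostree refs --repo=repo
ostree log --repo=repo rhel/9/x86_64/edge
# confirm the web server is serving the repository and the Kickstart file
curl http://localhost:8080/repo/config
curl http://localhost:8080/kickstart.ks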
Chapter 2. Configuring the rdma service | Chapter 2. Configuring the rdma service With the Remote Direct Memory Access (RDMA) protocol, you can transfer data between the RDMA enabled systems over the network by using the main memory. The RDMA protocol provides low latency and high throughput. To manage supported network protocols and communication standards, you need to configure the rdma service. This configuration includes high speed network protocols such as RoCE and iWARP, and communication standards such as Soft-RoCE and Soft-iWARP. When Red Hat Enterprise Linux detects InfiniBand, iWARP, or RoCE devices and their configuration files residing at the /etc/rdma/modules/* directory, the udev device manager instructs systemd to start the rdma service. Configuration of modules in the /etc/rdma/modules/rdma.conf file remains persistent after reboot. You need to restart the [email protected] configuration service to apply changes. Procedure Install the rdma-core package: Edit the /etc/rdma/modules/rdma.conf file and uncomment the modules that you want to enable: Restart the service to make the changes effective: Verification Install the libibverbs-utils and infiniband-diags packages: List the available InfiniBand devices: Display the information of the mlx4_1 device: Display the status of the mlx4_1 device: The ibping utility pings an InfiniBand address and runs as a client/server by configuring the parameters. Start server mode -S on port number -P with -C InfiniBand channel adapter (CA) name on the host: Start client mode, send some packets -c on port number -P by using -C InfiniBand channel adapter (CA) name with -L Local Identifier (LID) on the host: | [
"dnf install rdma-core",
"These modules are loaded by the system if any RDMA devices is installed iSCSI over RDMA client support ib_iser iSCSI over RDMA target support ib_isert SCSI RDMA Protocol target driver ib_srpt User access to RDMA verbs (supports libibverbs) ib_uverbs User access to RDMA connection management (supports librdmacm) rdma_ucm RDS over RDMA support rds_rdma NFS over RDMA client support xprtrdma NFS over RDMA server support svcrdma",
"systemctl restart <[email protected]>",
"dnf install libibverbs-utils infiniband-diags",
"ibv_devices device node GUID ------ ---------------- mlx4_0 0002c903003178f0 mlx4_1 f4521403007bcba0",
"ibv_devinfo -d mlx4_1 hca_id: mlx4_1 transport: InfiniBand (0) fw_ver: 2.30.8000 node_guid: f452:1403:007b:cba0 sys_image_guid: f452:1403:007b:cba3 vendor_id: 0x02c9 vendor_part_id: 4099 hw_ver: 0x0 board_id: MT_1090120019 phys_port_cnt: 2 port: 1 state: PORT_ACTIVE (4) max_mtu: 4096 (5) active_mtu: 2048 (4) sm_lid: 2 port_lid: 2 port_lmc: 0x01 link_layer: InfiniBand port: 2 state: PORT_ACTIVE (4) max_mtu: 4096 (5) active_mtu: 4096 (5) sm_lid: 0 port_lid: 0 port_lmc: 0x00 link_layer: Ethernet",
"ibstat mlx4_1 CA 'mlx4_1' CA type: MT4099 Number of ports: 2 Firmware version: 2.30.8000 Hardware version: 0 Node GUID: 0xf4521403007bcba0 System image GUID: 0xf4521403007bcba3 Port 1: State: Active Physical state: LinkUp Rate: 56 Base lid: 2 LMC: 1 SM lid: 2 Capability mask: 0x0251486a Port GUID: 0xf4521403007bcba1 Link layer: InfiniBand Port 2: State: Active Physical state: LinkUp Rate: 40 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x04010000 Port GUID: 0xf65214fffe7bcba2 Link layer: Ethernet",
"ibping -S -C mlx4_1 -P 1",
"ibping -c 50 -C mlx4_0 -P 1 -L 2"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_infiniband_and_rdma_networks/configuring-the-rdma-service_configuring-infiniband-and-rdma-networks |
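As a sketch of the workflow described above, the following commands enable one additional module and confirm the result. The module name, the comment format in rdma.conf, and the rdma-load-modules@rdma.service instance name are assumptions based on the sample configuration shown in this chapter; check the unit names shipped by rdma-core on your system before relying on them.

# uncomment the ib_uverbs module in the configuration file
sed -i 's/^#[[:space:]]*ib_uverbs/ib_uverbs/' /etc/rdma/modules/rdma.conf
# restart the corresponding service instance so the change takes effect (assumed unit name)
systemctl restart rdma-load-modules@rdma.service
# verify that the RDMA devices are still visible
ibv_devices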
Chapter 4. Using Red Hat Gluster Storage in the Google Cloud Platform | Chapter 4. Using Red Hat Gluster Storage in the Google Cloud Platform Red Hat Gluster Storage provides support to the data needs of cloud-scale applications on Google Cloud Platform (GCP). Red Hat Gluster Storage provides software-defined file storage solution to run on GCP so that customer's applications can use traditional file interfaces with scale-out flexibility and performance. At the core of the Red Hat Gluster Storage design is a completely new method of architecting storage. The result is a system that has immense scalability, is highly resilient, and offers extraordinary performance. Google Cloud Platform Overview The Google Cloud Platform is Google's public cloud offering, which provides many services to run a fully integrated cloud-based environment. The Google Compute Engine is what drives and manages the virtual machine environment. This chapter is based on this virtual machine infrastructure . This virtual framework provides networking, storage, and virtual machines to scale out the Red Hat Gluster Storage environment to meet the demands of the specified workload. For more information on Google Cloud Platform, see https://cloud.google.com , and for information on the Google Compute Engine, see https://cloud.google.com/compute/docs . The following diagram illustrates Google Cloud Platform integration with Red Hat Gluster Storage. Figure 4.1. Integration Architecture For more information on Red Hat Gluster Storage architecture, concepts, and implementation, see Red Hat Gluster Storage Administration Guide . This chapter describes the steps necessary to deploy a Red Hat Gluster Storage environment to Google Cloud Platform using 10 x 2 Distribute-Replicate volume. 4.1. Planning your Deployment This chapter models a 100 TB distributed and replicated file system space. The application server model, which is a Red Hat Gluster Storage client, includes 10 virtual machine instances running a streaming video capture and retrieval simulation. This simulation provides a mixed workload representative of I/O patterns that may be common among other common use cases where a distributed storage system may be most suitable. While this scale allows us to model a high-end simulation of storage capacity and intensity of client activity, a minimum viable implementation may be achieved at a significantly smaller scale. As the model is scaled down your individual requirements and use cases are considered, certain fundamental approaches of this architecture should be taken into account, such as instance sizing, synchronous replication across zones, careful isolation of failure domains, and asynchronous replication to a remote geographical site. Maximum Persistent Disk Size The original test build was limited by the maximum per-VM persistent disk size of 10 TB. Google has since increased that limit to 64 TB. Red Hat will support persistent disks per VM up to Google's current maximum size of 64 TB. (Note that 64 TB is both a per-disk and a per-VM maximum, so the actual data disk maximum will be 64 TB minus the operating system disk size.) Other real-world use cases may involve significantly more client connections than represented in this chapter. While the particular study performed here was limited in client scale due to a focus on server and storage scale, some basic throughput tests showed the linear scale capabilities of the storage system. 
As always, your own design should be tuned to your particular use case and tested for performance and scale limitations. 4.1.1. Environment The scale target is roughly 100 TB of usable storage, with 2-way synchronous replication between zones in the primary pool, and additionally remote asynchronous geo-replication to a secondary pool in another region for disaster recovery. As of this writing, the current maximum size of a Google Compute Engine persistent disk is 10 TB, therefore our design requires 20 bricks for the primary pool and 10 bricks for the secondary pool. The secondary pool will have single data copies which are not synchronously replicated. Note that there is also currently a per-VM limit of 10 TB of persistent disk, so the actual data disk will be configured at 10,220 GB in order to account for the 20 GB root volume persistent disk. All nodes will use a Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7 image that will be manually created and configured with a local virtualization system, that is KVM. Red Hat Gluster Storage replica peers in the local region are placed in separate zones within each region. This allows our synchronous replica copies to be highly available in the case of a zone outage. The Red Hat Gluster Storage server nodes are built as n1-highmem-4 machine types. This machine type is the minimally viable configuration based on the published resource requirements for Red Hat Gluster Storage. Some concession has been made for the minimum memory size based on expected cloud use cases. The n1-highmem-8 machine type may be a more appropriate match, depending on your application and specific needs. 4.1.2. Prerequisites Google account Google Cloud SDK. The Google Cloud SDK contains tools and libraries that enable you to easily create and manage resources on Google Cloud Platform. It will be used later to facilitate the creation of the multiple Red Hat Gluster Storage instances . For instructions to set up and install the Google Cloud SDK, see https://cloud.google.com/sdk . Subscription to access the Red Hat Gluster Storage software channels. For information on subscribing to the Red Hat Gluster Storage 3.5 channels, refer to the Installing Red Hat Gluster Storage chapter in the Red Hat Gluster Storage 3.5 Installation Guide . Minimum required number of nodes is 3. For compatible physical server, virtual server and client OS platforms , refer https://access.redhat.com/articles/66206 . 4.1.3. Primary Storage Pool Configuration Red Hat Gluster Storage configured in a 10 x 2 Distribute-Replicate volume 20 x n1-highmem-4 instances: Resource Specification vCPU 4 Memory 26 GB Boot Disk 20 GB standard persistent disk Data Disk 10,220 GB standard persistent disk. The maximum persistent disk allocation for a single instance is 10 TB. Therefore the maximum size of our data disk is necessarily 10 TB minus the 20 GB size of the boot disk, or 10,220 GB. Image Custom Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7 VM zone allocation: Each Gluster synchronous replica pair is placed across zones in order to limit the impact of a zone failure. A single zone failure will not result in a loss of data access. Note that the setting synchronous replica pairs is a function of the order the bricks defined in the gluster volume create command. 4.1.4. 
Secondary Storage Pool Configuration Gluster configured in a 10 x 1 Distribute volume 10 x n1-highmem-4 instances: Resource Specification vCPU 4 Memory 24 GB Boot Disk 20 GB standard persistent disk Data Disk 10,220 GB standard persistent disk Image Custom Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7 VM zone allocation: The secondary storage pool is designed as a receiver of asynchronous replication, via geo-replication, in a remote region for disaster recovery. To limit the cost of this protective layer, this storage pool is not synchronously replicated within its local region and a distribute-only gluster volume is used. In order to limit the potential impact of an outage, all nodes in this region are placed in the same zone. 4.1.5. Client Configuration Client VMs have been distributed as evenly as possible across the US-CENTRAL1 region, zones A and B. 10 x n1-standard-2 instances: Resource Specification vCPU 2 Memory 7.5 GB Boot Disk 10 GB standard persistent disk Image Custom Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7 4.1.6. Trusted Pool Topology 4.1.7. Obtaining Red Hat Gluster Storage for Google Cloud Platform To download the Red Hat Gluster Storage Server files using a Red Hat Subscription or a Red Hat Evaluation Subscription: Visit the Red Hat Customer Service Portal at https://access.redhat.com/login and enter your user name and password to log in. Click Downloads to visit the Software & Download Center . In the Red Hat Gluster Storage Server area, click Download Software to download the latest version of the qcow2 image. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/chap-Documentation-Deployment_Guide_for_Public_Cloud-Google_Cloud_Platform |
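Because the Google Cloud SDK is a prerequisite, the node creation described in the sizing tables above can be scripted with gcloud. The sketch below creates a single storage node and its data disk; the image name, resource names, and zone are placeholders, and the sizes follow the primary pool specification (20 GB boot disk, 10,220 GB data disk, n1-highmem-4).

gcloud compute disks create rhgs-data-01 --size=10220GB --type=pd-standard --zone=us-central1-a
gcloud compute instances create rhgs-node-01 \
    --machine-type=n1-highmem-4 \
    --zone=us-central1-a \
    --image=rhgs-3-5-rhel-7-7 \
    --boot-disk-size=20GB
gcloud compute instances attach-disk rhgs-node-01 --disk=rhgs-data-01 --zone=us-central1-a

Repeating the same commands with different names and zones produces the 20-node primary pool, with replica peers split across zones as described above.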
Chapter 4. Configuring and setting up remote jobs | Chapter 4. Configuring and setting up remote jobs Red Hat Satellite supports remote execution of commands on hosts. Using remote execution, you can perform various tasks on multiple hosts simultaneously. 4.1. Remote execution in Red Hat Satellite With remote execution, you can run jobs on hosts from Capsules by using shell scripts or Ansible roles and playbooks. Use remote execution for the following benefits in Satellite: Run jobs on multiple hosts at once. Use variables in your commands for more granular control over the jobs you run. Use host facts and parameters to populate the variable values. Specify custom values for templates when you run the command. Communication for remote execution occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host, and can scale to manage many hosts. For more information, see Section 4.4, "Transport modes for remote execution" . To use remote execution, you must define a job template. A job template is a command that you want to apply to remote hosts. You can execute a job template multiple times. Satellite uses ERB syntax job templates. For more information, see Template Writing Reference in Managing hosts . By default, Satellite includes several job templates for shell scripts and Ansible. For more information, see Setting up Job Templates in Managing hosts . Additional resources See Executing a Remote Job in Managing hosts . 4.2. Remote execution workflow For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on your Capsule Server. Before you can use Ansible roles, you must import the roles into Satellite from the Capsule where they are installed. When you run a remote job on hosts, for every host, Satellite performs the following actions to find a remote execution Capsule to use. Satellite searches only for Capsules that have the Ansible feature enabled. Satellite finds the host's interfaces that have the Remote execution checkbox selected. Satellite finds the subnets of these interfaces. Satellite finds remote execution Capsules assigned to these subnets. From this set of Capsules, Satellite selects the Capsule that has the least number of running jobs. By doing this, Satellite ensures that the jobs load is balanced between remote execution Capsules. If you have enabled Prefer registered through Capsule for remote execution , Satellite runs the REX job by using the Capsule to which the host is registered. By default, Prefer registered through Capsule for remote execution is set to No . To enable it, in the Satellite web UI, navigate to Administer > Settings , and on the Content tab, set Prefer registered through Capsule for remote execution to Yes . This ensures that Satellite performs REX jobs on hosts by the Capsule to which they are registered to. If Satellite does not find a remote execution Capsule at this stage, and if the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. 
Satellite selects the most lightly loaded Capsule from the following types of Capsules that are assigned to the host: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule If Satellite does not find a remote execution Capsule at this stage, and if the Enable Global Capsule setting is enabled, Satellite selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. 4.3. Permissions for remote execution You can control which roles can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles: Remote Execution Manager : Can access all remote execution features and functionality. Remote Execution User : Can only run jobs. You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission on a customized role, you can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or Capsules are visible to the role. The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions. You can run remote execution jobs against Red Hat Satellite and Capsule registered as hosts to Red Hat Satellite with the execute_jobs_on_infrastructure_hosts permission. Standard Manager and Site Manager roles have this permission by default. If you use either the Manager or Site Manager role, or if you use a custom role with the execute_jobs_on_infrastructure_hosts permission, you can execute remote jobs against registered Red Hat Satellite and Capsule hosts. For more information on working with roles and permissions, see Creating and Managing Roles in Administering Red Hat Satellite . The following example shows filters for the execute_template_invocation permission: Use the first line in this example to apply the Reboot template to one selected host. Use the second line to define a pool of hosts with names ending with .staging.example.com . Use the third line to bind the template with a host group. Note Permissions assigned to users with these roles can change over time. If you have already scheduled some jobs to run in the future, and the permissions change, this can result in execution failure because permissions are checked immediately before job execution. 4.4. Transport modes for remote execution You can configure your Satellite to use two different modes of transport for remote job execution. You can configure single Capsule to use either one mode or the other but not both. Push-based transport On Capsules in ssh mode, remote execution uses the SSH service to transport job details. This is the default transport mode. The SSH service must be enabled and active on the target hosts. The remote execution Capsule must have access to the SSH port on the target hosts. Unless you have a different setting, the standard SSH port is 22. This transport mode supports both Script and Ansible providers. 
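Before running jobs over the ssh transport, it can be useful to confirm the two requirements above on a representative host. This is a generic sketch that assumes the default port 22; the hostname is a placeholder.

# on the target host: confirm the SSH service is enabled and running
systemctl is-enabled sshd
systemctl is-active sshd
# on the remote execution Capsule: confirm the SSH port on the host is reachable
nc -zv host.example.com 22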
Pull-based transport On Capsules in pull-mqtt mode, remote execution uses Message Queueing Telemetry Transport (MQTT) to initiate the job execution it receives from Satellite Server. The host subscribes to the MQTT broker on Capsule for job notifications by using the yggdrasil pull client. After the host receives a notification from the MQTT broker, it pulls job details from Capsule over HTTPS, runs the job, and reports results back to Capsule. This transport mode supports the Script provider only. To use the pull-mqtt mode, you must enable it on Capsule Server and configure the pull client on hosts. Note If your Capsule already uses the pull-mqtt mode and you want to switch back to the ssh mode, run this satellite-installer command: Additional resources To enable pull mode on Capsule Server, see Configuring pull-based transport for remote execution in Installing Capsule Server . To enable pull mode on a registered host, continue with Section 4.5, "Configuring a host to use the pull client" . To enable pull mode on a new host, continue with the following in Managing hosts : Creating a Host Registering Hosts 4.5. Configuring a host to use the pull client For Capsules configured to use pull-mqtt mode, hosts can subscribe to remote jobs using the remote execution pull client. Hosts do not require an SSH connection from their Capsule Server. Prerequisites You have registered the host to Satellite. The Capsule through which the host is registered is configured to use pull-mqtt mode. For more information, see Configuring pull-based transport for remote execution in Installing Capsule Server . Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . The host can communicate with its Capsule over MQTT using port 1883 . The host can communicate with its Capsule over HTTPS. Procedure Install the katello-pull-transport-migrate package on your host: On Red Hat Enterprise Linux 9 and Red Hat Enterprise Linux 8 hosts: On Red Hat Enterprise Linux 7 hosts: The package installs foreman_ygg_worker and yggdrasil as dependencies, configures the yggdrasil client, and starts the pull client worker on the host. Verification Check the status of the yggdrasild service: 4.6. Creating a job template Use this procedure to create a job template. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Templates > Job templates . Click New Job Template . Click the Template tab, and in the Name field, enter a unique name for your job template. Select Default to make the template available for all organizations and locations. Create the template directly in the template editor or upload it from a text file by clicking Import . Optional: In the Audit Comment field, add information about the change. Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories in Managing hosts . Optional: In the Description Format field, enter a description template. For example, Install package %{package_name} . You can also use %{template_name} and %{job_category} in your template. From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks. 
Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete. Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab. Optional: Click Foreign input set to include other templates in this job. Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting. Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet . Optional: If you use the Ansible provider, click the Ansible tab. Select Enable Ansible Callback to allow hosts to send facts, which are used to create configuration reports, back to Satellite after a job finishes. Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. You can extend and customize job templates by including other templates in the template syntax. For more information, see Template Writing Reference and Job Template Examples and Extensions in Managing hosts . CLI procedure To create a job template using a template-definition file, enter the following command: 4.7. Importing an Ansible Playbook by name You can import Ansible Playbooks by name to Satellite from collections installed on Capsule. Satellite creates a job template from the imported playbook and places the template in the Ansible Playbook - Imported job category. If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/ My_Namespace / My_Collection . Prerequisites Ansible plugin is enabled. Your Satellite account has a role that grants the import_ansible_playbooks permission. Procedure Fetch the available Ansible Playbooks by using the following API request: Select the Ansible Playbook you want to import and note its name. Import the Ansible Playbook by its name: You get a notification in the Satellite web UI after the import completes. steps You can run the playbook by executing a remote job from the created job template. For more information, see Section 4.21, "Executing a remote job" . 4.8. Importing all available Ansible Playbooks You can import all the available Ansible Playbooks to Satellite from collections installed on Capsule. Satellite creates job templates from the imported playbooks and places the templates in the Ansible Playbook - Imported job category. If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/ My_Namespace / My_Collection . Prerequisites Ansible plugin is enabled. Your Satellite account has a role that grants the import_ansible_playbooks permission. Procedure Import the Ansible Playbooks by using the following API request: You get a notification in the Satellite web UI after the import completes. steps You can run the playbooks by executing a remote job from the created job templates. For more information, see Section 4.21, "Executing a remote job" . 4.9. Configuring the fallback to any Capsule remote execution setting in Satellite You can enable the Fallback to Any Capsule setting to configure Satellite to search for remote execution Capsules from the list of Capsules that are assigned to hosts. 
This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to Capsules that do not have the remote execution feature enabled. If the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded Capsule from the set of all Capsules assigned to the host, such as the following: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Fallback to Any Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Fallback to Any Capsule setting. To set the value to true , enter the following command: 4.10. Configuring the global Capsule remote execution setting in Satellite By default, Satellite searches for remote execution Capsules in hosts' organizations and locations regardless of whether Capsules are assigned to hosts' subnets or not. You can disable the Enable Global Capsule setting if you want to limit the search to the Capsules that are assigned to hosts' subnets. If the Enable Global Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Enable Global Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Enable Global Capsule setting. To set the value to true , enter the following command: 4.11. Configuring Satellite to use an alternative directory to execute remote jobs on hosts Ansible puts its own files it requires on the server side into the /tmp directory. You have the option to set a different directory if required. Procedure On your Satellite Server or Capsule Server, create a new directory: Copy the SELinux context from the default /tmp directory: Configure your Satellite Server or Capsule Server to use the new directory: 4.12. Altering the privilege elevation method By default, push-based remote execution uses sudo to switch from the SSH user to the effective user that executes the script on your host. In some situations, you might require to use another method, such as su or dzdo . You can globally configure an alternative method in your Satellite settings. Prerequisites Your user account has a role assigned that grants the view_settings and edit_settings permissions. If you want to use dzdo for Ansible jobs, ensure the community.general Ansible collection, which contains the required dzdo become plugin, is installed. For more information, see Installing collections in Ansible documentation . Procedure Navigate to Administer > Settings . Select the Remote Execution tab. Click the value of the Effective User Method setting. Select the new value. Click Submit . 4.13. Distributing SSH keys for remote execution For Capsules in ssh mode, remote execution connections are authenticated using SSH. The public SSH key from Capsule must be distributed to its attached hosts that you want to manage. 
Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22. Use one of the following methods to distribute the public SSH key from Capsule to target hosts: Section 4.14, "Distributing SSH keys for remote execution manually" . Section 4.16, "Using the Satellite API to obtain SSH keys for remote execution" . Section 4.17, "Configuring a Kickstart template to distribute SSH keys during provisioning" . For new Satellite hosts, you can deploy SSH keys to Satellite hosts during registration using the global registration template. For more information, see Registering a Host to Red Hat Satellite Using the Global Registration Template in Managing hosts . Satellite distributes SSH keys for the remote execution feature to the hosts provisioned from Satellite by default. If the hosts are running on Amazon Web Services, enable password authentication. For more information, see New User Accounts . 4.14. Distributing SSH keys for remote execution manually To distribute SSH keys manually, complete the following steps: Procedure Copy the SSH pub key from your Capsule to your target host: Repeat this step for each target host you want to manage. Verification To confirm that the key was successfully copied to the target host, enter the following command on Capsule: 4.15. Adding a passphrase to SSH key used for remote execution By default, Capsule uses a non-passphrase protected SSH key to execute remote jobs on hosts. You can protect the SSH key with a passphrase by following this procedure. Procedure On your Satellite Server or Capsule Server, use ssh-keygen to add a passphrase to your SSH key: steps Users now must use a passphrase when running remote execution jobs on hosts. 4.16. Using the Satellite API to obtain SSH keys for remote execution To use the Satellite API to download the public key from Capsule, complete this procedure on each target host. Procedure On the target host, create the ~/.ssh directory to store the SSH key: Download the SSH key from Capsule: Configure permissions for the ~/.ssh directory: Configure permissions for the authorized_keys file: 4.17. Configuring a Kickstart template to distribute SSH keys during provisioning You can add a remote_execution_ssh_keys snippet to your custom Kickstart template to deploy SSH keys to hosts during provisioning. Kickstart templates that Satellite ships include this snippet by default. Satellite copies the SSH key for remote execution to the systems during provisioning. Procedure To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use: 4.18. Configuring a keytab for Kerberos ticket granting tickets Use this procedure to configure Satellite to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets. Procedure Find the ID of the foreman-proxy user: Modify the umask value so that new files have the permissions 600 : Create the directory for the keytab: Create a keytab or copy an existing keytab to the directory: Change the directory owner to the foreman-proxy user: Ensure that the keytab file is read-only: Restore the SELinux context: 4.19. Configuring Kerberos authentication for remote execution You can use Kerberos authentication to establish an SSH connection for remote execution on Satellite hosts. 
Prerequisites Enroll Satellite Server on the Kerberos server Enroll the Satellite target host on the Kerberos server Configure and initialize a Kerberos user account for remote execution Ensure that the foreman-proxy user on Satellite has a valid Kerberos ticket granting ticket Procedure To install and enable Kerberos authentication for remote execution, enter the following command: To edit the default user for remote execution, in the Satellite web UI, navigate to Administer > Settings and click the Remote Execution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account. Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account. Verification To confirm that Kerberos authentication is ready to use, run a remote job on the host. For more information, see Executing a Remote Job in Managing configurations by using Ansible integration . 4.20. Setting up job templates Satellite provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Templates > Job templates . If you want to use a template without making changes, proceed to Executing a Remote Job in Managing hosts . You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone. Procedure To clone a template, in the Actions column, select Clone . Enter a unique name for the clone and click Submit to save the changes. Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in Managing hosts . Ansible considerations To create an Ansible job template, use the following procedure and, instead of ERB syntax, use YAML syntax. Begin the template with --- . You can embed an Ansible Playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible Playbooks in Satellite. For more information, see Synchronizing Repository Templates in Managing hosts . Parameter variables At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host's edit page can be used as input parameters for job templates. 4.21. Executing a remote job You can execute a job that is based on a job template against one or more hosts. Note Ansible jobs run in batches on multiple hosts, so you cannot cancel a job running on a specific host. A job completes only after the Ansible Playbook runs on all hosts in the batch. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Monitor > Jobs and click Run job . Select the Job category and the Job template you want to use, then click Next . Select hosts on which you want to run the job. If you do not select any hosts, the job will run on all hosts you can see in the current context. Note If you want to select a host group and all of its subgroups, it is not sufficient to select the host group, as the job would only run on hosts directly in that group and not on hosts in subgroups. Instead, you must either select the host group and all of its subgroups or use this search query: Replace My_Host_Group with the name of the top-level host group. If required, provide inputs for the job template. Different templates have different inputs and some templates do not have any inputs.
After entering all the required inputs, click Next . Optional: To configure advanced settings for the job, fill in the Advanced fields . To learn more about advanced settings, see Section 4.22, "Advanced settings in the job wizard" . Click Next . Schedule time for the job. To execute the job immediately, keep the pre-selected Immediate execution . To execute the job at a future time, select Future execution . To execute the job on a regular basis, select Recurring execution . Optional: If you selected future or recurring execution, select the Query type , otherwise click Next . Static query means that the job executes on the exact list of hosts that you provided. Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can differ from when you first used that filter. Click Next after you have selected the query type. Optional: If you selected future or recurring execution, provide additional details: For Future execution , enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it will be canceled. For Recurring execution , select the start date and time, frequency, and the condition for ending the recurring job. You can choose the recurrence to never end, end at a certain time, or end after a given number of repetitions. You can also add Purpose - a special label for tracking the job. There can only be one active job with a given purpose at a time. Click Next after you have entered the required information. Review job details. You have the option to return to any part of the job wizard and edit the information. Click Submit to schedule the job for execution. CLI procedure Enter the following command on Satellite: Find the ID of the job template you want to use: Show the template details to see parameters required by your template: Execute a remote job with custom parameters: Replace My_Search_Query with the filter expression that defines hosts, for example "name ~ My_Pattern " . Additional resources For more information about creating, monitoring, or canceling remote jobs with Hammer CLI, enter hammer job-template --help and hammer job-invocation --help . 4.22. Advanced settings in the job wizard Some job templates require you to enter advanced settings. Some of the advanced settings are only visible to certain job templates. Below is the list of general advanced settings. SSH user A user to be used for connecting to the host through SSH. Effective user A user to be used for executing the job. By default it is the SSH user. If it differs from the SSH user, su or sudo, depending on your settings, is used to switch the accounts. If you set an effective user in the advanced settings, Ansible sets ansible_become_user to your input value and ansible_become to true . This means that if you use the parameters become: true and become_user: My_User within a playbook, these will be overwritten by Satellite. If your SSH user and effective user are identical, Satellite does not overwrite the become_user . Therefore, you can set a custom become_user in your Ansible Playbook. Description A description template for the job. Timeout to kill Time in seconds from the start of the job after which the job should be killed if it has not already finished. Time to pickup Time in seconds after which the job is canceled if it is not picked up by a client. This setting only applies to hosts using pull-mqtt transport.
Password Is used if the SSH authentication method is a password instead of the SSH key. Private key passphrase Is used if SSH keys are protected by a passphrase. Effective user password Is used if the effective user is different from the SSH user. Concurrency level Defines the maximum number of jobs executed at once. This can prevent overloading system resources when executing the job on a large number of hosts. Execution ordering Determines the order in which the job is executed on hosts. It can be alphabetical or randomized. 4.23. Using extended cron lines When scheduling a cron job with remote execution, you can use an extended cron line to specify the cadence of the job. The standard cron line contains five fields that specify minute, hour, day of the month, month, and day of the week. For example, 0 5 * * * stands for every day at 5 AM. The extended cron line provides the following features: You can use # to specify a concrete weekday in a month. For example: 0 0 * * mon#1 specifies the first Monday of the month 0 0 * * fri#3,fri#4 specifies the 3rd and 4th Fridays of the month 0 7 * * fri#-1 specifies the last Friday of the month at 07:00 0 7 * * fri#L also specifies the last Friday of the month at 07:00 0 23 * * mon#2,tue specifies the 2nd Monday of the month and every Tuesday, at 23:00 You can use % to specify every n-th day of the month. For example: 9 0 * * sun%2 specifies every other Sunday at 00:09 0 0 * * sun%2+1 specifies every odd Sunday 9 0 * * sun%2,tue%3 specifies every other Sunday and every third Tuesday You can use & to specify that the day of the month has to match the day of the week. For example: 0 0 30 * 1& specifies the 30th day of the month, but only if it is a Monday 4.24. Scheduling a recurring Ansible job for a host You can schedule a recurring job to run Ansible roles on hosts. Prerequisites Ensure you have the view_foreman_tasks , view_job_invocations , and view_recurring_logics permissions. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target host on which you want to execute a remote job. On the Ansible tab, select Jobs . Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . Optional: View the scheduled Ansible job in the host overview or by navigating to Ansible > Jobs . 4.25. Scheduling a recurring Ansible job for a host group You can schedule a recurring job to run Ansible roles on host groups. Procedure In the Satellite web UI, navigate to Configure > Host groups . In the Actions column, select Configure Ansible Job for the host group for which you want to schedule an Ansible roles run. Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . 4.26. Using Ansible provider for package and errata actions By default, Satellite is configured to use the Script provider templates for remote execution jobs. If you prefer using Ansible job templates for your remote jobs, you can configure Satellite to use them by default for remote execution features associated with them. Note Remember that Ansible job templates only work when remote execution is configured for ssh mode. Procedure In the Satellite web UI, navigate to Administer > Remote Execution Features . Find each feature whose name contains by_search . Change the job template for these features from Katello Script Default to Katello Ansible Default . Click Submit .
Satellite now uses Ansible provider templates for remote execution jobs by which you can perform package and errata actions. This applies to job invocations from the Satellite web UI as well as by using hammer job-invocation create with the same remote execution features that you have changed. 4.27. Setting the job rate limit on Capsule You can limit the maximum number of active jobs on a Capsule at a time to prevent performance spikes. The job is active from the time Capsule first tries to notify the host about the job until the job is finished on the host. The job rate limit only applies to mqtt based jobs. Note The optimal maximum number of active jobs depends on the computing resources of your Capsule Server. By default, the maximum number of active jobs is unlimited. Procedure Set the maximum number of active jobs using satellite-installer : For example: | [
"name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=ssh",
"dnf install katello-pull-transport-migrate",
"yum install katello-pull-transport-migrate",
"systemctl status yggdrasild",
"hammer job-template create --file \" Path_to_My_Template_File \" --job-category \" My_Category_Name \" --name \" My_Template_Name \" --provider-type SSH",
"curl --header 'Content-Type: application/json' --request GET https:// satellite.example.com /ansible/api/v2/ansible_playbooks/fetch?proxy_id= My_capsule_ID",
"curl --data '{ \"playbook_names\": [\" My_Playbook_Name \"] }' --header 'Content-Type: application/json' --request PUT https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID",
"curl -X PUT -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID",
"hammer settings set --name=remote_execution_fallback_proxy --value=true",
"hammer settings set --name=remote_execution_global_proxy --value=true",
"mkdir /My_Remote_Working_Directory",
"chcon --reference=/tmp /My_Remote_Working_Directory",
"satellite-installer --foreman-proxy-plugin-ansible-working-dir /My_Remote_Working_Directory",
"ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]",
"ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]",
"ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy",
"mkdir ~/.ssh",
"curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys",
"chmod 700 ~/.ssh",
"chmod 600 ~/.ssh/authorized_keys",
"<%= snippet 'remote_execution_ssh_keys' %>",
"id -u foreman-proxy",
"umask 077",
"mkdir -p \"/var/kerberos/krb5/user/ My_User_ID \"",
"cp My_Client.keytab /var/kerberos/krb5/user/ My_User_ID /client.keytab",
"chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ My_User_ID \"",
"chmod -wx \"/var/kerberos/krb5/user/ My_User_ID /client.keytab\"",
"restorecon -RvF /var/kerberos/krb5",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true",
"hostgroup_fullname ~ \" My_Host_Group *\"",
"hammer settings set --name=remote_execution_global_proxy --value=false",
"hammer job-template list",
"hammer job-template info --id My_Template_ID",
"hammer job-invocation create --inputs My_Key_1 =\" My_Value_1 \", My_Key_2 =\" My_Value_2 \",... --job-template \" My_Template_Name \" --search-query \" My_Search_Query \"",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit MAX_JOBS_NUMBER",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_ansible_integration/configuring_and_setting_up_remote_jobs_ansible |
Chapter 21. KIE Server REST API for KIE containers and business assets | Chapter 21. KIE Server REST API for KIE containers and business assets Red Hat Decision Manager provides a KIE Server REST API that you can use to interact with your KIE containers and business assets (such as business rules, processes, and solvers) in Red Hat Decision Manager without using the Business Central user interface. This API support enables you to maintain your Red Hat Decision Manager resources more efficiently and optimize your integration and development with Red Hat Decision Manager. With the KIE Server REST API, you can perform the following actions: Deploy or dispose KIE containers Retrieve and update KIE container information Return KIE Server status and basic information Retrieve and update business asset information Execute business assets (such as rules and processes) KIE Server REST API requests require the following components: Authentication The KIE Server REST API requires HTTP Basic authentication or token-based authentication for the user role kie-server . To view configured user roles for your Red Hat Decision Manager distribution, navigate to ~/USDSERVER_HOME/standalone/configuration/application-roles.properties and ~/application-users.properties . To add a user with the kie-server role, navigate to ~/USDSERVER_HOME/bin and run the following command: USD ./bin/jboss-cli.sh --commands="embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['kie-server'])" For more information about user roles and Red Hat Decision Manager installation options, see Planning a Red Hat Decision Manager installation . HTTP headers The KIE Server REST API requires the following HTTP headers for API requests: Accept : Data format accepted by your requesting client: application/json (JSON) application/xml (XML, for JAXB or XSTREAM) Content-Type : Data format of your POST or PUT API request data: application/json (JSON) application/xml (XML, for JAXB or XSTREAM) X-KIE-ContentType : Required header for application/xml XSTREAM API requests and responses: XSTREAM HTTP methods The KIE Server REST API supports the following HTTP methods for API requests: GET : Retrieves specified information from a specified resource endpoint POST : Updates a resource or resource instance PUT : Updates or creates a resource or resource instance DELETE : Deletes a resource or resource instance Base URL The base URL for KIE Server REST API requests is http://SERVER:PORT/kie-server/services/rest/ , such as http://localhost:8080/kie-server/services/rest/ . Endpoints KIE Server REST API endpoints, such as /server/containers/{containerId} for a specified KIE container, are the URIs that you append to the KIE Server REST API base URL to access the corresponding resource or type of resource in Red Hat Decision Manager. Example request URL for /server/containers/{containerId} endpoint http://localhost:8080/kie-server/services/rest/server/containers/MyContainer Request parameters and request data Many KIE Server REST API requests require specific parameters in the request URL path to identify or filter specific resources and to perform specific actions. You can append URL parameters to the endpoint in the format ?<PARAM>=<VALUE>&<PARAM>=<VALUE> . 
Example GET request URL with parameters http://localhost:8080/kie-server/services/rest/server/containers?groupId=com.redhat&artifactId=Project1&version=1.0&status=STARTED HTTP POST and PUT requests may additionally require a request body or file with data to accompany the request. Example POST request URL and JSON request body data http://localhost:8080/kie-server/services/rest/server/containers/MyContainer/release-id { "release-id": { "artifact-id": "Project1", "group-id": "com.redhat", "version": "1.1" } } 21.1. Sending requests with the KIE Server REST API using a REST client or curl utility The KIE Server REST API enables you to interact with your KIE containers and business assets (such as business rules, processes, and solvers) in Red Hat Decision Manager without using the Business Central user interface. You can send KIE Server REST API requests using any REST client or curl utility. Prerequisites KIE Server is installed and running. You have kie-server user role access to KIE Server. Procedure Identify the relevant API endpoint to which you want to send a request, such as [GET] /server/containers to retrieve KIE containers from KIE Server. In a REST client or curl utility, enter the following components for a GET request to /server/containers . Adjust any request details according to your use case. For REST client: Authentication : Enter the user name and password of the KIE Server user with the kie-server role. HTTP Headers : Set the following header: Accept : application/json HTTP method : Set to GET . URL : Enter the KIE Server REST API base URL and endpoint, such as http://localhost:8080/kie-server/services/rest/server/containers . For curl utility: -u : Enter the user name and password of the KIE Server user with the kie-server role. -H : Set the following header: Accept : application/json -X : Set to GET . URL : Enter the KIE Server REST API base URL and endpoint, such as http://localhost:8080/kie-server/services/rest/server/containers . Execute the request and review the KIE Server response. Example server response (JSON): { "type": "SUCCESS", "msg": "List of created containers", "result": { "kie-containers": { "kie-container": [ { "container-id": "itorders_1.0.0-SNAPSHOT", "release-id": { "group-id": "itorders", "artifact-id": "itorders", "version": "1.0.0-SNAPSHOT" }, "resolved-release-id": { "group-id": "itorders", "artifact-id": "itorders", "version": "1.0.0-SNAPSHOT" }, "status": "STARTED", "scanner": { "status": "DISPOSED", "poll-interval": null }, "config-items": [], "container-alias": "itorders" } ] } } } For this example, copy or note the project group-id , artifact-id , and version (GAV) data from one of the deployed KIE containers returned in the response. In your REST client or curl utility, send another API request with the following components for a PUT request to /server/containers/{containerId} to deploy a new KIE container with the copied project GAV data. Adjust any request details according to your use case. For REST client: Authentication : Enter the user name and password of the KIE Server user with the kie-server role. HTTP Headers : Set the following headers: Accept : application/json Content-Type : application/json Note When you add fields=not_null to Content-Type , the null fields are excluded from the REST API response. HTTP method : Set to PUT . URL : Enter the KIE Server REST API base URL and endpoint, such as http://localhost:8080/kie-server/services/rest/server/containers/MyContainer . 
Request body : Add a JSON request body with the configuration items for the new KIE container: { "config-items": [ { "itemName": "RuntimeStrategy", "itemValue": "SINGLETON", "itemType": "java.lang.String" }, { "itemName": "MergeMode", "itemValue": "MERGE_COLLECTIONS", "itemType": "java.lang.String" }, { "itemName": "KBase", "itemValue": "", "itemType": "java.lang.String" }, { "itemName": "KSession", "itemValue": "", "itemType": "java.lang.String" } ], "release-id": { "group-id": "itorders", "artifact-id": "itorders", "version": "1.0.0-SNAPSHOT" }, "scanner": { "poll-interval": "5000", "status": "STARTED" } } For curl utility: -u : Enter the user name and password of the KIE Server user with the kie-server role. -H : Set the following headers: Accept : application/json Content-Type : application/json Note When you add fields=not_null to Content-Type , the null fields are excluded from the REST API response. -X : Set to PUT . URL : Enter the KIE Server REST API base URL and endpoint, such as http://localhost:8080/kie-server/services/rest/server/containers/MyContainer . -d : Add a JSON request body or file ( @file.json ) with the configuration items for the new KIE container: Execute the request and review the KIE Server response. Example server response (JSON): { "type": "SUCCESS", "msg": "Container MyContainer successfully deployed with module itorders:itorders:1.0.0-SNAPSHOT.", "result": { "kie-container": { "container-id": "MyContainer", "release-id": { "group-id": "itorders", "artifact-id": "itorders", "version": "1.0.0-SNAPSHOT" }, "resolved-release-id": { "group-id": "itorders", "artifact-id": "itorders", "version": "1.0.0-SNAPSHOT" }, "status": "STARTED", "scanner": { "status": "STARTED", "poll-interval": 5000 }, "config-items": [], "messages": [ { "severity": "INFO", "timestamp": { "java.util.Date": 1540584717937 }, "content": [ "Container MyContainer successfully created with module itorders:itorders:1.0.0-SNAPSHOT." ] } ], "container-alias": null } } } If you encounter request errors, review the returned error code messages and adjust your request accordingly. 21.2. Sending requests with the KIE Server REST API using the Swagger interface The KIE Server REST API supports a Swagger web interface that you can use instead of a standalone REST client or curl utility to interact with your KIE containers and business assets (such as business rules, processes, and solvers) in Red Hat Decision Manager without using the Business Central user interface. Note By default, the Swagger web interface for KIE Server is enabled by the org.kie.swagger.server.ext.disabled=false system property. To disable the Swagger web interface in KIE Server, set this system property to true . Prerequisites KIE Server is installed and running. You have kie-server user role access to KIE Server. Procedure In a web browser, navigate to http://SERVER:PORT/kie-server/docs , such as http://localhost:8080/kie-server/docs , and log in with the user name and password of the KIE Server user with the kie-server role. In the Swagger page, select the relevant API endpoint to which you want to send a request, such as KIE Server and KIE containers [GET] /server/containers to retrieve KIE containers from KIE Server. Click Try it out and provide any optional parameters by which you want to filter results, if needed. In the Response content type drop-down menu, select the desired format of the server response, such as application/json for JSON format. Click Execute and review the KIE Server response. 
Example server response (JSON): { "type": "SUCCESS", "msg": "List of created containers", "result": { "kie-containers": { "kie-container": [ { "container-id": "itorders_1.0.0-SNAPSHOT", "release-id": { "group-id": "itorders", "artifact-id": "itorders", "version": "1.0.0-SNAPSHOT" }, "resolved-release-id": { "group-id": "itorders", "artifact-id": "itorders", "version": "1.0.0-SNAPSHOT" }, "status": "STARTED", "scanner": { "status": "DISPOSED", "poll-interval": null }, "config-items": [], "container-alias": "itorders" } ] } } } For this example, copy or note the project group-id , artifact-id , and version (GAV) data from one of the deployed KIE containers returned in the response. In the Swagger page, navigate to the KIE Server and KIE containers [PUT] /server/containers/{containerId} endpoint to send another request to deploy a new KIE container with the copied project GAV data. Adjust any request details according to your use case. Click Try it out and enter the following components for the request: containerId : Enter the ID of the new KIE container, such as MyContainer . body : Set the Parameter content type to the desired request body format, such as application/json for JSON format, and add a request body with the configuration items for the new KIE container: { "config-items": [ { "itemName": "RuntimeStrategy", "itemValue": "SINGLETON", "itemType": "java.lang.String" }, { "itemName": "MergeMode", "itemValue": "MERGE_COLLECTIONS", "itemType": "java.lang.String" }, { "itemName": "KBase", "itemValue": "", "itemType": "java.lang.String" }, { "itemName": "KSession", "itemValue": "", "itemType": "java.lang.String" } ], "release-id": { "group-id": "itorders", "artifact-id": "itorders", "version": "1.0.0-SNAPSHOT" }, "scanner": { "poll-interval": "5000", "status": "STARTED" } } In the Response content type drop-down menu, select the desired format of the server response, such as application/json for JSON format. Click Execute and review the KIE Server response. Example server response (JSON): { "type": "SUCCESS", "msg": "Container MyContainer successfully deployed with module itorders:itorders:1.0.0-SNAPSHOT.", "result": { "kie-container": { "container-id": "MyContainer", "release-id": { "group-id": "itorders", "artifact-id": "itorders", "version": "1.0.0-SNAPSHOT" }, "resolved-release-id": { "group-id": "itorders", "artifact-id": "itorders", "version": "1.0.0-SNAPSHOT" }, "status": "STARTED", "scanner": { "status": "STARTED", "poll-interval": 5000 }, "config-items": [], "messages": [ { "severity": "INFO", "timestamp": { "java.util.Date": 1540584717937 }, "content": [ "Container MyContainer successfully created with module itorders:itorders:1.0.0-SNAPSHOT." ] } ], "container-alias": null } } } If you encounter request errors, review the returned error code messages and adjust your request accordingly. 21.3. Supported KIE Server REST API endpoints The KIE Server REST API provides endpoints for the following types of resources in Red Hat Decision Manager: KIE Server and KIE containers KIE session assets (for runtime commands) DMN assets Planning solvers The KIE Server REST API base URL is http://SERVER:PORT/kie-server/services/rest/ . All requests require HTTP Basic authentication or token-based authentication for the kie-server user role. 
For the full list of KIE Server REST API endpoints and descriptions, use one of the following resources: Execution Server REST API on the jBPM Documentation page (static) Swagger UI for the KIE Server REST API at http://SERVER:PORT/kie-server/docs (dynamic, requires running KIE Server) Note By default, the Swagger web interface for KIE Server is enabled by the org.kie.swagger.server.ext.disabled=false system property. To disable the Swagger web interface in KIE Server, set this system property to true . 21.3.1. REST endpoints for specific DMN models Red Hat Decision Manager provides model-specific DMN KIE Server endpoints that you can use to interact with your specific DMN model without using the Business Central user interface. For each DMN model in a container in Red Hat Decision Manager, the following KIE Server REST endpoints are automatically generated based on the content of the DMN model: POST /server/containers/{containerId}/dmn/models/{modelname} : A business-domain endpoint for evaluating a specified DMN model in a container POST /server/containers/{containerId}/dmn/models/{modelname}/{decisionServiceName} : A business-domain endpoint for evaluating a specified decision service component in a specific DMN model available in a container POST /server/containers/{containerId}/dmn/models/{modelname}/dmnresult : An endpoint for evaluating a specified DMN model containing customized body payload and returning a DMNResult response, including business-domain context, helper messages, and helper decision pointers POST /server/containers/{containerId}/dmn/models/{modelname}/{decisionServiceName}/dmnresult : An endpoint for evaluating a specified decision service component in a specific DMN model and returning a DMNResult response, including the business-domain context, helper messages, and help decision pointers for the decision service GET /server/containers/{containerId}/dmn/models/{modelname} : An endpoint for returning standard DMN XML without decision logic and containing the inputs and decisions of the specified DMN model GET /server/containers/{containerId}/dmn/openapi.json (|.yaml) : An endpoint for retrieving Swagger or OAS for the DMN models in a specified container You can use these endpoints to interact with a DMN model or a specific decision service within a model. As you decide between using business-domain and dmnresult variants of these REST endpoints, review the following considerations: REST business-domain endpoints : Use this endpoint type if a client application is only concerned with a positive evaluation outcome, is not interested in parsing Info or Warn messages, and only needs an HTTP 5xx response for any errors. This type of endpoint is also helpful for application-like clients, due to singleton coercion of decision service results that resemble the DMN modeling behavior. REST dmnresult endpoints : Use this endpoint type if a client needs to parse Info , Warn , or Error messages in all cases. 
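As a minimal sketch of the second consideration, assuming the mykjar-project container and Traffic Violation model used in the examples below, and assuming the jq utility is available on the client (an assumption, not a product requirement), the dmnresult variant can be queried and its per-decision evaluation status and Info, Warn, or Error messages extracted as follows:
# Query the dmnresult variant and surface the evaluation status and messages for each decision
curl -s -u wbadmin:wbadmin -X POST "http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic%20Violation/dmnresult" -H 'content-type: application/json' -H 'accept: application/json' -d '{"Driver": {"Points": 2}, "Violation": {"Type": "speed", "Actual Speed": 120, "Speed Limit": 100}}' | jq '.decisionResults[] | {decisionName, evaluationStatus, messages}'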
For each endpoint, use a REST client or curl utility to send requests with the following components: Base URL : http:// HOST : PORT /kie-server/services/rest/ Path parameters : {containerId} : The string identifier of the container, such as mykjar-project {modelName} : The string identifier of the DMN model, such as Traffic Violation {decisionServiceName} : The string identifier of the decision service component in the DMN DRG, such as TrafficViolationDecisionService dmnresult : The string identifier that enables the endpoint to return a full DMNResult response with more detailed Info , Warn , and Error messaging HTTP headers : For POST requests only: accept : application/json content-type : application/json HTTP methods : GET or POST The examples in the following endpoints are based on a mykjar-project container that contains a Traffic Violation DMN model, containing a TrafficViolationDecisionService decision service component. For all of these endpoints, if a DMN evaluation Error message occurs, a DMNResult response is returned along with an HTTP 5xx error. If a DMN Info or Warn message occurs, the relevant response is returned along with the business-domain REST body, in the X-Kogito-decision-messages extended HTTP header, to be used for client-side business logic. When there is a requirement of more refined client-side business logic, the client can use the dmnresult variant of the endpoints. Retrieve Swagger or OAS for DMN models in a specified container GET /server/containers/{containerId}/dmn/openapi.json (|.yaml) Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/openapi.json (|.yaml) Return the DMN XML without decision logic GET /server/containers/{containerId}/dmn/models/{modelname} Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation Example curl request Example response (XML) <?xml version='1.0' encoding='UTF-8'?> <dmn:definitions xmlns:dmn="http://www.omg.org/spec/DMN/20180521/MODEL/" xmlns="https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF" xmlns:di="http://www.omg.org/spec/DMN/20180521/DI/" xmlns:kie="http://www.drools.org/kie/dmn/1.2" xmlns:feel="http://www.omg.org/spec/DMN/20180521/FEEL/" xmlns:dmndi="http://www.omg.org/spec/DMN/20180521/DMNDI/" xmlns:dc="http://www.omg.org/spec/DMN/20180521/DC/" id="_1C792953-80DB-4B32-99EB-25FBE32BAF9E" name="Traffic Violation" expressionLanguage="http://www.omg.org/spec/DMN/20180521/FEEL/" typeLanguage="http://www.omg.org/spec/DMN/20180521/FEEL/" namespace="https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF"> <dmn:extensionElements/> <dmn:itemDefinition id="_63824D3F-9173-446D-A940-6A7F0FA056BB" name="tDriver" isCollection="false"> <dmn:itemComponent id="_9DAB5DAA-3B44-4F6D-87F2-95125FB2FEE4" name="Name" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_856BA8FA-EF7B-4DF9-A1EE-E28263CE9955" name="Age" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_FDC2CE03-D465-47C2-A311-98944E8CC23F" name="State" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_D6FD34C4-00DC-4C79-B1BF-BBCF6FC9B6D7" name="City" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_7110FE7E-1A38-4C39-B0EB-AEEF06BA37F4" name="Points" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> 
</dmn:itemDefinition> <dmn:itemDefinition id="_40731093-0642-4588-9183-1660FC55053B" name="tViolation" isCollection="false"> <dmn:itemComponent id="_39E88D9F-AE53-47AD-B3DE-8AB38D4F50B3" name="Code" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_1648EA0A-2463-4B54-A12A-D743A3E3EE7B" name="Date" isCollection="false"> <dmn:typeRef>date</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_9F129EAA-4E71-4D99-B6D0-84EEC3AC43CC" name="Type" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> <dmn:allowedValues kie:constraintType="enumeration" id="_626A8F9C-9DD1-44E0-9568-0F6F8F8BA228"> <dmn:text>"speed", "parking", "driving under the influence"</dmn:text> </dmn:allowedValues> </dmn:itemComponent> <dmn:itemComponent id="_DDD10D6E-BD38-4C79-9E2F-8155E3A4B438" name="Speed Limit" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_229F80E4-2892-494C-B70D-683ABF2345F6" name="Actual Speed" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:itemDefinition id="_2D4F30EE-21A6-4A78-A524-A5C238D433AE" name="tFine" isCollection="false"> <dmn:itemComponent id="_B9F70BC7-1995-4F51-B949-1AB65538B405" name="Amount" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_F49085D6-8F08-4463-9A1A-EF6B57635DBD" name="Points" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:inputData id="_1929CBD5-40E0-442D-B909-49CEDE0101DC" name="Violation"> <dmn:variable id="_C16CF9B1-5FAB-48A0-95E0-5FCD661E0406" name="Violation" typeRef="tViolation"/> </dmn:inputData> <dmn:decision id="_4055D956-1C47-479C-B3F4-BAEB61F1C929" name="Fine"> <dmn:variable id="_8C1EAC83-F251-4D94-8A9E-B03ACF6849CD" name="Fine" typeRef="tFine"/> <dmn:informationRequirement id="_800A3BBB-90A3-4D9D-BA5E-A311DED0134F"> <dmn:requiredInput href="#_1929CBD5-40E0-442D-B909-49CEDE0101DC"/> </dmn:informationRequirement> </dmn:decision> <dmn:inputData id="_1F9350D7-146D-46F1-85D8-15B5B68AF22A" name="Driver"> <dmn:variable id="_A80F16DF-0DB4-43A2-B041-32900B1A3F3D" name="Driver" typeRef="tDriver"/> </dmn:inputData> <dmn:decision id="_8A408366-D8E9-4626-ABF3-5F69AA01F880" name="Should the driver be suspended?"> <dmn:question>Should the driver be suspended due to points on his license?</dmn:question> <dmn:allowedAnswers>"Yes", "No"</dmn:allowedAnswers> <dmn:variable id="_40387B66-5D00-48C8-BB90-E83EE3332C72" name="Should the driver be suspended?" 
typeRef="string"/> <dmn:informationRequirement id="_982211B1-5246-49CD-BE85-3211F71253CF"> <dmn:requiredInput href="#_1F9350D7-146D-46F1-85D8-15B5B68AF22A"/> </dmn:informationRequirement> <dmn:informationRequirement id="_AEC4AA5F-50C3-4FED-A0C2-261F90290731"> <dmn:requiredDecision href="#_4055D956-1C47-479C-B3F4-BAEB61F1C929"/> </dmn:informationRequirement> </dmn:decision> <dmndi:DMNDI> <dmndi:DMNDiagram> <di:extension/> <dmndi:DMNShape id="dmnshape-_1929CBD5-40E0-442D-B909-49CEDE0101DC" dmnElementRef="_1929CBD5-40E0-442D-B909-49CEDE0101DC" isCollapsed="false"> <dmndi:DMNStyle> <dmndi:FillColor red="255" green="255" blue="255"/> <dmndi:StrokeColor red="0" green="0" blue="0"/> <dmndi:FontColor red="0" green="0" blue="0"/> </dmndi:DMNStyle> <dc:Bounds x="708" y="350" width="100" height="50"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id="dmnshape-_4055D956-1C47-479C-B3F4-BAEB61F1C929" dmnElementRef="_4055D956-1C47-479C-B3F4-BAEB61F1C929" isCollapsed="false"> <dmndi:DMNStyle> <dmndi:FillColor red="255" green="255" blue="255"/> <dmndi:StrokeColor red="0" green="0" blue="0"/> <dmndi:FontColor red="0" green="0" blue="0"/> </dmndi:DMNStyle> <dc:Bounds x="709" y="210" width="100" height="50"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id="dmnshape-_1F9350D7-146D-46F1-85D8-15B5B68AF22A" dmnElementRef="_1F9350D7-146D-46F1-85D8-15B5B68AF22A" isCollapsed="false"> <dmndi:DMNStyle> <dmndi:FillColor red="255" green="255" blue="255"/> <dmndi:StrokeColor red="0" green="0" blue="0"/> <dmndi:FontColor red="0" green="0" blue="0"/> </dmndi:DMNStyle> <dc:Bounds x="369" y="344" width="100" height="50"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id="dmnshape-_8A408366-D8E9-4626-ABF3-5F69AA01F880" dmnElementRef="_8A408366-D8E9-4626-ABF3-5F69AA01F880" isCollapsed="false"> <dmndi:DMNStyle> <dmndi:FillColor red="255" green="255" blue="255"/> <dmndi:StrokeColor red="0" green="0" blue="0"/> <dmndi:FontColor red="0" green="0" blue="0"/> </dmndi:DMNStyle> <dc:Bounds x="534" y="83" width="133" height="63"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNEdge id="dmnedge-_800A3BBB-90A3-4D9D-BA5E-A311DED0134F" dmnElementRef="_800A3BBB-90A3-4D9D-BA5E-A311DED0134F"> <di:waypoint x="758" y="375"/> <di:waypoint x="759" y="235"/> </dmndi:DMNEdge> <dmndi:DMNEdge id="dmnedge-_982211B1-5246-49CD-BE85-3211F71253CF" dmnElementRef="_982211B1-5246-49CD-BE85-3211F71253CF"> <di:waypoint x="419" y="369"/> <di:waypoint x="600.5" y="114.5"/> </dmndi:DMNEdge> <dmndi:DMNEdge id="dmnedge-_AEC4AA5F-50C3-4FED-A0C2-261F90290731" dmnElementRef="_AEC4AA5F-50C3-4FED-A0C2-261F90290731"> <di:waypoint x="759" y="235"/> <di:waypoint x="600.5" y="114.5"/> </dmndi:DMNEdge> </dmndi:DMNDiagram> </dmndi:DMNDI> Evaluate a specified DMN model in a specified container POST /server/containers/{containerId}/dmn/models/{modelname} Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation Example curl request Example POST request body with input data { "Driver": { "Points": 15 }, "Violation": { "Date": "2021-04-08", "Type": "speed", "Actual Speed": 135, "Speed Limit": 100 } } Example response (JSON) { "Violation": { "Type": "speed", "Speed Limit": 100, "Actual Speed": 135, "Code": null, "Date": "2021-04-08" }, "Driver": { "Points": 15, "State": null, "City": null, "Age": null, "Name": null }, "Fine": { "Points": 7, "Amount": 1000 }, "Should the driver be suspended?": "Yes" } Evaluate a specified decision service within a specified DMN model in a container POST 
/server/containers/{containerId}/dmn/models/{modelname}/{decisionServiceName} For this endpoint, the request body must contain all the requirements of the decision service. The response is the resulting DMN context of the decision service, including the decision values, the original input values, and all other parametric DRG components in serialized form. For example, a business knowledge model is available in string-serialized form in its signature. If the decision service is composed of a single-output decision, the response is the resulting value of that specific decision. This behavior provides an equivalent value at the API level of a specification feature when invoking the decision service in the model itself. As a result, you can, for example, interact with a DMN decision service from web applications. Figure 21.1. Example TrafficViolationDecisionService decision service with single-output decision Figure 21.2. Example TrafficViolationDecisionService decision service with multiple-output decision Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/TrafficViolationDecisionService Example POST request body with input data { "Driver": { "Points": 2 }, "Violation": { "Type": "speed", "Actual Speed": 120, "Speed Limit": 100 } } Example curl request Example response for single-output decision (JSON) "No" Example response for multiple-output decision (JSON) { "Violation": { "Type": "speed", "Speed Limit": 100, "Actual Speed": 120 }, "Driver": { "Points": 2 }, "Fine": { "Points": 3, "Amount": 500 }, "Should the driver be suspended?": "No" } Evaluate a specified DMN model in a specified container and return a DMNResult response POST /server/containers/{containerId}/dmn/models/{modelname}/dmnresult Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/dmnresult Example POST request body with input data { "Driver": { "Points": 2 }, "Violation": { "Type": "speed", "Actual Speed": 120, "Speed Limit": 100 } } Example curl request Example response (JSON) { "namespace": "https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF", "modelName": "Traffic Violation", "dmnContext": { "Violation": { "Type": "speed", "Speed Limit": 100, "Actual Speed": 120, "Code": null, "Date": null }, "Driver": { "Points": 2, "State": null, "City": null, "Age": null, "Name": null }, "Fine": { "Points": 3, "Amount": 500 }, "Should the driver be suspended?": "No" }, "messages": [], "decisionResults": [ { "decisionId": "_4055D956-1C47-479C-B3F4-BAEB61F1C929", "decisionName": "Fine", "result": { "Points": 3, "Amount": 500 }, "messages": [], "evaluationStatus": "SUCCEEDED" }, { "decisionId": "_8A408366-D8E9-4626-ABF3-5F69AA01F880", "decisionName": "Should the driver be suspended?", "result": "No", "messages": [], "evaluationStatus": "SUCCEEDED" } ] } Evaluate a specified decision service within a DMN model in a specified container and return a DMNResult response POST /server/containers/{containerId}/dmn/models/{modelname}/{decisionServiceName}/dmnresult Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/TrafficViolationDecisionService/dmnresult Example POST request body with input data { "Driver": { "Points": 2 }, "Violation": { "Type": "speed", "Actual Speed": 120, "Speed Limit": 100 } } Example curl request Example response (JSON) { "namespace": 
"https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF", "modelName": "Traffic Violation", "dmnContext": { "Violation": { "Type": "speed", "Speed Limit": 100, "Actual Speed": 120, "Code": null, "Date": null }, "Driver": { "Points": 2, "State": null, "City": null, "Age": null, "Name": null }, "Should the driver be suspended?": "No" }, "messages": [], "decisionResults": [ { "decisionId": "_8A408366-D8E9-4626-ABF3-5F69AA01F880", "decisionName": "Should the driver be suspended?", "result": "No", "messages": [], "evaluationStatus": "SUCCEEDED" } ] } | [
"./bin/jboss-cli.sh --commands=\"embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['kie-server'])\"",
"{ \"release-id\": { \"artifact-id\": \"Project1\", \"group-id\": \"com.redhat\", \"version\": \"1.1\" } }",
"curl -u 'baAdmin:password@1' -H \"Accept: application/json\" -X GET \"http://localhost:8080/kie-server/services/rest/server/containers\"",
"{ \"type\": \"SUCCESS\", \"msg\": \"List of created containers\", \"result\": { \"kie-containers\": { \"kie-container\": [ { \"container-id\": \"itorders_1.0.0-SNAPSHOT\", \"release-id\": { \"group-id\": \"itorders\", \"artifact-id\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\" }, \"resolved-release-id\": { \"group-id\": \"itorders\", \"artifact-id\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\" }, \"status\": \"STARTED\", \"scanner\": { \"status\": \"DISPOSED\", \"poll-interval\": null }, \"config-items\": [], \"container-alias\": \"itorders\" } ] } } }",
"{ \"config-items\": [ { \"itemName\": \"RuntimeStrategy\", \"itemValue\": \"SINGLETON\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"MergeMode\", \"itemValue\": \"MERGE_COLLECTIONS\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"KBase\", \"itemValue\": \"\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"KSession\", \"itemValue\": \"\", \"itemType\": \"java.lang.String\" } ], \"release-id\": { \"group-id\": \"itorders\", \"artifact-id\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\" }, \"scanner\": { \"poll-interval\": \"5000\", \"status\": \"STARTED\" } }",
"curl -u 'baAdmin:password@1' -H \"Accept: application/json\" -H \"Content-Type: application/json\" -X PUT \"http://localhost:8080/kie-server/services/rest/server/containers/MyContainer\" -d \"{ \\\"config-items\\\": [ { \\\"itemName\\\": \\\"RuntimeStrategy\\\", \\\"itemValue\\\": \\\"SINGLETON\\\", \\\"itemType\\\": \\\"java.lang.String\\\" }, { \\\"itemName\\\": \\\"MergeMode\\\", \\\"itemValue\\\": \\\"MERGE_COLLECTIONS\\\", \\\"itemType\\\": \\\"java.lang.String\\\" }, { \\\"itemName\\\": \\\"KBase\\\", \\\"itemValue\\\": \\\"\\\", \\\"itemType\\\": \\\"java.lang.String\\\" }, { \\\"itemName\\\": \\\"KSession\\\", \\\"itemValue\\\": \\\"\\\", \\\"itemType\\\": \\\"java.lang.String\\\" } ], \\\"release-id\\\": { \\\"group-id\\\": \\\"itorders\\\", \\\"artifact-id\\\": \\\"itorders\\\", \\\"version\\\": \\\"1.0.0-SNAPSHOT\\\" }, \\\"scanner\\\": { \\\"poll-interval\\\": \\\"5000\\\", \\\"status\\\": \\\"STARTED\\\" }}\"",
"curl -u 'baAdmin:password@1' -H \"Accept: application/json\" -H \"Content-Type: application/json\" -X PUT \"http://localhost:8080/kie-server/services/rest/server/containers/MyContainer\" -d @my-container-configs.json",
"{ \"type\": \"SUCCESS\", \"msg\": \"Container MyContainer successfully deployed with module itorders:itorders:1.0.0-SNAPSHOT.\", \"result\": { \"kie-container\": { \"container-id\": \"MyContainer\", \"release-id\": { \"group-id\": \"itorders\", \"artifact-id\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\" }, \"resolved-release-id\": { \"group-id\": \"itorders\", \"artifact-id\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\" }, \"status\": \"STARTED\", \"scanner\": { \"status\": \"STARTED\", \"poll-interval\": 5000 }, \"config-items\": [], \"messages\": [ { \"severity\": \"INFO\", \"timestamp\": { \"java.util.Date\": 1540584717937 }, \"content\": [ \"Container MyContainer successfully created with module itorders:itorders:1.0.0-SNAPSHOT.\" ] } ], \"container-alias\": null } } }",
"{ \"type\": \"SUCCESS\", \"msg\": \"List of created containers\", \"result\": { \"kie-containers\": { \"kie-container\": [ { \"container-id\": \"itorders_1.0.0-SNAPSHOT\", \"release-id\": { \"group-id\": \"itorders\", \"artifact-id\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\" }, \"resolved-release-id\": { \"group-id\": \"itorders\", \"artifact-id\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\" }, \"status\": \"STARTED\", \"scanner\": { \"status\": \"DISPOSED\", \"poll-interval\": null }, \"config-items\": [], \"container-alias\": \"itorders\" } ] } } }",
"{ \"config-items\": [ { \"itemName\": \"RuntimeStrategy\", \"itemValue\": \"SINGLETON\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"MergeMode\", \"itemValue\": \"MERGE_COLLECTIONS\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"KBase\", \"itemValue\": \"\", \"itemType\": \"java.lang.String\" }, { \"itemName\": \"KSession\", \"itemValue\": \"\", \"itemType\": \"java.lang.String\" } ], \"release-id\": { \"group-id\": \"itorders\", \"artifact-id\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\" }, \"scanner\": { \"poll-interval\": \"5000\", \"status\": \"STARTED\" } }",
"{ \"type\": \"SUCCESS\", \"msg\": \"Container MyContainer successfully deployed with module itorders:itorders:1.0.0-SNAPSHOT.\", \"result\": { \"kie-container\": { \"container-id\": \"MyContainer\", \"release-id\": { \"group-id\": \"itorders\", \"artifact-id\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\" }, \"resolved-release-id\": { \"group-id\": \"itorders\", \"artifact-id\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\" }, \"status\": \"STARTED\", \"scanner\": { \"status\": \"STARTED\", \"poll-interval\": 5000 }, \"config-items\": [], \"messages\": [ { \"severity\": \"INFO\", \"timestamp\": { \"java.util.Date\": 1540584717937 }, \"content\": [ \"Container MyContainer successfully created with module itorders:itorders:1.0.0-SNAPSHOT.\" ] } ], \"container-alias\": null } } }",
"curl -u wbadmin:wbadmin -X GET \"http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic%20Violation\" -H \"accept: application/xml\"",
"<?xml version='1.0' encoding='UTF-8'?> <dmn:definitions xmlns:dmn=\"http://www.omg.org/spec/DMN/20180521/MODEL/\" xmlns=\"https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF\" xmlns:di=\"http://www.omg.org/spec/DMN/20180521/DI/\" xmlns:kie=\"http://www.drools.org/kie/dmn/1.2\" xmlns:feel=\"http://www.omg.org/spec/DMN/20180521/FEEL/\" xmlns:dmndi=\"http://www.omg.org/spec/DMN/20180521/DMNDI/\" xmlns:dc=\"http://www.omg.org/spec/DMN/20180521/DC/\" id=\"_1C792953-80DB-4B32-99EB-25FBE32BAF9E\" name=\"Traffic Violation\" expressionLanguage=\"http://www.omg.org/spec/DMN/20180521/FEEL/\" typeLanguage=\"http://www.omg.org/spec/DMN/20180521/FEEL/\" namespace=\"https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF\"> <dmn:extensionElements/> <dmn:itemDefinition id=\"_63824D3F-9173-446D-A940-6A7F0FA056BB\" name=\"tDriver\" isCollection=\"false\"> <dmn:itemComponent id=\"_9DAB5DAA-3B44-4F6D-87F2-95125FB2FEE4\" name=\"Name\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_856BA8FA-EF7B-4DF9-A1EE-E28263CE9955\" name=\"Age\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_FDC2CE03-D465-47C2-A311-98944E8CC23F\" name=\"State\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_D6FD34C4-00DC-4C79-B1BF-BBCF6FC9B6D7\" name=\"City\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_7110FE7E-1A38-4C39-B0EB-AEEF06BA37F4\" name=\"Points\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:itemDefinition id=\"_40731093-0642-4588-9183-1660FC55053B\" name=\"tViolation\" isCollection=\"false\"> <dmn:itemComponent id=\"_39E88D9F-AE53-47AD-B3DE-8AB38D4F50B3\" name=\"Code\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_1648EA0A-2463-4B54-A12A-D743A3E3EE7B\" name=\"Date\" isCollection=\"false\"> <dmn:typeRef>date</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_9F129EAA-4E71-4D99-B6D0-84EEC3AC43CC\" name=\"Type\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> <dmn:allowedValues kie:constraintType=\"enumeration\" id=\"_626A8F9C-9DD1-44E0-9568-0F6F8F8BA228\"> <dmn:text>\"speed\", \"parking\", \"driving under the influence\"</dmn:text> </dmn:allowedValues> </dmn:itemComponent> <dmn:itemComponent id=\"_DDD10D6E-BD38-4C79-9E2F-8155E3A4B438\" name=\"Speed Limit\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_229F80E4-2892-494C-B70D-683ABF2345F6\" name=\"Actual Speed\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:itemDefinition id=\"_2D4F30EE-21A6-4A78-A524-A5C238D433AE\" name=\"tFine\" isCollection=\"false\"> <dmn:itemComponent id=\"_B9F70BC7-1995-4F51-B949-1AB65538B405\" name=\"Amount\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_F49085D6-8F08-4463-9A1A-EF6B57635DBD\" name=\"Points\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:inputData id=\"_1929CBD5-40E0-442D-B909-49CEDE0101DC\" name=\"Violation\"> <dmn:variable id=\"_C16CF9B1-5FAB-48A0-95E0-5FCD661E0406\" name=\"Violation\" typeRef=\"tViolation\"/> </dmn:inputData> <dmn:decision id=\"_4055D956-1C47-479C-B3F4-BAEB61F1C929\" name=\"Fine\"> <dmn:variable 
id=\"_8C1EAC83-F251-4D94-8A9E-B03ACF6849CD\" name=\"Fine\" typeRef=\"tFine\"/> <dmn:informationRequirement id=\"_800A3BBB-90A3-4D9D-BA5E-A311DED0134F\"> <dmn:requiredInput href=\"#_1929CBD5-40E0-442D-B909-49CEDE0101DC\"/> </dmn:informationRequirement> </dmn:decision> <dmn:inputData id=\"_1F9350D7-146D-46F1-85D8-15B5B68AF22A\" name=\"Driver\"> <dmn:variable id=\"_A80F16DF-0DB4-43A2-B041-32900B1A3F3D\" name=\"Driver\" typeRef=\"tDriver\"/> </dmn:inputData> <dmn:decision id=\"_8A408366-D8E9-4626-ABF3-5F69AA01F880\" name=\"Should the driver be suspended?\"> <dmn:question>Should the driver be suspended due to points on his license?</dmn:question> <dmn:allowedAnswers>\"Yes\", \"No\"</dmn:allowedAnswers> <dmn:variable id=\"_40387B66-5D00-48C8-BB90-E83EE3332C72\" name=\"Should the driver be suspended?\" typeRef=\"string\"/> <dmn:informationRequirement id=\"_982211B1-5246-49CD-BE85-3211F71253CF\"> <dmn:requiredInput href=\"#_1F9350D7-146D-46F1-85D8-15B5B68AF22A\"/> </dmn:informationRequirement> <dmn:informationRequirement id=\"_AEC4AA5F-50C3-4FED-A0C2-261F90290731\"> <dmn:requiredDecision href=\"#_4055D956-1C47-479C-B3F4-BAEB61F1C929\"/> </dmn:informationRequirement> </dmn:decision> <dmndi:DMNDI> <dmndi:DMNDiagram> <di:extension/> <dmndi:DMNShape id=\"dmnshape-_1929CBD5-40E0-442D-B909-49CEDE0101DC\" dmnElementRef=\"_1929CBD5-40E0-442D-B909-49CEDE0101DC\" isCollapsed=\"false\"> <dmndi:DMNStyle> <dmndi:FillColor red=\"255\" green=\"255\" blue=\"255\"/> <dmndi:StrokeColor red=\"0\" green=\"0\" blue=\"0\"/> <dmndi:FontColor red=\"0\" green=\"0\" blue=\"0\"/> </dmndi:DMNStyle> <dc:Bounds x=\"708\" y=\"350\" width=\"100\" height=\"50\"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id=\"dmnshape-_4055D956-1C47-479C-B3F4-BAEB61F1C929\" dmnElementRef=\"_4055D956-1C47-479C-B3F4-BAEB61F1C929\" isCollapsed=\"false\"> <dmndi:DMNStyle> <dmndi:FillColor red=\"255\" green=\"255\" blue=\"255\"/> <dmndi:StrokeColor red=\"0\" green=\"0\" blue=\"0\"/> <dmndi:FontColor red=\"0\" green=\"0\" blue=\"0\"/> </dmndi:DMNStyle> <dc:Bounds x=\"709\" y=\"210\" width=\"100\" height=\"50\"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id=\"dmnshape-_1F9350D7-146D-46F1-85D8-15B5B68AF22A\" dmnElementRef=\"_1F9350D7-146D-46F1-85D8-15B5B68AF22A\" isCollapsed=\"false\"> <dmndi:DMNStyle> <dmndi:FillColor red=\"255\" green=\"255\" blue=\"255\"/> <dmndi:StrokeColor red=\"0\" green=\"0\" blue=\"0\"/> <dmndi:FontColor red=\"0\" green=\"0\" blue=\"0\"/> </dmndi:DMNStyle> <dc:Bounds x=\"369\" y=\"344\" width=\"100\" height=\"50\"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id=\"dmnshape-_8A408366-D8E9-4626-ABF3-5F69AA01F880\" dmnElementRef=\"_8A408366-D8E9-4626-ABF3-5F69AA01F880\" isCollapsed=\"false\"> <dmndi:DMNStyle> <dmndi:FillColor red=\"255\" green=\"255\" blue=\"255\"/> <dmndi:StrokeColor red=\"0\" green=\"0\" blue=\"0\"/> <dmndi:FontColor red=\"0\" green=\"0\" blue=\"0\"/> </dmndi:DMNStyle> <dc:Bounds x=\"534\" y=\"83\" width=\"133\" height=\"63\"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNEdge id=\"dmnedge-_800A3BBB-90A3-4D9D-BA5E-A311DED0134F\" dmnElementRef=\"_800A3BBB-90A3-4D9D-BA5E-A311DED0134F\"> <di:waypoint x=\"758\" y=\"375\"/> <di:waypoint x=\"759\" y=\"235\"/> </dmndi:DMNEdge> <dmndi:DMNEdge id=\"dmnedge-_982211B1-5246-49CD-BE85-3211F71253CF\" dmnElementRef=\"_982211B1-5246-49CD-BE85-3211F71253CF\"> <di:waypoint x=\"419\" y=\"369\"/> <di:waypoint x=\"600.5\" y=\"114.5\"/> </dmndi:DMNEdge> <dmndi:DMNEdge id=\"dmnedge-_AEC4AA5F-50C3-4FED-A0C2-261F90290731\" 
dmnElementRef=\"_AEC4AA5F-50C3-4FED-A0C2-261F90290731\"> <di:waypoint x=\"759\" y=\"235\"/> <di:waypoint x=\"600.5\" y=\"114.5\"/> </dmndi:DMNEdge> </dmndi:DMNDiagram> </dmndi:DMNDI>",
"curl -u wbadmin:wbadmin-X POST \"http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation\" -H \"accept: application/json\" -H \"Content-Type: application/json\" -d \"{\\\"Driver\\\":{\\\"Points\\\":15},\\\"Violation\\\":{\\\"Date\\\":\\\"2021-04-08\\\",\\\"Type\\\":\\\"speed\\\",\\\"Actual Speed\\\":135,\\\"Speed Limit\\\":100}}\"",
"{ \"Driver\": { \"Points\": 15 }, \"Violation\": { \"Date\": \"2021-04-08\", \"Type\": \"speed\", \"Actual Speed\": 135, \"Speed Limit\": 100 } }",
"{ \"Violation\": { \"Type\": \"speed\", \"Speed Limit\": 100, \"Actual Speed\": 135, \"Code\": null, \"Date\": \"2021-04-08\" }, \"Driver\": { \"Points\": 15, \"State\": null, \"City\": null, \"Age\": null, \"Name\": null }, \"Fine\": { \"Points\": 7, \"Amount\": 1000 }, \"Should the driver be suspended?\": \"Yes\" }",
"{ \"Driver\": { \"Points\": 2 }, \"Violation\": { \"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100 } }",
"curl -X POST http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/TrafficViolationDecisionService -H 'content-type: application/json' -H 'accept: application/json' -d '{\"Driver\": {\"Points\": 2}, \"Violation\": {\"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100}}'",
"\"No\"",
"{ \"Violation\": { \"Type\": \"speed\", \"Speed Limit\": 100, \"Actual Speed\": 120 }, \"Driver\": { \"Points\": 2 }, \"Fine\": { \"Points\": 3, \"Amount\": 500 }, \"Should the driver be suspended?\": \"No\" }",
"{ \"Driver\": { \"Points\": 2 }, \"Violation\": { \"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100 } }",
"curl -X POST http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/dmnresult -H 'content-type: application/json' -H 'accept: application/json' -d '{\"Driver\": {\"Points\": 2}, \"Violation\": {\"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100}}'",
"{ \"namespace\": \"https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF\", \"modelName\": \"Traffic Violation\", \"dmnContext\": { \"Violation\": { \"Type\": \"speed\", \"Speed Limit\": 100, \"Actual Speed\": 120, \"Code\": null, \"Date\": null }, \"Driver\": { \"Points\": 2, \"State\": null, \"City\": null, \"Age\": null, \"Name\": null }, \"Fine\": { \"Points\": 3, \"Amount\": 500 }, \"Should the driver be suspended?\": \"No\" }, \"messages\": [], \"decisionResults\": [ { \"decisionId\": \"_4055D956-1C47-479C-B3F4-BAEB61F1C929\", \"decisionName\": \"Fine\", \"result\": { \"Points\": 3, \"Amount\": 500 }, \"messages\": [], \"evaluationStatus\": \"SUCCEEDED\" }, { \"decisionId\": \"_8A408366-D8E9-4626-ABF3-5F69AA01F880\", \"decisionName\": \"Should the driver be suspended?\", \"result\": \"No\", \"messages\": [], \"evaluationStatus\": \"SUCCEEDED\" } ] }",
"{ \"Driver\": { \"Points\": 2 }, \"Violation\": { \"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100 } }",
"curl -X POST http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/TrafficViolationDecisionService/dmnresult -H 'content-type: application/json' -H 'accept: application/json' -d '{\"Driver\": {\"Points\": 2}, \"Violation\": {\"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100}}'",
"{ \"namespace\": \"https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF\", \"modelName\": \"Traffic Violation\", \"dmnContext\": { \"Violation\": { \"Type\": \"speed\", \"Speed Limit\": 100, \"Actual Speed\": 120, \"Code\": null, \"Date\": null }, \"Driver\": { \"Points\": 2, \"State\": null, \"City\": null, \"Age\": null, \"Name\": null }, \"Should the driver be suspended?\": \"No\" }, \"messages\": [], \"decisionResults\": [ { \"decisionId\": \"_8A408366-D8E9-4626-ABF3-5F69AA01F880\", \"decisionName\": \"Should the driver be suspended?\", \"result\": \"No\", \"messages\": [], \"evaluationStatus\": \"SUCCEEDED\" } ] }"
]
| https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/kie-server-rest-api-con_kie-apis |
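The curl examples in the list above pass the model name "Traffic Violation" with a literal space in the URL path. Some HTTP clients and servers reject unencoded spaces, so percent-encoding the space is a safer way to issue the same request. The sketch below is illustrative only and reuses the placeholder host, container name, credentials, and payload from the examples above.

# Same DMN evaluation request, with the space in the model name percent-encoded as %20
curl -u wbadmin:wbadmin -X POST \
  "http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic%20Violation" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{"Driver": {"Points": 15}, "Violation": {"Date": "2021-04-08", "Type": "speed", "Actual Speed": 135, "Speed Limit": 100}}'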
Part I. Using the ipa-server Container (TECHNOLOGY PREVIEW) | Part I. Using the ipa-server Container (TECHNOLOGY PREVIEW) This part covers how to deploy an Identity Management server and replica in a container, how to migrate a server from a container to a host system, and finally, how to uninstall server and replica containers. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/using_containerized_identity_management_services/using-the-ipa-server-container |
Updating clusters | Updating clusters OpenShift Container Platform 4.18 Updating OpenShift Container Platform clusters Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/updating_clusters/index |
Using the AMQ Spring Boot Starter | Using the AMQ Spring Boot Starter Red Hat AMQ 2021.Q3 For Use with AMQ Clients 2.10 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_spring_boot_starter/index |
function::set_kernel_long | function::set_kernel_long Name function::set_kernel_long - Writes a long value to kernel memory Synopsis Arguments addr The kernel address to write the long to val The long which is to be written Description Writes the long value to a given kernel memory address. Reports an error when writing to the given address fails. Requires the use of guru mode (-g). | [
"set_kernel_long(addr:long,val:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-set-kernel-long |
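A minimal usage sketch for set_kernel_long follows. The probe point and the command-line arguments are illustrative assumptions and are not part of the tapset reference above; the address and value must be supplied by the caller, and the script must run in guru mode (-g) as the description notes.

# set_long_example.stp -- illustrative sketch; run as: stap -g set_long_example.stp 0x<kernel_address> <value>
probe begin {
  addr = strtol(@1, 16)       # kernel address supplied as the first argument (hexadecimal)
  val  = strtol(@2, 10)       # long value supplied as the second argument (decimal)
  set_kernel_long(addr, val)  # reports an error if the write to kernel memory fails
  printf("wrote %d to address %p\n", val, addr)
  exit()
}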
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Make sure you are logged in to the Jira website. Provide feedback by clicking on this link . Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. If you want to be notified about future updates, please make sure you are assigned as Reporter . Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/feedback_automating-sap-hana-scale-out-v9 |
Chapter 2. Clair concepts | Chapter 2. Clair concepts The following sections provide a conceptual overview of how Clair works. 2.1. Clair in practice A Clair analysis is broken down into three distinct parts: indexing, matching, and notification. 2.1.1. Indexing Clair's indexer service plays a crucial role in understanding the makeup of a container image. In Clair, container image representations are called "manifests." Manifests are used to comprehend the contents of the image's layers. To streamline this process, Clair takes advantage of the fact that Open Container Initiative (OCI) manifests and layers are designed for content addressing, reducing repetitive tasks. During indexing, a manifest that represents a container image is taken and broken down into its essential components. The indexer's job is to uncover the image's contained packages, its origin distribution, and the package repositories it relies on. This valuable information is then recorded and stored within Clair's database. The insights gathered during indexing serve as the basis for generating a comprehensive vulnerability report. This report can be seamlessly transferred to a matcher node for further analysis and action, helping users make informed decisions about their container images' security. The IndexReport is stored in Clair's database. It can be fed to a matcher node to compute the vulnerability report. 2.1.2. Matching With Clair, a matcher node is responsible for matching vulnerabilities to a provided index report. Matchers are responsible for keeping the database of vulnerabilities up to date. Matchers run a set of updaters, which periodically probe their data sources for new content. New vulnerabilities are stored in the database when they are discovered. The matcher API is designed to be used often and to always provide the most recent VulnerabilityReport when queried. The VulnerabilityReport summarizes both a manifest's content and any vulnerabilities affecting the content. 2.1.3. Notifier service Clair uses a notifier service that keeps track of new security database updates and informs users if new or removed vulnerabilities affect an indexed manifest. When the notifier becomes aware of new vulnerabilities affecting a previously indexed manifest, it uses the configured methods in your config.yaml file to issue notifications about the new changes. Returned notifications express the most severe vulnerability discovered because of the change. This avoids creating excessive notifications for the same security database update. When a user receives a notification, they can issue a new request against the matcher to receive an up-to-date vulnerability report. You can subscribe to notifications through the following mechanisms: Webhook delivery AMQP delivery STOMP delivery Configuring the notifier is done through the Clair YAML configuration file. 2.2. Clair authentication In its current iteration, Clair v4 (Clair) handles authentication internally. Note Previous versions of Clair used JWT Proxy to gate authentication. Authentication is configured by specifying configuration objects underneath the auth key of the configuration. Multiple authentication configurations might be present, but they are used preferentially in the following order: PSK.
With this authentication configuration, Clair implements JWT-based authentication using a pre-shared key. Configuration. For example: auth: psk: key: >- MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg== iss: 'issuer' In this configuration the auth field requires two parameters: iss , which is the issuer to validate all incoming requests, and key , which is a base64 coded symmetric key for validating the requests. 2.3. Clair updaters Clair uses Go packages called updaters that contain the logic of fetching and parsing different vulnerability databases. Updaters are usually paired with a matcher to interpret if, and how, any vulnerability is related to a package. Administrators might want to update the vulnerability database less frequently, or not import vulnerabilities from databases that they know will not be used. 2.4. Information about Clair updaters The following table provides details about each Clair updater, including the configuration parameter, a brief description, relevant URLs, and the associated components that they interact with. This list is not exhaustive, and some servers might issue redirects, while certain request URLs are dynamically constructed to ensure accurate vulnerability data retrieval. For Clair, each updater is responsible for fetching and parsing vulnerability data related to a specific package type or distribution. For example, the Debian updater focuses on Debian-based Linux distributions, while the AWS updater focuses on vulnerabilities specific to Amazon Web Services' Linux distributions. Understanding the package type is important for vulnerability management because different package types might have unique security concerns and require specific updates and patches. Note If you are using a proxy server in your environment with Clair's updater URLs, you must identify which URL needs to be added to the proxy allowlist to ensure that Clair can access them unimpeded. Use the following table to add updater URLs to your proxy allowlist. Table 2.1. Clair updater information Updater Description URLs Component alpine The Alpine updater is responsible for fetching and parsing vulnerability data related to packages in Alpine Linux distributions. https://secdb.alpinelinux.org/ Alpine Linux SecDB database aws The AWS updater is focused on AWS Linux-based packages, ensuring that vulnerability information specific to Amazon Web Services' custom Linux distributions is kept up-to-date. http://repo.us-west-2.amazonaws.com/2018.03/updates/x86_64/mirror.list https://cdn.amazonlinux.com/2/core/latest/x86_64/mirror.list https://cdn.amazonlinux.com/al2023/core/mirrors/latest/x86_64/mirror.list Amazon Web Services (AWS) UpdateInfo debian The Debian updater is essential for tracking vulnerabilities in packages associated with Debian-based Linux distributions. https://deb.debian.org/ https://security-tracker.debian.org/tracker/data/json Debian Security Tracker clair.cvss The Clair Common Vulnerability Scoring System (CVSS) updater focuses on maintaining data about vulnerabilities and their associated CVSS scores. This is not tied to a specific package type but rather to the severity and risk assessment of vulnerabilities in general. https://nvd.nist.gov/feeds/json/cve/1.1/ National Vulnerability Database (NVD) feed for Common Vulnerabilities and Exposures (CVE) data in JSON format oracle The Oracle updater is dedicated to Oracle Linux packages, maintaining data on vulnerabilities that affect Oracle Linux systems. 
https://linux.oracle.com/security/oval/com.oracle.elsa-*.xml.bz2 Oracle Oval database photon The Photon updater deals with packages in VMware Photon OS. https://packages.vmware.com/photon/photon_oval_definitions/ VMware Photon OS oval definitions rhel The Red Hat Enterprise Linux (RHEL) updater is responsible for maintaining vulnerability data for packages in Red Hat's Enterprise Linux distribution. https://access.redhat.com/security/cve/ https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST Red Hat Enterprise Linux (RHEL) Oval database rhcc The Red Hat Container Catalog (RHCC) updater is connected to Red Hat's container images. This updater ensures that vulnerability information related to Red Hat's containerized software is kept current. https://access.redhat.com/security/data/metrics/cvemap.xml Resource Handler Configuration Controller (RHCC) database suse The SUSE updater manages vulnerability information for packages in the SUSE Linux distribution family, including openSUSE, SUSE Enterprise Linux, and others. https://support.novell.com/security/oval/ SUSE Oval database ubuntu The Ubuntu updater is dedicated to tracking vulnerabilities in packages associated with Ubuntu-based Linux distributions. Ubuntu is a popular distribution in the Linux ecosystem. https://security-metadata.canonical.com/oval/com.ubuntu.*.cve.oval.xml https://api.launchpad.net/1.0/ Ubuntu Oval Database osv The Open Source Vulnerability (OSV) updater specializes in tracking vulnerabilities within open source software components. OSV is a critical resource that provides detailed information about security issues found in various open source projects. https://osv-vulnerabilities.storage.googleapis.com/ Open Source Vulnerabilities database 2.5. Configuring updaters Updaters can be configured by the updaters.sets key in your clair-config.yaml file. Important If the sets field is not populated, it defaults to using all sets. In using all sets, Clair tries to reach the URL or URLs of each updater. If you are using a proxy environment, you must add these URLs to your proxy allowlist. If updaters are being run automatically within the matcher process, which is the default setting, the period for running updaters is configured under the matcher's configuration field. 2.5.1. Selecting specific updater sets Use the following references to select one, or multiple, updaters for your Red Hat Quay deployment. Configuring Clair for multiple updaters Multiple specific updaters #... updaters: sets: - alpine - aws - osv #... Configuring Clair for Alpine Alpine config.yaml example #... updaters: sets: - alpine #... Configuring Clair for AWS AWS config.yaml example #... updaters: sets: - aws #... Configuring Clair for Debian Debian config.yaml example #... updaters: sets: - debian #... Configuring Clair for Clair CVSS Clair CVSS config.yaml example #... updaters: sets: - clair.cvss #... Configuring Clair for Oracle Oracle config.yaml example #... updaters: sets: - oracle #... Configuring Clair for Photon Photon config.yaml example #... updaters: sets: - photon #... Configuring Clair for SUSE SUSE config.yaml example #... updaters: sets: - suse #... Configuring Clair for Ubuntu Ubuntu config.yaml example #... updaters: sets: - ubuntu #... Configuring Clair for OSV OSV config.yaml example #... updaters: sets: - osv #... 2.5.2. Selecting updater sets for full Red Hat Enterprise Linux (RHEL) coverage For full coverage of vulnerabilities in Red Hat Enterprise Linux (RHEL), you must use the following updater sets: rhel . 
This updater ensures that you have the latest information on the vulnerabilities that affect RHEL. rhcc . This updater keeps track of vulnerabilities related to Red Hat's container images. clair.cvss . This updater offers a comprehensive view of the severity and risk assessment of vulnerabilities by providing Common Vulnerabilities and Exposures (CVE) scores. osv . This updater focuses on tracking vulnerabilities in open-source software components. This updater is recommended due to how common the use of Java and Go are in RHEL products. RHEL updaters example #... updaters: sets: - rhel - rhcc - clair.cvss - osv #... 2.5.3. Advanced updater configuration In some cases, users might want to configure updaters for specific behavior, for example, if you want to allowlist specific ecosystems for the Open Source Vulnerabilities (OSV) updaters. Advanced updater configuration might be useful for proxy deployments or air-gapped deployments. Configuration for specific updaters in these scenarios can be passed by putting a key underneath the config key of the updaters object. Users should examine their Clair logs to double-check names. The following YAML snippets detail the various settings available to some Clair updaters. Important For most users, advanced updater configuration is unnecessary. Configuring the alpine updater #... updaters: sets: - alpine config: alpine: url: https://secdb.alpinelinux.org/ #... Configuring the debian updater #... updaters: sets: - debian config: debian: mirror_url: https://deb.debian.org/ json_url: https://security-tracker.debian.org/tracker/data/json #... Configuring the clair.cvss updater #... updaters: config: clair.cvss: url: https://nvd.nist.gov/feeds/json/cve/1.1/ #... Configuring the oracle updater #... updaters: sets: - oracle config: oracle-2023-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2023.xml.bz2 oracle-2022-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2022.xml.bz2 #... Configuring the photon updater #... updaters: sets: - photon config: photon: url: https://packages.vmware.com/photon/photon_oval_definitions/ #... Configuring the rhel updater #... updaters: sets: - rhel config: rhel: url: https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST ignore_unpatched: true 1 #... 1 Boolean. Whether to include information about vulnerabilities that do not have corresponding patches or updates available. Configuring the rhcc updater #... updaters: sets: - rhcc config: rhcc: url: https://access.redhat.com/security/data/metrics/cvemap.xml #... Configuring the suse updater #... updaters: sets: - suse config: suse: url: https://support.novell.com/security/oval/ #... Configuring the ubuntu updater #... updaters: config: ubuntu: url: https://api.launchpad.net/1.0/ name: ubuntu force: 1 - name: focal 2 version: 20.04 3 #... 1 Used to force the inclusion of specific distribution and version details in the resulting UpdaterSet, regardless of their status in the API response. Useful when you want to ensure that particular distributions and versions are consistently included in your updater configuration. 2 Specifies the distribution name that you want to force to be included in the UpdaterSet. 3 Specifies the version of the distribution you want to force into the UpdaterSet. Configuring the osv updater #... updaters: sets: - osv config: osv: url: https://osv-vulnerabilities.storage.googleapis.com/ allowlist: 1 - npm - pypi #... 1 The list of ecosystems to allow.
When left unset, all ecosystems are allowed. Must be lowercase. For a list of supported ecosystems, see the documentation for defined ecosystems . 2.5.4. Disabling the Clair Updater component In some scenarios, users might want to disable the Clair updater component. Disabling updaters is required when running Red Hat Quay in a disconnected environment. In the following example, Clair updaters are disabled: #... matcher: disable_updaters: true #... 2.6. CVE ratings from the National Vulnerability Database As of Clair v4.2, Common Vulnerability Scoring System (CVSS) enrichment data is now viewable in the Red Hat Quay UI. Additionally, Clair v4.2 adds CVSS scores from the National Vulnerability Database for detected vulnerabilities. With this change, if the vulnerability has a CVSS score that is within 2 levels of the distribution score, the Red Hat Quay UI presents the distribution's score by default. For example: This differs from the previous interface, which would only display the following information: 2.7. Federal Information Processing Standard (FIPS) readiness and compliance The Federal Information Processing Standard (FIPS) developed by the National Institute of Standards and Technology (NIST) is widely regarded as the standard for securing and encrypting sensitive data, notably in highly regulated areas such as banking, healthcare, and the public sector. Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform support FIPS by providing a FIPS mode , in which the system only allows usage of specific FIPS-validated cryptographic modules like openssl . This ensures FIPS compliance. 2.7.1. Enabling FIPS compliance Use the following procedure to enable FIPS compliance on your Red Hat Quay deployment. Prerequisites If you are running a standalone deployment of Red Hat Quay, your Red Hat Enterprise Linux (RHEL) deployment is version 8 or later and FIPS-enabled. If you are deploying Red Hat Quay on OpenShift Container Platform, OpenShift Container Platform is version 4.10 or later. Your Red Hat Quay version is 3.5.0 or later. If you are using Red Hat Quay on OpenShift Container Platform on an IBM Power or IBM Z cluster: OpenShift Container Platform version 4.14 or later is required Red Hat Quay version 3.10 or later is required You have administrative privileges for your Red Hat Quay deployment. Procedure In your Red Hat Quay config.yaml file, set the FEATURE_FIPS configuration field to true . For example: --- FEATURE_FIPS: true --- With FEATURE_FIPS set to true , Red Hat Quay runs using FIPS-compliant hash functions. | [
"auth: psk: key: >- MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg== iss: 'issuer'",
"# updaters: sets: - alpine - aws - osv #",
"# updaters: sets: - alpine #",
"# updaters: sets: - aws #",
"# updaters: sets: - debian #",
"# updaters: sets: - clair.cvss #",
"# updaters: sets: - oracle #",
"# updaters: sets: - photon #",
"# updaters: sets: - suse #",
"# updaters: sets: - ubuntu #",
"# updaters: sets: - osv #",
"# updaters: sets: - rhel - rhcc - clair.cvss - osv #",
"# updaters: sets: - apline config: alpine: url: https://secdb.alpinelinux.org/ #",
"# updaters: sets: - debian config: debian: mirror_url: https://deb.debian.org/ json_url: https://security-tracker.debian.org/tracker/data/json #",
"# updaters: config: clair.cvss: url: https://nvd.nist.gov/feeds/json/cve/1.1/ #",
"# updaters: sets: - oracle config: oracle-2023-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2023.xml.bz2 oracle-2022-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2022.xml.bz2 #",
"# updaters: sets: - photon config: photon: url: https://packages.vmware.com/photon/photon_oval_definitions/ #",
"# updaters: sets: - rhel config: rhel: url: https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST ignore_unpatched: true 1 #",
"# updaters: sets: - rhcc config: rhcc: url: https://access.redhat.com/security/data/metrics/cvemap.xml #",
"# updaters: sets: - suse config: suse: url: https://support.novell.com/security/oval/ #",
"# updaters: config: ubuntu: url: https://api.launchpad.net/1.0/ name: ubuntu force: 1 - name: focal 2 version: 20.04 3 #",
"# updaters: sets: - osv config: osv: url: https://osv-vulnerabilities.storage.googleapis.com/ allowlist: 1 - npm - pypi #",
"# matcher: disable_updaters: true #",
"--- FEATURE_FIPS = true ---"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-concepts |
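The notifier section in this entry lists webhook, AMQP, and STOMP delivery, but none of the snippets above show a delivery configuration. The following is a minimal sketch of webhook delivery based on the upstream Clair v4 configuration format; the target and callback URLs and the interval values are illustrative assumptions and should be verified against the Clair configuration reference for your version.

#...
notifier:
  poll_interval: 6h         # how often to check for new security database updates (assumed value)
  delivery_interval: 1m     # how often to attempt delivery of pending notifications (assumed value)
  webhook:
    target: https://example.com/receive/notifications            # hypothetical receiver endpoint
    callback: http://clair-notifier/notifier/api/v1/notification  # hypothetical callback base URL
#...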
Chapter 5. ClusterOperator [config.openshift.io/v1] | Chapter 5. ClusterOperator [config.openshift.io/v1] Description ClusterOperator is the Custom Resource object which holds the current state of an operator. This object is used by operators to convey their state to the rest of the cluster. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds configuration that could apply to any operator. status object status holds the information about the state of an operator. It is consistent with status information across the Kubernetes ecosystem. 5.1.1. .spec Description spec holds configuration that could apply to any operator. Type object 5.1.2. .status Description status holds the information about the state of an operator. It is consistent with status information across the Kubernetes ecosystem. Type object Property Type Description conditions array conditions describes the state of the operator's managed and monitored components. conditions[] object ClusterOperatorStatusCondition represents the state of the operator's managed and monitored components. extension `` extension contains any additional status information specific to the operator which owns this status object. relatedObjects array relatedObjects is a list of objects that are "interesting" or related to this operator. Common uses are: 1. the detailed resource driving the operator 2. operator namespaces 3. operand namespaces relatedObjects[] object ObjectReference contains enough information to let you inspect or modify the referred object. versions array versions is a slice of operator and operand version tuples. Operators which manage multiple operands will have multiple operand entries in the array. Available operators must report the version of the operator itself with the name "operator". An operator reports a new "operator" version when it has rolled out the new version to all of its operands. versions[] object 5.1.3. .status.conditions Description conditions describes the state of the operator's managed and monitored components. Type array 5.1.4. .status.conditions[] Description ClusterOperatorStatusCondition represents the state of the operator's managed and monitored components. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string message provides additional information about the current condition. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines. 
reason string reason is the CamelCase reason for the condition's current status. status string status of the condition, one of True, False, Unknown. type string type specifies the aspect reported by this condition. 5.1.5. .status.relatedObjects Description relatedObjects is a list of objects that are "interesting" or related to this operator. Common uses are: 1. the detailed resource driving the operator 2. operator namespaces 3. operand namespaces Type array 5.1.6. .status.relatedObjects[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Required group name resource Property Type Description group string group of the referent. name string name of the referent. namespace string namespace of the referent. resource string resource of the referent. 5.1.7. .status.versions Description versions is a slice of operator and operand version tuples. Operators which manage multiple operands will have multiple operand entries in the array. Available operators must report the version of the operator itself with the name "operator". An operator reports a new "operator" version when it has rolled out the new version to all of its operands. Type array 5.1.8. .status.versions[] Description Type object Required name version Property Type Description name string name is the name of the particular operand this version is for. It usually matches container images, not operators. version string version indicates which version of a particular operand is currently being managed. It must always match the Available operand. If 1.0.0 is Available, then this must indicate 1.0.0 even if the operator is trying to rollout 1.1.0 5.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/clusteroperators DELETE : delete collection of ClusterOperator GET : list objects of kind ClusterOperator POST : create a ClusterOperator /apis/config.openshift.io/v1/clusteroperators/{name} DELETE : delete a ClusterOperator GET : read the specified ClusterOperator PATCH : partially update the specified ClusterOperator PUT : replace the specified ClusterOperator /apis/config.openshift.io/v1/clusteroperators/{name}/status GET : read status of the specified ClusterOperator PATCH : partially update status of the specified ClusterOperator PUT : replace status of the specified ClusterOperator 5.2.1. /apis/config.openshift.io/v1/clusteroperators HTTP method DELETE Description delete collection of ClusterOperator Table 5.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterOperator Table 5.2. HTTP responses HTTP code Reponse body 200 - OK ClusterOperatorList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterOperator Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body ClusterOperator schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK ClusterOperator schema 201 - Created ClusterOperator schema 202 - Accepted ClusterOperator schema 401 - Unauthorized Empty 5.2.2. /apis/config.openshift.io/v1/clusteroperators/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the ClusterOperator HTTP method DELETE Description delete a ClusterOperator Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterOperator Table 5.9. HTTP responses HTTP code Reponse body 200 - OK ClusterOperator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterOperator Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK ClusterOperator schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterOperator Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body ClusterOperator schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK ClusterOperator schema 201 - Created ClusterOperator schema 401 - Unauthorized Empty 5.2.3. /apis/config.openshift.io/v1/clusteroperators/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the ClusterOperator HTTP method GET Description read status of the specified ClusterOperator Table 5.16. HTTP responses HTTP code Reponse body 200 - OK ClusterOperator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterOperator Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK ClusterOperator schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterOperator Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body ClusterOperator schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK ClusterOperator schema 201 - Created ClusterOperator schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/config_apis/clusteroperator-config-openshift-io-v1 |
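As a practical complement to the endpoint reference above, ClusterOperator objects are usually inspected with the oc client rather than with raw HTTP calls. The commands below are a sketch; the operator name kube-apiserver is only an example.

# List all ClusterOperator objects with their Available/Progressing/Degraded summary
oc get clusteroperators

# Read the full status, including conditions and versions, for one operator (example name)
oc get clusteroperator kube-apiserver -o yaml

# Extract a single condition with jsonpath, for example the Available status
oc get clusteroperator kube-apiserver -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'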
Chapter 7. Running a custom Clair configuration with a managed Clair database | Chapter 7. Running a custom Clair configuration with a managed Clair database In some cases, users might want to run a custom Clair configuration with a managed Clair database. This is useful in the following scenarios: When a user wants to disable specific updater resources. When a user is running Red Hat Quay in a disconnected environment. For more information about running Clair in a disconnected environment, see Clair in disconnected environments . Note If you are running Red Hat Quay in a disconnected environment, the airgap parameter of your clair-config.yaml must be set to true . If you are running Red Hat Quay in a disconnected environment, you should disable all updater components. 7.1. Setting a Clair database to managed Use the following procedure to set your Clair database to managed. Procedure In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: true : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true 7.2. Configuring a custom Clair database with a managed Clair configuration Red Hat Quay on OpenShift Container Platform allows users to provide their own Clair database. Use the following procedure to create a custom Clair database. Procedure Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret Example Clair config.yaml file indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true Note The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml . It must be specified when configuring your clair-config.yaml . An example clair-config.yaml can be found at Clair on OpenShift config . Add the clair-config.yaml file to your bundle secret, for example: apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> Note When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace> . For example: Example output | [
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true",
"oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret",
"indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true",
"apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config>",
"oc get pods -n <namespace>",
"NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/vulnerability_reporting_with_clair_on_red_hat_quay/custom-clair-configuration-managed-database |
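The Secret manifest in this entry uses <base64 encoded ...> placeholders for its data fields. One way to produce those values is sketched below; the file names match the examples above, and the oc create secret command shown earlier can also build the Secret directly from files, so manual encoding is optional.

# Produce the base64 strings referenced by the Secret's data fields
base64 -w0 config.yaml        # value for the config.yaml data key
base64 -w0 clair-config.yaml  # value for the clair-config.yaml data key

# Alternatively, let oc do the encoding and apply the Secret in one step
oc create secret generic config-bundle-secret \
  -n quay-enterprise \
  --from-file config.yaml=./config.yaml \
  --from-file clair-config.yaml=./clair-config.yaml \
  --dry-run=client -o yaml | oc apply -f -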
Chapter 1. OpenShift Data Foundation deployed using dynamic devices | Chapter 1. OpenShift Data Foundation deployed using dynamic devices 1.1. OpenShift Data Foundation deployed on AWS To replace an operational node, see: Section 1.1.1, "Replacing an operational AWS node on user-provisioned infrastructure" . Section 1.1.2, "Replacing an operational AWS node on installer-provisioned infrastructure" . To replace a failed node, see: Section 1.1.3, "Replacing a failed AWS node on user-provisioned infrastructure" . Section 1.1.4, "Replacing a failed AWS node on installer-provisioned infrastructure" . 1.1.1. Replacing an operational AWS node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Note When replacing an AWS node on user-provisioned infrastructure, the new node needs to be created in the same AWS zone as the original node. Procedure Identify the node that you need to replace. Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Delete the node: Create a new Amazon Web Service (AWS) machine instance with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform node using the new AWS machine instance. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.1.2. Replacing an operational AWS node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. 
Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Besides the required machine, click Action menu (...) Delete Machine . Click Delete to confirm that the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.1.3. Replacing a failed AWS node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the Amazon Web Service (AWS) machine instance of the node that you need to replace. Log in to AWS, and terminate the AWS machine instance that you identified. Create a new AWS machine instance with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform node using the new AWS machine instance. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. 
Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.1.4. Replacing a failed AWS node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created, wait for new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Amazon Web Service (AWS) instance is not removed automatically, terminate the instance from the AWS console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2. OpenShift Data Foundation deployed on VMware To replace an operational node, see: Section 1.2.1, "Replacing an operational VMware node on user-provisioned infrastructure" . Section 1.2.2, "Replacing an operational VMware node on installer-provisioned infrastructure" . To replace a failed node, see: Section 1.2.3, "Replacing a failed VMware node on user-provisioned infrastructure" . Section 1.2.4, "Replacing a failed VMware node on installer-provisioned infrastructure" . 1.2.1. Replacing an operational VMware node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node and its Virtual Machine (VM) that you need replace. Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. 
Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Delete the node: Log in to VMware vSphere, and terminate the VM that you identified: Important Delete the VM only from the inventory and not from the disk. Create a new VM on VMware vSphere with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new VM. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2.2. Replacing an operational VMware node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Besides the required machine, click Action menu (...) Delete Machine . Click Delete to confirm the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . 
Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2.3. Replacing a failed VMware node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node and its Virtual Machine (VM) that you need to replace. Delete the node: <node_name> Specify the name of the node that you need to replace. Log in to VMware vSphere and terminate the VM that you identified. Important Delete the VM only from the inventory and not from the disk. Create a new VM on VMware vSphere with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new VM. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2.4. Replacing a failed VMware node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity might take at least 5 - 10 minutes or more. 
Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Virtual Machine (VM) is not removed automatically, terminate the VM from VMware vSphere. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.3. OpenShift Data Foundation deployed on Microsoft Azure 1.3.1. Replacing operational nodes on Azure installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of the node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Beside the required machine, click the Action menu (...) Delete Machine . Click Delete to confirm the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. 
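The per-node encryption check that the next steps walk through can be consolidated into the following sketch; the node name is a placeholder and the exact ocs-deviceset device names vary from cluster to cluster:

oc debug node/<node_name>
chroot /host
lsblk

In the lsblk output, look for the crypt keyword beside the ocs-deviceset device names.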
For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.3.2. Replacing failed nodes on Azure installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Azure instance is not removed automatically, terminate the instance from the Azure console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.4. OpenShift Data Foundation deployed on Google Cloud 1.4.1. Replacing operational nodes on Google Cloud installer-provisioned infrastructure Procedure Log in to OpenShift Web Console and click Compute Nodes . Identify the node that needs to be replaced. Take a note of its Machine Name . Mark the node as unschedulable using the following command: Drain the node using the following command: Important This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional. Click Compute Machines . Search for the required machine. Beside the required machine, click the Action menu (...) Delete Machine . Click Delete to confirm the machine deletion. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity may take at least 5-10 minutes or more. Click Compute Nodes , and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) 
Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From the command-line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.4.2. Replacing failed nodes on Google Cloud installer-provisioned infrastructure Procedure Log in to OpenShift Web Console and click Compute Nodes . Identify the faulty node and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional. Click Compute Nodes , and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the web user interface For the new node, click Action Menu (...) Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Google Cloud instance is not removed automatically, terminate the instance from the Google Cloud console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . | [
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc delete nodes <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc delete nodes <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc delete nodes <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/replacing_nodes/openshift_data_foundation_deployed_using_dynamic_devices |
Chapter 13. Verify your deployment | Chapter 13. Verify your deployment After deployment is complete, verify that your deployment has completed successfully. Browse to the Administration Portal, for example, http://engine.example.com/ovirt-engine . Administration Console Login Log in using the administrative credentials added during hosted engine deployment. When login is successful, the Dashboard appears. Administration Console Dashboard Verify that your cluster is available. Administration Console Dashboard - Clusters Verify that at least one host is available. If you provided additional host details during Hosted Engine deployment, 3 hosts are visible here, as shown. Administration Console Dashboard - Hosts Click Compute Hosts . Verify that all hosts are listed with a Status of Up . Administration Console - Hosts Verify that all storage domains are available. Click Storage Domains . Verify that the Active icon is shown in the first column. Administration Console - Storage Domains | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/verify-rhhi-deployment |
14.2. Samba Daemons and Related Services | 14.2. Samba Daemons and Related Services The following is a brief introduction to the individual Samba daemons and services, as well as details on how to start and stop them. 14.2.1. Daemon Overview Samba is comprised of three daemons ( smbd , nmbd , and winbindd ). Two services ( smb and windbind ) control how the daemons are started, stopped, and other service-related features. Each daemon is listed in detail, as well as which specific service has control over it. 14.2.1.1. The smbd daemon The smbd server daemon provides file sharing and printing services to Windows clients. In addition, it is responsible for user authentication, resource locking, and data sharing through the SMB protocol. The default ports on which the server listens for SMB traffic are TCP ports 139 and 445. The smbd daemon is controlled by the smb service. 14.2.1.2. The nmbd daemon The nmbd server daemon understands and replies to NetBIOS name service requests such as those produced by SMB/CIFS in Windows-based systems. These systems include Windows 95/98/ME, Windows NT, Windows 2000, Windows XP, and LanManager clients. It also participates in the browsing protocols that make up the Windows Network Neighborhood view. The default port that the server listens to for NMB traffic is UDP port 137. The nmbd daemon is controlled by the smb service. 14.2.1.3. The winbindd daemon The winbind service resolves user and group information on a Windows NT server and makes it understandable by UNIX platforms. This is achieved by using Microsoft RPC calls, Pluggable Authentication Modules (PAM), and the Name Service Switch (NSS). This allows Windows NT domain users to appear and operate as UNIX users on a UNIX machine. Though bundled with the Samba distribution, the winbind service is controlled separately from the smb service. The winbindd daemon is controlled by the winbind service and does not require the smb service to be started in order to operate. Because winbind is a client-side service used to connect to Windows NT based servers, further discussion of winbind is beyond the scope of this manual. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-samba-daemons |
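As a quick illustration of starting and stopping the services described above, the standard SysV init commands on this release look roughly like the following; run them as root and treat the exact invocation as a sketch rather than the full configuration procedure:

service smb start
service smb stop
service winbind restart
chkconfig smb on
chkconfig winbind on

The service lines act on the running daemons immediately, while the chkconfig lines make the services persist across reboots.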
Chapter 2. Before Configuring a Red Hat Cluster | Chapter 2. Before Configuring a Red Hat Cluster This chapter describes tasks to perform and considerations to make before installing and configuring a Red Hat Cluster, and consists of the following sections: Section 2.1, "Compatible Hardware" Section 2.2, "Enabling IP Ports" Section 2.3, "Configuring ACPI For Use with Integrated Fence Devices" Section 2.4, "Configuring max_luns" Section 2.5, "Considerations for Using Quorum Disk" Section 2.7, "Considerations for Using Conga " Section 2.8, "General Configuration Considerations" 2.1. Compatible Hardware Before configuring Red Hat Cluster software, make sure that your cluster uses appropriate hardware (for example, supported fence devices, storage devices, and Fibre Channel switches). Refer to the hardware configuration guidelines at http://www.redhat.com/cluster_suite/hardware/ for the most current hardware compatibility information. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/ch-before-config-ca |
Chapter 16. Replacing storage devices | Chapter 16. Replacing storage devices 16.1. Replacing operational or failed storage devices on Red Hat OpenStack Platform installer-provisioned infrastructure Use this procedure to replace a storage device in OpenShift Data Foundation that is deployed on Red Hat OpenStack Platform. This procedure helps to create a new Persistent Volume Claim (PVC) on a new volume and remove the old object storage device (OSD). Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container Platform node on which the OSD is scheduled. Note If the OSD to be replaced is healthy, the status of the pod will be Running . Scale down the OSD deployment for the OSD to be replaced. where osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Note If the rook-ceph-osd pod is in terminating state, use the force option to delete the pod. Example output: If the persistent volume associated with the failed OSD fails, get the details of the failed persistent volumes and delete them using the following commands: Remove the old OSD from the cluster so that a new OSD can be added. Delete any old ocs-osd-removal jobs. Example output: Change to the openshift-storage project. Remove the old OSD from the cluster. You can add comma-separated OSD IDs in the command to remove more than one OSD. (For example, FAILED_OSD_IDS=0,1,2). The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in the OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of the ocs-osd-removal-job pod: For example: For each of the nodes identified in step #1, do the following: Create a debug pod and chroot to the host on the storage node. Find the relevant device name based on the PVC names identified in the previous step. Remove the mapped device. Note If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process that was stuck. Terminate the process using the kill command. Verify that the device name is removed. Delete the ocs-osd-removal job. Example output: Verification steps Verify that there is a new OSD running. Example output: Verify that there is a new PVC created which is in Bound state. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. 
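Taken together, the dm-crypt cleanup described above reduces roughly to the following sequence on each affected node; the PVC and device-set names are placeholders, and the cryptsetup target must match the name reported by dmsetup:

oc debug node/<node_name>
chroot /host
dmsetup ls | grep <pvc_name>
cryptsetup luksClose --debug --verbose <ocs-deviceset-name>-block-dmcrypt
dmsetup ls

The final dmsetup ls confirms that the mapping is gone before you move on to the verification steps.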
Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s) Log in to OpenShift Web Console and view the storage dashboard. Figure 16.1. OSD status in OpenShift Container Platform storage dashboard after device replacement | [
"oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide",
"rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>",
"osd_id_to_remove=0 oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0",
"deployment.extensions/rook-ceph-osd-0 scaled",
"oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}",
"No resources found.",
"oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0",
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted",
"oc get pv oc delete pv <failed-pv-name>",
"oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}",
"job.batch \"ocs-osd-removal-0\" deleted",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'",
"2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"",
"oc debug node/<node name> chroot /host",
"sh-4.4# dmsetup ls| grep <pvc name> ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)",
"cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt",
"ps -ef | grep crypt",
"kill -9 <PID>",
"dmsetup ls",
"oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}",
"job.batch \"ocs-osd-removal-0\" deleted",
"oc get -n openshift-storage pods -l app=rook-ceph-osd",
"rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h",
"oc get -n openshift-storage pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-b44ebb5e-3c67-4000-998e-304752deb5a7 50Gi RWO ocs-storagecluster-ceph-rbd 6d ocs-deviceset-0-data-0-gwb5l Bound pvc-bea680cd-7278-463d-a4f6-3eb5d3d0defe 512Gi RWO standard 94s ocs-deviceset-1-data-0-w9pjm Bound pvc-01aded83-6ef1-42d1-a32e-6ca0964b96d4 512Gi RWO standard 6d ocs-deviceset-2-data-0-7bxcq Bound pvc-5d07cd6c-23cb-468c-89c1-72d07040e308 512Gi RWO standard 6d",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/_<OSD-pod-name>_",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/<node name> chroot /host",
"lsblk"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_devices |
7.87. initscripts | 7.87. initscripts 7.87.1. RHBA-2013:0518 - initscript bug fix and enhancement update Updated iniscripts package that fixes several bugs and adds two enhancements are now available for Red Hat Enterprise Linux 6. The initscripts package contains basic system scripts to boot the system, change runlevels, activate and deactivate most network interfaces, and shut the system down cleanly. Bug Fixes BZ# 893395 Previously, an ip link command was called before the master device was properly set. Consequently, the slaves could be in the unknown state. This has been fixed by calling ip link for master after the device is installed properly, and all slaves are up. As a result, all slaves are in the expected state and connected to the master device. BZ# 714230 Previously, the naming policy for VLAN names was too strict. Consequently, the ifdown utility failed to work with descriptively-named interfaces. To fix this bug, the name format check has been removed and ifdown now works as expected. BZ#879243 Prior to this update, there was a typographic error in the /etc/sysconfig/network-scripts/ifup-aliases file, which caused the duplicate check to fail. The typo has been corrected and the check works again. BZ#885235 The BONDING_OPTS variable was applied by the ifup utility on a slave interface, even if the master was already on and had active slaves. This caused an error message to be returned by ifup . To address this bug, it is now checked whether the master does not have any active slaves before applying BONDING_OPTS , and no error messages are returned. BZ# 880684 Prior to this update, the arping utility, which checks for IP address duplicates in the network, failed when the parent device was not up. Consequently, the failure was handled the same way as finding of a second IP address in the network. To fix this bug, ifup-aliases files have been set to be checked whether the master device is up before the duplicity check is run. As a result, no error messages are returned when the parent device is down in the described scenario. BZ# 723936 The rename_device.c file did not correspond with VLAN interfaces, and thus could lead to improperly named physical interfaces. A patch has been provided to address this bug and interfaces are now named predictably and properly. BZ# 856209 When calling the vgchange -a y command instead of vgchange -a ay on the netfs interface by the rc.sysinit daemon, all volumes were activated. This update provides a patch to fix this bug. Now, only the volumes declared to be activated are actually activated. If the list is not declared, all volumes are activated by default. BZ#820430 Previously, when a slave was attached to a master interface, which did not have a correct mode set, the interface did not work properly and could eventually cause a kernel oops. To fix this bug, the BONDING_OPTS variables are set before the master interface is brought up, which is the correct order of setting. BZ#862788 If there was a process blocking a file system from unmounting, the /etc/init.d/halt script tried to kill all processes currently using the file system, including the script itself. Consequently, the system became unresponsive during reboot. With this update, shutdown script PIDs are excluded from the kill command, which enables the system to reboot normally. BZ# 874030 When the ifup utility was used to set up a master interface, the BONDING_OPTS variables were not applied. Consequently, bonding mode configuration done through the ifcfg utility had no effect. 
A patch has been provided to fix this bug. BONDING_OPTS are now applied and bonding mode works in the described scenario. BZ#824175 If a network bond device had a name that was a substring of another bond device, both devices changed their states due to an incorrect test of the bond device name. A patch has been provided to fix the regular expression test and bond devices change their states as expected. BZ#755699 The udev daemon is an event-driven hot-plug agent. Previously, a udev event for serial console availability was emitted only on boot. If runlevels were changed, the process was not restarted, because the event had already been processed. Consequently, the serial console was not restarted when entering and then exiting runlevel 1. With this update, the fedora.serial-console-available event is emitted on the post-stop of the serial console, and the console is now restarted as expected. BZ# 852005 Prior to this update, no check was performed to determine whether an address had already been used for alias interfaces. Consequently, an already used IP address could be assigned to an alias interface. To fix this bug, the IP address is now checked to determine whether it is already in use. If it is, an error message is returned and the IP address is not assigned. BZ#852176 Previously, the init utility tried to add a bond device even if it already existed. Consequently, a warning message was returned. A patch that checks whether a bond device already exists has been provided and warning messages are no longer returned. BZ# 846140 Prior to this update, the crypttab(5) manual page did not describe handling white spaces in passwords. Now, the manual page has been updated and contains information concerning a password with white spaces. BZ# 870025 The crypttab(5) manual page contained a typographic error (crypptab instead of crypttab), which has now been corrected. BZ#795778 Previously, usage description was missing in the /init/tty.conf and /init/serial.conf files and this information was not returned in error messages. With this update, the information has been added to the aforementioned files and is now returned via an error message. BZ# 669700 Prior to this update, the /dev/shm file system was mounted by the dracut utility without attributes from the /etc/fstab file. To fix this bug, /dev/shm is now remounted by the rc.sysinit script. As a result, /dev/shm now contains the attributes from /etc/fstab . BZ# 713757 A previous version of the sysconfig.txt file instructed users to put the VLAN=yes option in the global configuration file. Consequently, interfaces with names containing a dot were recognized as VLAN interfaces. The sysconfig.txt file has been changed so that the VLAN describing line instructs users to include the VLAN option in the interface configuration file, and the aforementioned devices are no longer recognized as VLAN interfaces. BZ# 869075 The sysconfig.txt file advised users to use the saslauthd -a command instead of saslauthd -v , which caused the command to fail with an error message. In sysconfig.txt , the error in the command has been corrected and the saslauthd utility now returns expected results. BZ#714250 When the ifup utility initiated VLAN interfaces, the sysctl values were not used. With this update, ifup rereads the sysctl values in the described scenario and VLAN interfaces are configured as expected. Enhancements BZ#851370 The brctl utility is used to connect two Ethernet segments in a protocol-independent way, based on an Ethernet address, rather than an IP address. 
In order to provide a simple and centralized bridge configuration, bridge options can now be used via BRIDGING_OPTS . As a result, a space-separated list of bridging options for either a bridge device or a port device can be added when the ifup utility is used. BZ#554392 The updated halt.local file has been enhanced with new variables to reflect the character of call. This change leaves users with better knowledge of how halt.local was called during a halt sequence. BZ# 815431 With this update, it is possible to disable duplicate address detection in order to allow administrators to use direct routing without ARP checks. Users of initscripts are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/initscripts |
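To illustrate the BRIDGING_OPTS enhancement described in the initscripts notes above, a bridge interface configuration file might carry the options as a space-separated list. The device name and the specific option values below are assumptions for illustration only:

# /etc/sysconfig/network-scripts/ifcfg-br0 (hypothetical bridge device)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
BRIDGING_OPTS="priority=32768 hello_time=200"

With this file in place, running ifup br0 applies the listed bridging options when the bridge is brought up.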
Getting ready to install MicroShift | Getting ready to install MicroShift Red Hat build of MicroShift 4.18 Plan for your MicroShift installation and learn about important configurations Red Hat OpenShift Documentation Team | [
"sudo mkdir -p /etc/systemd/journald.conf.d",
"cat <<EOF | sudo tee /etc/systemd/journald.conf.d/microshift.conf &>/dev/null [Journal] Storage=persistent SystemMaxUse=1G RuntimeMaxUse=1G EOF",
"sudo grub2-editenv - list | grep ^boot_success",
"boot_success=1",
"sudo journalctl -o cat -u greenboot-healthcheck.service",
"Running Required Health Check Scripts STARTED GRUB boot variables: boot_success=0 boot_indeterminate=0 boot_counter=2 Waiting 300s for MicroShift service to be active and not failed FAILURE",
"sudo journalctl -o cat -u redboot-task-runner.service",
"Running Red Scripts STARTED GRUB boot variables: boot_success=0 boot_indeterminate=0 boot_counter=0 The ostree status: * rhel c0baa75d9b585f3dd989a9cf05f647eb7ca27ee0dbd4b94fe8c93ed3a4b9e4a5.0 Version: 9.1 origin: <unknown origin type> rhel 6869c1347b0e0ba1bbf0be750cdf32da5138a1fcbc5a4c6325ab9eb647b64663.0 (rollback) Version: 9.1 origin refspec: edge:rhel/9/x86_64/edge System rollback imminent - preparing MicroShift for a clean start Stopping MicroShift services Removing MicroShift pods Killing conmon, pause and OVN processes Removing OVN configuration Finished greenboot Failure Scripts Runner. Cleanup succeeded Script '40_microshift_pre_rollback.sh' SUCCESS FINISHED redboot-task-runner.service: Deactivated successfully.",
"sudo grub2-editenv - list | grep ^boot_success",
"boot_success=1"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html-single/getting_ready_to_install_microshift/index |
Chapter 6. AWS DynamoDB | Chapter 6. AWS DynamoDB Only producer is supported The AWS2 DynamoDB component supports storing and retrieving data from/to service. Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon DynamoDB. More information is available at Amazon DynamoDB . 6.1. Dependencies When using aws2-ddb Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-ddb-starter</artifactId> </dependency> 6.2. URI Format aws2-ddb://domainName[?options] You can append query options to the URI in the following format, ?options=value&option2=value&... 6.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 6.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 6.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 6.4. Component Options The AWS DynamoDB component supports 22 options, which are listed below. Name Description Default Type amazonDDBClient (producer) Autowired To use the AmazonDynamoDB as the client. DynamoDbClient configuration (producer) The component configuration. Ddb2Configuration consistentRead (producer) Determines whether or not strong consistency should be enforced when data is read. false boolean enabledInitialDescribeTable (producer) Set whether the initial Describe table operation in the DDB Endpoint must be done, or not. true boolean keyAttributeName (producer) Attribute name when creating table. String keyAttributeType (producer) Attribute type when creating table. String keyScalarType (producer) The key scalar type, it can be S (String), N (Number) and B (Bytes). String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean operation (producer) What operation to perform. Enum values: BatchGetItems DeleteItem DeleteTable DescribeTable GetItem PutItem Query Scan UpdateItem UpdateTable PutItem Ddb2Operations overrideEndpoint (producer) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean proxyHost (producer) To define a proxy host when instantiating the DDB client. String proxyPort (producer) The region in which DynamoDB client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). Integer proxyProtocol (producer) To define a proxy protocol when instantiating the DDB client. Enum values: HTTP HTTPS HTTPS Protocol readCapacity (producer) The provisioned throughput to reserve for reading resources from your table. Long region (producer) The region in which DDB client needs to work. String trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean writeCapacity (producer) The provisioned throughput to reserved for writing resources to your table. Long autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String 6.5. Endpoint Options The AWS DynamoDB endpoint is configured using URI syntax: with the following path and query parameters: 6.5.1. Path Parameters (1 parameters) Name Description Default Type tableName (producer) Required The name of the table currently worked with. String 6.5.2. Query Parameters (20 parameters) Name Description Default Type amazonDDBClient (producer) Autowired To use the AmazonDynamoDB as the client. DynamoDbClient consistentRead (producer) Determines whether or not strong consistency should be enforced when data is read. false boolean enabledInitialDescribeTable (producer) Set whether the initial Describe table operation in the DDB Endpoint must be done, or not. true boolean keyAttributeName (producer) Attribute name when creating table. String keyAttributeType (producer) Attribute type when creating table. String keyScalarType (producer) The key scalar type, it can be S (String), N (Number) and B (Bytes). String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean operation (producer) What operation to perform. Enum values: BatchGetItems DeleteItem DeleteTable DescribeTable GetItem PutItem Query Scan UpdateItem UpdateTable PutItem Ddb2Operations overrideEndpoint (producer) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean proxyHost (producer) To define a proxy host when instantiating the DDB client. String proxyPort (producer) The region in which DynamoDB client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). Integer proxyProtocol (producer) To define a proxy protocol when instantiating the DDB client. Enum values: HTTP HTTPS HTTPS Protocol readCapacity (producer) The provisioned throughput to reserve for reading resources from your table. Long region (producer) The region in which DDB client needs to work. String trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean writeCapacity (producer) The provisioned throughput to reserved for writing resources to your table. Long accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String Required DDB component options You have to provide the amazonDDBClient in the Registry or your accessKey and secretKey to access the Amazon's DynamoDB . 6.6. Usage 6.6.1. Static credentials vs Default Credential Provider You have the possibility of avoiding the usage of explicit static credentials, by specifying the useDefaultCredentialsProvider option and set it to true. Java system properties - aws.accessKeyId and aws.secretKey Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Web Identity Token from AWS STS. The shared credentials and config files. Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. Amazon EC2 Instance profile credentials. For more information about this you can look at AWS credentials documentation 6.6.2. Message headers evaluated by the DDB producer Header Type Description CamelAwsDdbBatchItems Map<String, KeysAndAttributes> A map of the table name and corresponding items to get by primary key. CamelAwsDdbTableName String Table Name for this operation. CamelAwsDdbKey Key The primary key that uniquely identifies each item in a table. CamelAwsDdbReturnValues String Use this parameter if you want to get the attribute name-value pairs before or after they are modified(NONE, ALL_OLD, UPDATED_OLD, ALL_NEW, UPDATED_NEW). CamelAwsDdbUpdateCondition Map<String, ExpectedAttributeValue> Designates an attribute for a conditional modification. CamelAwsDdbAttributeNames Collection<String> If attribute names are not specified then all attributes will be returned. 
CamelAwsDdbConsistentRead Boolean If set to true, then a consistent read is issued, otherwise eventually consistent is used. CamelAwsDdbIndexName String If set will be used as Secondary Index for Query operation. CamelAwsDdbItem Map<String, AttributeValue> A map of the attributes for the item, and must include the primary key values that define the item. CamelAwsDdbExactCount Boolean If set to true, Amazon DynamoDB returns a total number of items that match the query parameters, instead of a list of the matching items and their attributes. CamelAwsDdbKeyConditions Map<String, Condition> This header specify the selection criteria for the query, and merge together the two old headers CamelAwsDdbHashKeyValue and CamelAwsDdbScanRangeKeyCondition CamelAwsDdbStartKey Key Primary key of the item from which to continue an earlier query. CamelAwsDdbHashKeyValue AttributeValue Value of the hash component of the composite primary key. CamelAwsDdbLimit Integer The maximum number of items to return. CamelAwsDdbScanRangeKeyCondition Condition A container for the attribute values and comparison operators to use for the query. CamelAwsDdbScanIndexForward Boolean Specifies forward or backward traversal of the index. CamelAwsDdbScanFilter Map<String, Condition> Evaluates the scan results and returns only the desired values. CamelAwsDdbUpdateValues Map<String, AttributeValueUpdate> Map of attribute name to the new value and action for the update. 6.6.3. Message headers set during BatchGetItems operation Header Type Description CamelAwsDdbBatchResponse Map<String,BatchResponse> Table names and the respective item attributes from the tables. CamelAwsDdbUnprocessedKeys Map<String,KeysAndAttributes> Contains a map of tables and their respective keys that were not processed with the current response. 6.6.4. Message headers set during DeleteItem operation Header Type Description CamelAwsDdbAttributes Map<String, AttributeValue> The list of attributes returned by the operation. 6.6.5. Message headers set during DeleteTable operation Header Type Description CamelAwsDdbProvisionedThroughput ProvisionedThroughputDescription The value of the ProvisionedThroughput property for this table CamelAwsDdbCreationDate Date Creation DateTime of this table. CamelAwsDdbTableItemCount Long Item count for this table. CamelAwsDdbKeySchema KeySchema The KeySchema that identifies the primary key for this table. From Camel 2.16.0 the type of this header is List<KeySchemaElement> and not KeySchema CamelAwsDdbTableName String The table name. CamelAwsDdbTableSize Long The table size in bytes. CamelAwsDdbTableStatus String The status of the table: CREATING, UPDATING, DELETING, ACTIVE 6.6.6. Message headers set during DescribeTable operation Header Type Description CamelAwsDdbProvisionedThroughput \{{ProvisionedThroughputDescription}} The value of the ProvisionedThroughput property for this table CamelAwsDdbCreationDate Date Creation DateTime of this table. CamelAwsDdbTableItemCount Long Item count for this table. CamelAwsDdbKeySchema \{{KeySchema}} The KeySchema that identifies the primary key for this table. CamelAwsDdbTableName String The table name. CamelAwsDdbTableSize Long The table size in bytes. CamelAwsDdbTableStatus String The status of the table: CREATING, UPDATING, DELETING, ACTIVE CamelAwsDdbReadCapacity Long ReadCapacityUnits property of this table. CamelAwsDdbWriteCapacity Long WriteCapacityUnits property of this table. 6.6.7. 
Message headers set during GetItem operation Header Type Description CamelAwsDdbAttributes Map<String, AttributeValue> The list of attributes returned by the operation. 6.6.8. Message headers set during PutItem operation Header Type Description CamelAwsDdbAttributes Map<String, AttributeValue> The list of attributes returned by the operation. 6.6.9. Message headers set during Query operation Header Type Description CamelAwsDdbItems List<java.util.Map<String,AttributeValue>> The list of attributes returned by the operation. CamelAwsDdbLastEvaluatedKey Key Primary key of the item where the query operation stopped, inclusive of the result set. CamelAwsDdbConsumedCapacity Double The number of Capacity Units of the provisioned throughput of the table consumed during the operation. CamelAwsDdbCount Integer Number of items in the response. 6.6.10. Message headers set during Scan operation Header Type Description CamelAwsDdbItems List<java.util.Map<String,AttributeValue>> The list of attributes returned by the operation. CamelAwsDdbLastEvaluatedKey Key Primary key of the item where the query operation stopped, inclusive of the result set. CamelAwsDdbConsumedCapacity Double The number of Capacity Units of the provisioned throughput of the table consumed during the operation. CamelAwsDdbCount Integer Number of items in the response. CamelAwsDdbScannedCount Integer Number of items in the complete scan before any filters are applied. 6.6.11. Message headers set during UpdateItem operation Header Type Description CamelAwsDdbAttributes Map<String, AttributeValue> The list of attributes returned by the operation. 6.6.12. Advanced AmazonDynamoDB configuration If you need more control over the AmazonDynamoDB instance configuration you can create your own instance and refer to it from the URI: from("direct:start") .to("aws2-ddb://domainName?amazonDDBClient=#client"); The #client refers to a DynamoDbClient in the Registry. 6.7. Supported producer operations BatchGetItems DeleteItem DeleteTable DescribeTable GetItem PutItem Query Scan UpdateItem UpdateTable 6.8. Examples 6.8.1. Producer Examples PutItem: this operation will create an entry into DynamoDB from("direct:start") .setHeader(Ddb2Constants.OPERATION, Ddb2Operations.PutItem) .setHeader(Ddb2Constants.CONSISTENT_READ, "true") .setHeader(Ddb2Constants.RETURN_VALUES, "ALL_OLD") .setHeader(Ddb2Constants.ITEM, attributeMap) .setHeader(Ddb2Constants.ATTRIBUTE_NAMES, attributeMap.keySet()); .to("aws2-ddb://" + tableName + "?keyAttributeName=" + attributeName + "&keyAttributeType=" + KeyType.HASH + "&keyScalarType=" + ScalarAttributeType.S + "&readCapacity=1&writeCapacity=1"); Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-ddb</artifactId> <version>USD{camel-version}</version> </dependency> where {camel-version} must be replaced by the actual version of Camel. 6.9. Spring Boot Auto-Configuration The component supports 40 options, which are listed below. Name Description Default Type camel.component.aws2-ddb.access-key Amazon AWS Access Key. String camel.component.aws2-ddb.amazon-d-d-b-client To use the AmazonDynamoDB as the client. The option is a software.amazon.awssdk.services.dynamodb.DynamoDbClient type. DynamoDbClient camel.component.aws2-ddb.autowired-enabled Whether autowiring is enabled. 
6.9. Spring Boot Auto-Configuration The component supports 40 options, which are listed below. Name Description Default Type camel.component.aws2-ddb.access-key Amazon AWS Access Key. String camel.component.aws2-ddb.amazon-d-d-b-client To use the AmazonDynamoDB as the client. The option is a software.amazon.awssdk.services.dynamodb.DynamoDbClient type. DynamoDbClient camel.component.aws2-ddb.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.aws2-ddb.configuration The component configuration. The option is an org.apache.camel.component.aws2.ddb.Ddb2Configuration type. Ddb2Configuration camel.component.aws2-ddb.consistent-read Determines whether or not strong consistency should be enforced when data is read. false Boolean camel.component.aws2-ddb.enabled Whether to enable auto configuration of the aws2-ddb component. This is enabled by default. Boolean camel.component.aws2-ddb.enabled-initial-describe-table Set whether the initial Describe table operation in the DDB Endpoint must be done, or not. true Boolean camel.component.aws2-ddb.key-attribute-name Attribute name when creating table. String camel.component.aws2-ddb.key-attribute-type Attribute type when creating table. String camel.component.aws2-ddb.key-scalar-type The key scalar type; it can be S (String), N (Number), or B (Bytes). String camel.component.aws2-ddb.lazy-start-producer Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring the startup, the failure can instead be handled during message routing by Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.aws2-ddb.operation What operation to perform. Ddb2Operations camel.component.aws2-ddb.override-endpoint Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option. false Boolean camel.component.aws2-ddb.proxy-host To define a proxy host when instantiating the DDB client. String camel.component.aws2-ddb.proxy-port To define a proxy port when instantiating the DDB client. Integer camel.component.aws2-ddb.proxy-protocol To define a proxy protocol when instantiating the DDB client. Protocol camel.component.aws2-ddb.read-capacity The provisioned throughput to reserve for reading resources from your table. Long camel.component.aws2-ddb.region The region in which the DDB client needs to work. String camel.component.aws2-ddb.secret-key Amazon AWS Secret Key. String camel.component.aws2-ddb.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-ddb.uri-endpoint-override Set the overriding uri endpoint. This option needs to be used in combination with the overrideEndpoint option. String camel.component.aws2-ddb.use-default-credentials-provider Set whether the DDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false Boolean camel.component.aws2-ddb.write-capacity The provisioned throughput to reserve for writing resources to your table. Long camel.component.aws2-ddbstream.access-key Amazon AWS Access Key.
String camel.component.aws2-ddbstream.amazon-dynamo-db-streams-client Amazon DynamoDB client to use for all requests for this endpoint. The option is a software.amazon.awssdk.services.dynamodb.streams.DynamoDbStreamsClient type. DynamoDbStreamsClient camel.component.aws2-ddbstream.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.aws2-ddbstream.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are now processed as a message and handled by the routing Error Handler. By default, the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false Boolean camel.component.aws2-ddbstream.configuration The component configuration. The option is an org.apache.camel.component.aws2.ddbstream.Ddb2StreamConfiguration type. Ddb2StreamConfiguration camel.component.aws2-ddbstream.enabled Whether to enable auto configuration of the aws2-ddbstream component. This is enabled by default. Boolean camel.component.aws2-ddbstream.max-results-per-request Maximum number of records that will be fetched in each poll. Integer camel.component.aws2-ddbstream.override-endpoint Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option. false Boolean camel.component.aws2-ddbstream.proxy-host To define a proxy host when instantiating the DDBStreams client. String camel.component.aws2-ddbstream.proxy-port To define a proxy port when instantiating the DDBStreams client. Integer camel.component.aws2-ddbstream.proxy-protocol To define a proxy protocol when instantiating the DDBStreams client. Protocol camel.component.aws2-ddbstream.region The region in which the DDBStreams client needs to work. String camel.component.aws2-ddbstream.secret-key Amazon AWS Secret Key. String camel.component.aws2-ddbstream.stream-iterator-type Defines where in the DynamoDB stream to start getting records. Note that using FROM_START can cause a significant delay before the stream has caught up to real-time. Ddb2StreamConfigurationUSDStreamIteratorType camel.component.aws2-ddbstream.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-ddbstream.uri-endpoint-override Set the overriding uri endpoint. This option needs to be used in combination with the overrideEndpoint option. String camel.component.aws2-ddbstream.use-default-credentials-provider Set whether the DynamoDB Streams client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-ddb-starter</artifactId> </dependency>",
"aws2-ddb://domainName[?options]",
"aws2-ddb:tableName",
"from(\"direct:start\") .to(\"aws2-ddb://domainName?amazonDDBClient=#client\");",
"from(\"direct:start\") .setHeader(Ddb2Constants.OPERATION, Ddb2Operations.PutItem) .setHeader(Ddb2Constants.CONSISTENT_READ, \"true\") .setHeader(Ddb2Constants.RETURN_VALUES, \"ALL_OLD\") .setHeader(Ddb2Constants.ITEM, attributeMap) .setHeader(Ddb2Constants.ATTRIBUTE_NAMES, attributeMap.keySet()); .to(\"aws2-ddb://\" + tableName + \"?keyAttributeName=\" + attributeName + \"&keyAttributeType=\" + KeyType.HASH + \"&keyScalarType=\" + ScalarAttributeType.S + \"&readCapacity=1&writeCapacity=1\");",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-ddb</artifactId> <version>USD{camel-version}</version> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-aws2-ddb-component-starter |
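Where the advanced configuration in section 6.6.12 above refers to a DynamoDbClient registered as #client, that bean can be supplied from Spring Boot configuration. The following is a sketch only: the bean name client and the region are illustrative assumptions, and the same component can alternatively be configured through the camel.component.aws2-ddb.* properties listed in section 6.9.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

@Configuration
public class DdbClientConfiguration {

    // Beans in the Spring application context are visible to the Camel registry,
    // so this bean can be referenced from the endpoint URI as amazonDDBClient=#client.
    @Bean(name = "client")
    public DynamoDbClient client() {
        return DynamoDbClient.builder()
                .region(Region.EU_WEST_1)
                .credentialsProvider(DefaultCredentialsProvider.create())
                .build();
    }
}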
Chapter 6. Expanding the cluster | Chapter 6. Expanding the cluster After deploying an installer-provisioned OpenShift Container Platform cluster, you can use the following procedures to expand the number of worker nodes. Ensure that each prospective worker node meets the prerequisites. Note Expanding the cluster using RedFish Virtual Media involves meeting minimum firmware requirements. See Firmware requirements for installing with virtual media in the Prerequisites section for additional details when expanding the cluster using RedFish Virtual Media. 6.1. Preparing the bare metal node To expand your cluster, you must provide the node with the relevant IP address. This can be done with a static configuration, or with a DHCP (Dynamic Host Configuration Protocol) server. When expanding the cluster using a DHCP server, each node must have a DHCP reservation. Reserving IP addresses so they become static IP addresses Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see "Optional: Configuring host network interfaces in the install-config.yaml file" in the "Setting up the environment for an OpenShift installation" section for additional details. Preparing the bare metal node requires executing the following procedure from the provisioner node. Procedure Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux-USDVERSION.tar.gz | tar zxvf - oc USD sudo cp oc /usr/local/bin Power off the bare metal node by using the baseboard management controller (BMC), and ensure it is off. Retrieve the user name and password of the bare metal node's baseboard management controller. Then, create base64 strings from the user name and password: USD echo -ne "root" | base64 USD echo -ne "password" | base64 Create a configuration file for the bare metal node. Depending on whether you are using a static configuration or a DHCP server, use one of the following example bmh.yaml files, replacing values in the YAML to match your environment: USD vim bmh.yaml Static configuration bmh.yaml : --- apiVersion: v1 1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 2 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 3 interfaces: 4 - name: <nic1_name> 5 type: ethernet state: up ipv4: address: - ip: <ip_address> 6 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 7 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 8 next-hop-interface: <next_hop_nic1_name> 9 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 10 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 11 password: <base64_of_pwd> 12 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 13 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 14 bmc: address: <protocol>://<bmc_url> 15 credentialsName: openshift-worker-<num>-bmc-secret 16 disableCertificateVerification: True 17 username: <bmc_username> 18 password: <bmc_password> 19 rootDeviceHints: deviceName: <root_device_hint> 20 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 21 1 To configure the network interface for a newly created node, specify the name of the secret that contains the network configuration.
Follow the nmstate syntax to define the network configuration for your node. See "Optional: Configuring host network interfaces in the install-config.yaml file" for details on configuring NMState syntax. 2 10 13 16 Replace <num> for the worker number of the bare metal node in the name fields, the credentialsName field, and the preprovisioningNetworkDataName field. 3 Add the NMState YAML syntax to configure the host interfaces. 4 Optional: If you have configured the network interface with nmstate , and you want to disable an interface, set state: up with the IP addresses set to enabled: false as shown: --- interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false 5 6 7 8 9 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. 11 12 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 14 Replace <nic1_mac_address> with the MAC address of the bare metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 15 Replace <protocol> with the BMC protocol, such as IPMI, RedFish, or others. Replace <bmc_url> with the URL of the bare metal node's baseboard management controller. 17 To skip certificate validation, set disableCertificateVerification to true. 18 19 Replace <bmc_username> and <bmc_password> with the string of the BMC user name and password. 20 Optional: Replace <root_device_hint> with a device path if you specify a root device hint. 21 Optional: If you have configured the network interface for the newly created node, provide the network configuration secret name in the preprovisioningNetworkDataName of the BareMetalHost CR. DHCP configuration bmh.yaml : --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 4 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 5 bmc: address: <protocol>://<bmc_url> 6 credentialsName: openshift-worker-<num>-bmc-secret 7 disableCertificateVerification: True 8 username: <bmc_username> 9 password: <bmc_password> 10 rootDeviceHints: deviceName: <root_device_hint> 11 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 12 1 4 7 Replace <num> for the worker number of the bare metal node in the name fields, the credentialsName field, and the preprovisioningNetworkDataName field. 2 3 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 5 Replace <nic1_mac_address> with the MAC address of the bare metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 6 Replace <protocol> with the BMC protocol, such as IPMI, RedFish, or others. Replace <bmc_url> with the URL of the bare metal node's baseboard management controller. 8 To skip certificate validation, set disableCertificateVerification to true. 9 10 Replace <bmc_username> and <bmc_password> with the string of the BMC user name and password. 11 Optional: Replace <root_device_hint> with a device path if you specify a root device hint. 12 Optional: If you have configured the network interface for the newly created node, provide the network configuration secret name in the preprovisioningNetworkDataName of the BareMetalHost CR. 
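The <protocol>://<bmc_url> value in the bmc.address field above varies by hardware. The following values are illustrative only; the exact Redfish system path depends on the vendor, so check the "BMC addressing" section for the form your hardware requires:

bmc:
  # IPMI over the out-of-band network
  address: ipmi://<out_of_band_ip>
  # Redfish, for example:
  # address: redfish://<out_of_band_ip>/redfish/v1/Systems/1
  # Redfish virtual media, for example:
  # address: redfish-virtualmedia://<out_of_band_ip>/redfish/v1/Systems/1
  # Dell iDRAC virtual media, for example:
  # address: idrac-virtualmedia://<out_of_band_ip>/redfish/v1/Systems/System.Embedded.1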
Note If the MAC address of an existing bare metal node matches the MAC address of a bare metal host that you are attempting to provision, then the Ironic installation will fail. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. See "Diagnosing a host duplicate MAC address" for more information. Create the bare metal node: USD oc -n openshift-machine-api create -f bmh.yaml Example output secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created Where <num> will be the worker number. Power up and inspect the bare metal node: USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. Example output NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> available true Note To allow the worker node to join the cluster, scale the machineset object to the number of the BareMetalHost objects. You can scale nodes either manually or automatically. To scale nodes automatically, use the metal3.io/autoscale-to-hosts annotation for machineset . Additional resources See Optional: Configuring host network interfaces in the install-config.yaml file for details on configuring the NMState syntax. See Automatically scaling machines to the number of available bare metal hosts for details on automatically scaling machines. 6.2. Replacing a bare-metal control plane node Use the following procedure to replace an installer-provisioned OpenShift Container Platform control plane node. Important If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true . Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important Take an etcd backup before performing this procedure so that you can restore your cluster if you encounter any issues. For more information about taking an etcd backup, see the Additional resources section. Procedure Ensure that the Bare Metal Operator is available: USD oc get clusteroperator baremetal Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.12.0 True False False 3d15h Remove the old BareMetalHost and Machine objects: USD oc delete bmh -n openshift-machine-api <host_name> USD oc delete machine -n openshift-machine-api <machine_name> Replace <host_name> with the name of the host and <machine_name> with the name of the machine. The machine name appears under the CONSUMER field. After you remove the BareMetalHost and Machine objects, then the machine controller automatically deletes the Node object. 
Create the new BareMetalHost object and the secret to store the BMC credentials: USD cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: control-plane-<num>-bmc-secret 1 namespace: openshift-machine-api data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: control-plane-<num> 4 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: <protocol>://<bmc_ip> 5 credentialsName: control-plane-<num>-bmc-secret 6 bootMACAddress: <NIC1_mac_address> 7 bootMode: UEFI externallyProvisioned: false online: true EOF 1 4 6 Replace <num> for the control plane number of the bare metal node in the name fields and the credentialsName field. 2 Replace <base64_of_uid> with the base64 string of the user name. 3 Replace <base64_of_pwd> with the base64 string of the password. 5 Replace <protocol> with the BMC protocol, such as redfish , redfish-virtualmedia , idrac-virtualmedia , or others. Replace <bmc_ip> with the IP address of the bare metal node's baseboard management controller. For additional BMC configuration options, see "BMC addressing" in the Additional resources section. 7 Replace <NIC1_mac_address> with the MAC address of the bare metal node's first NIC. After the inspection is complete, the BareMetalHost object is created and available to be provisioned. View available BareMetalHost objects: USD oc get bmh -n openshift-machine-api Example output NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com available control-plane-1 true 1h10m control-plane-2.example.com externally provisioned control-plane-2 true 4h53m control-plane-3.example.com externally provisioned control-plane-3 true 4h53m compute-1.example.com provisioned compute-1-ktmmx true 4h53m compute-1.example.com provisioned compute-2-l2zmb true 4h53m There are no MachineSet objects for control plane nodes, so you must create a Machine object instead. You can copy the providerSpec from another control plane Machine object. Create a Machine object: USD cat <<EOF | oc apply -f - apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> 1 labels: machine.openshift.io/cluster-api-cluster: control-plane-<num> 2 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: control-plane-<num> 3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed EOF 1 2 3 Replace <num> for the control plane number of the bare metal node in the name , labels and annotations fields. 
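To copy the providerSpec from another control plane Machine object, as suggested above, you can list the existing control plane machines and dump one of them first; the machine name below is a placeholder:

oc get machines -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master
oc get machine -n openshift-machine-api <existing_control_plane_machine> -o yaml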
To view the BareMetalHost objects, run the following command: USD oc get bmh -A Example output NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com provisioned control-plane-1 true 2h53m control-plane-2.example.com externally provisioned control-plane-2 true 5h53m control-plane-3.example.com externally provisioned control-plane-3 true 5h53m compute-1.example.com provisioned compute-1-ktmmx true 5h53m compute-2.example.com provisioned compute-2-l2zmb true 5h53m After the RHCOS installation, verify that the BareMetalHost is added to the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com available master 4m2s v1.18.2 control-plane-2.example.com available master 141m v1.18.2 control-plane-3.example.com available master 141m v1.18.2 compute-1.example.com available worker 87m v1.18.2 compute-2.example.com available worker 87m v1.18.2 Note After replacement of the new control plane node, the etcd pod running in the new node is in crashloopback status. See "Replacing an unhealthy etcd member" in the Additional resources section for more information. Additional resources Replacing an unhealthy etcd member Backing up etcd Bare metal configuration BMC addressing 6.3. Preparing to deploy with Virtual Media on the baremetal network If the provisioning network is enabled and you want to expand the cluster using Virtual Media on the baremetal network, use the following procedure. Prerequisites There is an existing cluster with a baremetal network and a provisioning network. Procedure Edit the provisioning custom resource (CR) to enable deploying with Virtual Media on the baremetal network: oc edit provisioning apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: creationTimestamp: "2021-08-05T18:51:50Z" finalizers: - provisioning.metal3.io generation: 8 name: provisioning-configuration resourceVersion: "551591" uid: f76e956f-24c6-4361-aa5b-feaf72c5b526 spec: provisioningDHCPRange: 172.22.0.10,172.22.0.254 provisioningIP: 172.22.0.3 provisioningInterface: enp1s0 provisioningNetwork: Managed provisioningNetworkCIDR: 172.22.0.0/24 virtualMediaViaExternalNetwork: true 1 status: generations: - group: apps hash: "" lastGeneration: 7 name: metal3 namespace: openshift-machine-api resource: deployments - group: apps hash: "" lastGeneration: 1 name: metal3-image-cache namespace: openshift-machine-api resource: daemonsets observedGeneration: 8 readyReplicas: 0 1 Add virtualMediaViaExternalNetwork: true to the provisioning CR. If the image URL exists, edit the machineset to use the API VIP address. This step only applies to clusters installed in versions 4.9 or earlier. 
oc edit machineset apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: "2021-08-05T18:51:52Z" generation: 11 labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: ostest-hwmdt-worker-0 namespace: openshift-machine-api resourceVersion: "551513" uid: fad1c6e0-b9da-4d4a-8d73-286f78788931 spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 1 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 2 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data status: availableReplicas: 2 fullyLabeledReplicas: 2 observedGeneration: 11 readyReplicas: 2 replicas: 2 1 Edit the checksum URL to use the API VIP address. 2 Edit the url URL to use the API VIP address. 6.4. Diagnosing a duplicate MAC address when provisioning a new host in the cluster If the MAC address of an existing bare-metal node in the cluster matches the MAC address of a bare-metal host you are attempting to add to the cluster, the Bare Metal Operator associates the host with the existing node. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. A registration error is displayed for the failed bare-metal host. You can diagnose a duplicate MAC address by examining the bare-metal hosts that are running in the openshift-machine-api namespace. Prerequisites Install an OpenShift Container Platform cluster on bare metal. Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Procedure To determine whether a bare-metal host that fails provisioning has the same MAC address as an existing node, do the following: Get the bare-metal hosts running in the openshift-machine-api namespace: USD oc get bmh -n openshift-machine-api Example output NAME STATUS PROVISIONING STATUS CONSUMER openshift-master-0 OK externally provisioned openshift-zpwpq-master-0 openshift-master-1 OK externally provisioned openshift-zpwpq-master-1 openshift-master-2 OK externally provisioned openshift-zpwpq-master-2 openshift-worker-0 OK provisioned openshift-zpwpq-worker-0-lv84n openshift-worker-1 OK provisioned openshift-zpwpq-worker-0-zd8lm openshift-worker-2 error registering To see more detailed information about the status of the failing host, run the following command replacing <bare_metal_host_name> with the name of the host: USD oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml Example output ... status: errorCount: 12 errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1 errorType: registration error ... 6.5. Provisioning the bare metal node Provisioning the bare metal node requires executing the following procedure from the provisioner node. 
Procedure Ensure the STATE is available before provisioning the bare metal node. USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. NAME STATE ONLINE ERROR AGE openshift-worker available true 34h Get a count of the number of worker nodes. USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.25.0 openshift-master-2.openshift.example.com Ready master 30h v1.25.0 openshift-master-3.openshift.example.com Ready master 30h v1.25.0 openshift-worker-0.openshift.example.com Ready worker 30h v1.25.0 openshift-worker-1.openshift.example.com Ready worker 30h v1.25.0 Get the compute machine set. USD oc get machinesets -n openshift-machine-api NAME DESIRED CURRENT READY AVAILABLE AGE ... openshift-worker-0.example.com 1 1 1 1 55m openshift-worker-1.example.com 1 1 1 1 55m Increase the number of worker nodes by one. USD oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api Replace <num> with the new number of worker nodes. Replace <machineset> with the name of the compute machine set from the step. Check the status of the bare metal node. USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. The STATE changes from ready to provisioning . NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true The provisioning status remains until the OpenShift Container Platform cluster provisions the node. This can take 30 minutes or more. After the node is provisioned, the state will change to provisioned . NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned openshift-worker-<num>-65tjz true After provisioning completes, ensure the bare metal node is ready. USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.25.0 openshift-master-2.openshift.example.com Ready master 30h v1.25.0 openshift-master-3.openshift.example.com Ready master 30h v1.25.0 openshift-worker-0.openshift.example.com Ready worker 30h v1.25.0 openshift-worker-1.openshift.example.com Ready worker 30h v1.25.0 openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.25.0 You can also check the kubelet. USD ssh openshift-worker-<num> [kni@openshift-worker-<num>]USD journalctl -fu kubelet | [
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux-USDVERSION.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"echo -ne \"root\" | base64",
"echo -ne \"password\" | base64",
"vim bmh.yaml",
"--- apiVersion: v1 1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 2 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 3 interfaces: 4 - name: <nic1_name> 5 type: ethernet state: up ipv4: address: - ip: <ip_address> 6 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 7 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 8 next-hop-interface: <next_hop_nic1_name> 9 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 10 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 11 password: <base64_of_pwd> 12 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 13 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 14 bmc: address: <protocol>://<bmc_url> 15 credentialsName: openshift-worker-<num>-bmc-secret 16 disableCertificateVerification: True 17 username: <bmc_username> 18 password: <bmc_password> 19 rootDeviceHints: deviceName: <root_device_hint> 20 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 21",
"--- interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 4 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 5 bmc: address: <protocol>://<bmc_url> 6 credentialsName: openshift-worker-<num>-bmc-secret 7 disableCertificateVerification: True 8 username: <bmc_username> 9 password: <bmc_password> 10 rootDeviceHints: deviceName: <root_device_hint> 11 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 12",
"oc -n openshift-machine-api create -f bmh.yaml",
"secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> available true",
"oc get clusteroperator baremetal",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.12.0 True False False 3d15h",
"oc delete bmh -n openshift-machine-api <host_name> oc delete machine -n openshift-machine-api <machine_name>",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: control-plane-<num>-bmc-secret 1 namespace: openshift-machine-api data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: control-plane-<num> 4 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: <protocol>://<bmc_ip> 5 credentialsName: control-plane-<num>-bmc-secret 6 bootMACAddress: <NIC1_mac_address> 7 bootMode: UEFI externallyProvisioned: false online: true EOF",
"oc get bmh -n openshift-machine-api",
"NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com available control-plane-1 true 1h10m control-plane-2.example.com externally provisioned control-plane-2 true 4h53m control-plane-3.example.com externally provisioned control-plane-3 true 4h53m compute-1.example.com provisioned compute-1-ktmmx true 4h53m compute-1.example.com provisioned compute-2-l2zmb true 4h53m",
"cat <<EOF | oc apply -f - apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> 1 labels: machine.openshift.io/cluster-api-cluster: control-plane-<num> 2 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: control-plane-<num> 3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed EOF",
"oc get bmh -A",
"NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com provisioned control-plane-1 true 2h53m control-plane-2.example.com externally provisioned control-plane-2 true 5h53m control-plane-3.example.com externally provisioned control-plane-3 true 5h53m compute-1.example.com provisioned compute-1-ktmmx true 5h53m compute-2.example.com provisioned compute-2-l2zmb true 5h53m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com available master 4m2s v1.18.2 control-plane-2.example.com available master 141m v1.18.2 control-plane-3.example.com available master 141m v1.18.2 compute-1.example.com available worker 87m v1.18.2 compute-2.example.com available worker 87m v1.18.2",
"edit provisioning",
"apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: creationTimestamp: \"2021-08-05T18:51:50Z\" finalizers: - provisioning.metal3.io generation: 8 name: provisioning-configuration resourceVersion: \"551591\" uid: f76e956f-24c6-4361-aa5b-feaf72c5b526 spec: provisioningDHCPRange: 172.22.0.10,172.22.0.254 provisioningIP: 172.22.0.3 provisioningInterface: enp1s0 provisioningNetwork: Managed provisioningNetworkCIDR: 172.22.0.0/24 virtualMediaViaExternalNetwork: true 1 status: generations: - group: apps hash: \"\" lastGeneration: 7 name: metal3 namespace: openshift-machine-api resource: deployments - group: apps hash: \"\" lastGeneration: 1 name: metal3-image-cache namespace: openshift-machine-api resource: daemonsets observedGeneration: 8 readyReplicas: 0",
"edit machineset",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: \"2021-08-05T18:51:52Z\" generation: 11 labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: ostest-hwmdt-worker-0 namespace: openshift-machine-api resourceVersion: \"551513\" uid: fad1c6e0-b9da-4d4a-8d73-286f78788931 spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 1 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 2 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data status: availableReplicas: 2 fullyLabeledReplicas: 2 observedGeneration: 11 readyReplicas: 2 replicas: 2",
"oc get bmh -n openshift-machine-api",
"NAME STATUS PROVISIONING STATUS CONSUMER openshift-master-0 OK externally provisioned openshift-zpwpq-master-0 openshift-master-1 OK externally provisioned openshift-zpwpq-master-1 openshift-master-2 OK externally provisioned openshift-zpwpq-master-2 openshift-worker-0 OK provisioned openshift-zpwpq-worker-0-lv84n openshift-worker-1 OK provisioned openshift-zpwpq-worker-0-zd8lm openshift-worker-2 error registering",
"oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml",
"status: errorCount: 12 errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1 errorType: registration error",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE ONLINE ERROR AGE openshift-worker available true 34h",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.25.0 openshift-master-2.openshift.example.com Ready master 30h v1.25.0 openshift-master-3.openshift.example.com Ready master 30h v1.25.0 openshift-worker-0.openshift.example.com Ready worker 30h v1.25.0 openshift-worker-1.openshift.example.com Ready worker 30h v1.25.0",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE openshift-worker-0.example.com 1 1 1 1 55m openshift-worker-1.example.com 1 1 1 1 55m",
"oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned openshift-worker-<num>-65tjz true",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.25.0 openshift-master-2.openshift.example.com Ready master 30h v1.25.0 openshift-master-3.openshift.example.com Ready master 30h v1.25.0 openshift-worker-0.openshift.example.com Ready worker 30h v1.25.0 openshift-worker-1.openshift.example.com Ready worker 30h v1.25.0 openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.25.0",
"ssh openshift-worker-<num>",
"[kni@openshift-worker-<num>]USD journalctl -fu kubelet"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-expanding-the-cluster |
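Section 6.1 above notes that worker nodes can be scaled automatically with the metal3.io/autoscale-to-hosts annotation instead of scaling the compute machine set manually. An illustrative example of applying it, with placeholder names, is:

oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any>'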
Chapter 4. Configuring persistent storage | Chapter 4. Configuring persistent storage 4.1. Persistent storage using AWS Elastic Block Store OpenShift Container Platform supports AWS Elastic Block Store volumes (EBS). You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2 . Some familiarity with Kubernetes and AWS is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. AWS Elastic Block Store volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision AWS EBS storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Important High-availability of storage in the infrastructure is left to the underlying storage provider. For OpenShift Container Platform, automatic migration from AWS EBS in-tree to the Container Storage Interface (CSI) driver is available as a Technology Preview (TP) feature. With migration enabled, volumes provisioned using the existing in-tree driver are automatically migrated to use the AWS EBS CSI driver. For more information, see CSI automatic migration feature . 4.1.1. Creating the EBS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.1.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.1.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 
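Section 4.1.1 above describes creating the EBS storage class without showing a manifest. A minimal sketch using the in-tree provisioner follows; the name and parameter values are illustrative and can be adjusted:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-example
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
  encrypted: "false"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer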
4.1.4. Maximum number of EBS volumes on a node By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits . The volume limit depends on the instance type. Important As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes. 4.1.5. Encrypting container persistent volumes on AWS with a KMS key Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS. Prerequisites Underlying infrastructure must contain storage. You must create a customer KMS key on AWS. Procedure Create a storage class: USD cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: "true" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF 1 Specifies the name of the storage class. 2 File system that is created on provisioned volumes. 3 Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to true , then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation. Create a persistent volume claim (PVC) with the storage class specifying the KMS key: USD cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF Create workload containers to consume the PVC: USD cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF 4.1.6. Additional resources See AWS Elastic Block Store CSI Driver Operator for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 4.2. Persistent storage using Azure OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Azure Disk storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. 
Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Microsoft Azure Disk 4.2.1. Creating the Azure storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. Procedure In the OpenShift Container Platform console, click Storage Storage Classes . In the storage class overview, click Create Storage Class . Define the desired options on the page that appears. Enter a name to reference the storage class. Enter an optional description. Select the reclaim policy. Select kubernetes.io/azure-disk from the drop down list. Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS , Standard_LRS , StandardSSD_LRS , and UltraSSD_LRS . Enter the kind of account. Valid options are shared , dedicated, and managed . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. Enter additional parameters for the storage class as desired. Click Create to create the storage class. Additional resources Azure Disk Storage Class 4.2.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.2.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.3. Persistent storage using Azure File OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. 
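For reference, the Azure Disk storage class described in section 4.2.1 can also be expressed as a manifest. This is a sketch only; the name is illustrative, and the parameters mirror the console fields above (storage account type and kind):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-example
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
reclaimPolicy: Delete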
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically. Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications. Important High availability of storage in the infrastructure is left to the underlying storage provider. Important Azure File volumes use Server Message Block. Important In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Additional resources Azure Files 4.3.1. Create the Azure File share persistent volume claim To create the persistent volume claim, you must first define a Secret object that contains the Azure account and key. This secret is used in the PersistentVolume definition, and will be referenced by the persistent volume claim for use in applications. Prerequisites An Azure File share exists. The credentials to access this share, specifically the storage account and key, are available. Procedure Create a Secret object that contains the Azure File credentials: USD oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2 1 The Azure File storage account name. 2 The Azure File storage account key. Create a PersistentVolume object that references the Secret object you created: apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false 1 The name of the persistent volume. 2 The size of this persistent volume. 3 The name of the secret that contains the Azure File share credentials. 4 The name of the Azure File share. Create a PersistentVolumeClaim object that maps to the persistent volume you created: apiVersion: "v1" kind: "PersistentVolumeClaim" metadata: name: "claim1" 1 spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "5Gi" 2 storageClassName: azure-file-sc 3 volumeName: "pv0001" 4 1 The name of the persistent volume claim. 2 The size of this persistent volume claim. 3 The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition. 4 The name of the existing PersistentVolume object that references the Azure File share. 4.3.2. Mount the Azure File share in a pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying Azure File share. 
Procedure Create a pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... volumeMounts: - mountPath: "/data" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3 1 The name of the pod. 2 The path to mount the Azure File share inside the pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the PersistentVolumeClaim object that has been previously created. 4.4. Persistent storage using Cinder OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed. Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Cinder storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Additional resources For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder . 4.4.1. Manual provisioning with Cinder Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Prerequisites OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP) Cinder volume ID 4.4.1.1. Creating the persistent volume You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform: Procedure Save your object definition to a file. cinder-persistentvolume.yaml apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" cinder: 3 fsType: "ext3" 4 volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5 1 The name of the volume that is used by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes. 4 The file system that is created when the volume is mounted for the first time. 5 The Cinder volume to use. Important Do not change the fstype parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure. Create the object definition file you saved in the step. USD oc create -f cinder-persistentvolume.yaml 4.4.1.2. Persistent volume formatting You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use. 
Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. 4.4.1.3. Cinder volume security If you use Cinder PVs in your application, configure security for their deployment configurations. Prerequisites An SCC must be created that uses the appropriate fsGroup strategy. Procedure Create a service account and add it to the SCC: USD oc create serviceaccount <service_account> USD oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project> In your application's deployment configuration, provide the service account name and securityContext : apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod that the controller creates. 4 The labels on the pod. They must include labels from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 6 Specifies the service account you created. 7 Specifies an fsGroup for the pods. 4.5. Persistent storage using Fibre Channel OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Using Fibre Channel devices 4.5.1. Provisioning To provision Fibre Channel volumes using the PersistentVolume API the following must be available: The targetWWNs (array of Fibre Channel target's World Wide Names). A valid LUN number. The filesystem type. A persistent volume and a LUN have a one-to-one mapping between them. Prerequisites Fibre Channel LUNs must exist in the underlying infrastructure. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4 1 World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the WWNs target because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. 
The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data ( page 0x83 ) or Unit Serial Number ( page 0x80 ). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems. 2 3 Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#> , but you do not need to provide any part of the path leading up to the WWN , including the 0x , and anything after, including the - (hyphen). Important Changing the value of the fstype parameter after the volume has been formatted and provisioned can result in data loss and pod failure. 4.5.1.1. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.5.1.2. Fibre Channel volume security Users request storage with a persistent volume claim. This claim only lives in the user's namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail. Each Fibre Channel LUN must be accessible by all nodes in the cluster. 4.6. Persistent storage using FlexVolume Important FlexVolume is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. Out-of-tree Container Storage Interface (CSI) driver is the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to CSI driver. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers. To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications. Pods interact with FlexVolume drivers through the flexvolume in-tree plug-in. Additional resources Expanding persistent volumes 4.6.1. About FlexVolume drivers A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source. Important Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume. 4.6.2. FlexVolume driver example The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data. 
The FlexVolume driver contains: All flexVolume.options . Some options from flexVolume prefixed by kubernetes.io/ , such as fsType and readwrite . The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/ . FlexVolume driver JSON input example { "fooServer": "192.168.0.1:1234", 1 "fooVolumeName": "bar", "kubernetes.io/fsType": "ext4", 2 "kubernetes.io/readwrite": "ro", 3 "kubernetes.io/secret/<key name>": "<key value>", 4 "kubernetes.io/secret/<another key name>": "<another key value>", } 1 All options from flexVolume.options . 2 The value of flexVolume.fsType . 3 ro / rw based on flexVolume.readOnly . 4 All keys and their values from the secret referenced by flexVolume.secretRef . OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation. FlexVolume driver default output example { "status": "<Success/Failure/Not supported>", "message": "<Reason for success/failure>" } Exit code of the driver should be 0 for success and 1 for error. Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation. 4.6.3. Installing FlexVolume drivers FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required. Prerequisites FlexVolume drivers must implement these operations: init Initializes the driver. It is called during initialization of all nodes. Arguments: none Executed on: node Expected output: default JSON mount Mounts a volume to directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmount Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting. Arguments: <mount-dir> Executed on: node Expected output: default JSON mountdevice Mounts a volume's device to a directory where individual pods can then bind mount. This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmountdevice Unmounts a volume's device from a directory. Arguments: <mount-dir> Executed on: node Expected output: default JSON All other operations should return JSON with {"status": "Not supported"} and exit code 1 . Procedure To install the FlexVolume driver: Ensure that the executable file exists on all nodes in the cluster. Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver> . For example, to install the FlexVolume driver for the storage foo , place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo . 4.6.4. Consuming storage using FlexVolume drivers Each PersistentVolume object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume. Procedure Use the PersistentVolume object to reference the installed storage. 
Persistent volume object definition using FlexVolume drivers example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: "ext4" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar 1 The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage. 2 The amount of storage allocated to this volume. 3 The name of the driver. This field is mandatory. 4 The file system that is present on the volume. This field is optional. 5 The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional. 6 The read-only flag. This field is optional. 7 The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable: Note Secrets are passed only to mount or unmount call-outs. 4.7. Persistent storage using GCE Persistent Disk OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. GCE Persistent Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision gcePD storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources GCE Persistent Disk 4.7.1. Creating the GCE storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.7.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. 
Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.7.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.8. Persistent storage using hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it. Important The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node. 4.8.1. Overview OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster. In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning. A hostPath volume must be provisioned statically. Important Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host . The following example shows the / directory from the host being mounted into the container at /host . apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: '' 4.8.2. Statically provisioning hostPath volumes A pod that uses a hostPath volume must be referenced by manual (static) provisioning. Procedure Define the persistent volume (PV). Create a file, pv.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: "/mnt/data" 4 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 Used to bind persistent volume claim requests to this persistent volume. 3 The volume can be mounted as read-write by a single node. 4 The configuration file specifies that the volume is at /mnt/data on the cluster's node. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system. It is safe to mount the host by using /host . Create the PV from the file: USD oc create -f pv.yaml Define the persistent volume claim (PVC). Create a file, pvc.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual Create the PVC from the file: USD oc create -f pvc.yaml 4.8.3. 
Mounting the hostPath share in a privileged pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying hostPath share. Procedure Create a privileged pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged ... securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4 1 The name of the pod. 2 The pod must run as privileged to access the node's storage. 3 The path to mount the host path share inside the privileged pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 4 The name of the PersistentVolumeClaim object that has been previously created. 4.9. Persistent storage using iSCSI You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI . Some familiarity with Kubernetes and iSCSI is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Important High-availability of storage in the infrastructure is left to the underlying storage provider. Important When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260 . Important Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi . The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS). For more information, see Managing Storage Devices . 4.9.1. Provisioning Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for the iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume API. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4' 4.9.2. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi ) and be matched with a corresponding volume of equal or greater capacity. 4.9.3. iSCSI volume security Users request storage with a PersistentVolumeClaim object. This claim only lives in the user's namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail. 
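For reference, a claim that statically binds to the iscsi-pv volume defined in the provisioning example might look like the following minimal sketch. The claim name iscsi-claim is illustrative and is not part of the original example; the explicit volumeName and the empty storageClassName keep dynamic provisioning from intercepting the request:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iscsi-claim            # illustrative name
spec:
  accessModes:
  - ReadWriteOnce              # must match an access mode offered by the PV
  resources:
    requests:
      storage: 1Gi             # the iscsi-pv example offers 1Gi
  volumeName: iscsi-pv         # bind directly to the persistent volume shown above
  storageClassName: ""         # empty string disables dynamic provisioning for this claim
A pod in the same namespace can then reference the claim through persistentVolumeClaim.claimName, exactly as with the other volume types described in this chapter.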
Each iSCSI LUN must be accessible by all nodes in the cluster. 4.9.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3 1 Enable CHAP authentication of iSCSI discovery. 2 Enable CHAP authentication of iSCSI session. 3 Specify name of Secrets object with user name + password. This Secret object must be available in all namespaces that can use the referenced volume. 4.9.4. iSCSI multipathing For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail. To specify multi-paths in the pod specification, use the portals field. For example: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false 1 Add additional target portals using the portals field. 4.9.5. iSCSI custom initiator IQN Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs. To specify a custom initiator IQN, use initiatorName field. apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false 1 Specify the name of the initiator. 4.10. Persistent storage using local volumes OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface. Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications. Note Local volumes can only be used as a statically created persistent volume. 4.10.1. Installing the Local Storage Operator The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster. Prerequisites Access to the OpenShift Container Platform web console or command-line interface (CLI). Procedure Create the openshift-local-storage project: USD oc adm new-project openshift-local-storage Optional: Allow local storage creation on infrastructure nodes. You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring. 
You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes. To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command: USD oc annotate namespace openshift-local-storage openshift.io/node-selector='' Optional: Allow local storage to run on the management pool of CPUs in a single-node deployment. Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning. To allow the Local Storage Operator to run on the management CPU pool, run the following command: USD oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management' From the UI To install the Local Storage Operator from the web console, follow these steps: Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Type Local Storage into the filter box to locate the Local Storage Operator. Click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-local-storage from the drop-down menu. Adjust the values for Update Channel and Approval Strategy to the values that you want. Click Install . Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console. From the CLI Install the Local Storage Operator from the CLI. Run the following command to get the OpenShift Container Platform major and minor version. It is required for the channel value in the next step. USD OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | \ grep -o '[0-9]*[.][0-9]*' | head -1) Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml : Example openshift-local-storage.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: "USD{OC_VERSION}" installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace 1 The user approval policy for an install plan. Create the Local Storage Operator object by entering the following command: USD oc apply -f openshift-local-storage.yaml At this point, the Operator Lifecycle Manager (OLM) is aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verify local storage installation by checking that all pods and the Local Storage Operator have been created: Check that all the required pods have been created: USD oc -n openshift-local-storage get pods Example output NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project: USD oc get csvs -n openshift-local-storage Example output NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded After all checks have passed, the Local Storage Operator is installed successfully. 4.10.2.
Provisioning local volumes by using the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Prerequisites The Local Storage Operator is installed. You have a local disk that meets the following conditions: It is attached to a node. It is not mounted. It does not contain partitions. Procedure Create the local volume resource. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs). Example: Filesystem apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: "local-sc" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 4 The volume mode, either Filesystem or Block , that defines the type of local volumes. 5 The file system that is created when the local volume is mounted for the first time. 6 The path containing a list of local storage devices to choose from. 7 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as /dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note A raw block volume ( volumeMode: block ) is not formatted with a file system. You should use this mode only if any application running on the pod can use raw block devices. Example: Block apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: "localblock-sc" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. 4 The volume mode, either Filesystem or Block , that defines the type of local volumes. 5 The path containing a list of local storage devices to choose from. 
6 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <local-volume>.yaml Verify that the provisioner was created and that the corresponding daemon sets were created: USD oc get all -n openshift-local-storage Example output NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid. Verify that the persistent volumes were created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Important Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation. 4.10.3. Provisioning local volumes without the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Important Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs. Prerequisites Local disks are attached to the OpenShift Container Platform nodes. Procedure Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml , with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple PVs. example-pv-filesystem.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from, or a directory. 
You can only specify a directory with Filesystem volumeMode . Note A raw block volume ( volumeMode: block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. example-pv-block.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from. Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <example-pv>.yaml Verify that the local PV was created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h 4.10.4. Creating the local volume persistent volume claim Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod. Prerequisites Persistent volumes have been created using the local volume provisioner. Procedure Create the PVC using the corresponding storage class: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4 1 Name of the PVC. 2 The type of the PVC. Defaults to Filesystem . 3 The amount of storage available to the PVC. 4 Name of the storage class required by the claim. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pvc>.yaml 4.10.5. Attach the local claim After a local volume has been mapped to a persistent volume claim it can be specified inside of a resource. Prerequisites A persistent volume claim exists in the same namespace. Procedure Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod: apiVersion: v1 kind: Pod spec: ... containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: localpvc persistentVolumeClaim: claimName: local-pvc-name 3 1 The name of the volume to mount. 2 The path inside the pod where the volume is mounted. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the existing persistent volume claim to use. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pod>.yaml 4.10.6. Automating discovery and provisioning for local storage devices The Local Storage Operator automates local storage discovery and provisioning. 
With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices. Important Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. However, automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on bare metal. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices. Warning Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Prerequisites You have cluster administrator permissions. You have installed the Local Storage Operator. You have attached local disks to OpenShift Container Platform nodes. You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI). Procedure To enable automatic discovery of local devices from the web console: In the Administrator perspective, navigate to Operators Installed Operators and click on the Local Volume Discovery tab. Click Create Local Volume Discovery . Select either All nodes or Select nodes , depending on whether you want to discover available disks on all or specific nodes. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Click Create . A local volume discovery instance named auto-discover-devices is displayed. To display a continuous list of available devices on a node: Log in to the OpenShift Container Platform web console. Navigate to Compute Nodes . Click the node name that you want to open. The "Node Details" page is displayed. Select the Disks tab to display the list of the selected devices. The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode. To automatically provision local volumes for the discovered devices from the web console: Navigate to Operators Installed Operators and select Local Storage from the list of Operators. Select Local Volume Set Create Local Volume Set . Enter a volume set name and a storage class name. Choose All nodes or Select nodes to apply filters accordingly. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create . A message displays after several minutes, indicating that the "Operator reconciled successfully." 
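The web console creates the discovery resource named auto-discover-devices on your behalf. If you prefer to create the discovery resource from the CLI as well, a LocalVolumeDiscovery object along the lines of the following sketch can be applied. The kind, API group, and nodeSelector layout shown here are assumed to mirror the LocalVolumeSet example that follows; verify the exact schema in your cluster with oc explain localvolumediscovery before relying on it:
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices        # same name that the web console uses
  namespace: openshift-local-storage # namespace where the Local Storage Operator runs
spec:
  nodeSelector:                      # optional; omit to discover devices on all worker nodes
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-0                   # illustrative node names
        - worker-1
Apply the file with oc apply -f <file_name>.yaml and confirm the resource with oc get localvolumediscovery -n openshift-local-storage.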
Alternatively, to provision local volumes for the discovered devices from the CLI: Create an object YAML file to define the local volume set, such as local-volume-set.yaml , as shown in the following example: apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM 1 Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 2 When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices. Create the local volume set object: USD oc apply -f local-volume-set.yaml Verify that the local persistent volumes were dynamically provisioned based on the storage class: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m Note Results are deleted after they are removed from the node. Symlinks must be manually removed. 4.10.7. Using tolerations with Local Storage Operator pods Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes. You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node. Important Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect . An operator allows you to leave one of these parameters empty. Prerequisites The Local Storage Operator is installed. Local disks are attached to OpenShift Container Platform nodes with a taint. Tainted nodes are expected to provision local storage. Procedure To configure local volumes for scheduling on tainted nodes: Modify the YAML file that defines the Pod and add the LocalVolume spec, as shown in the following example: apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: "localstorage" 3 storageClassDevices: - storageClassName: "localblock-sc" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg 1 Specify the key that you added to the node. 2 Specify the Equal operator to require the key / value parameters to match. If operator is Exists , the system checks that the key exists and ignores the value. If operator is Equal , then the key and value must match. 
3 Specify the value local of the tainted node. 4 The volume mode, either Filesystem or Block , defining the type of the local volumes. 5 The path containing a list of local storage devices to choose from. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example: spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints. 4.10.8. Local Storage Operator Metrics OpenShift Container Platform provides the following metrics for the Local Storage Operator: lso_discovery_disk_count : total number of discovered devices on each node lso_lvset_provisioned_PV_count : total number of PVs created by LocalVolumeSet objects lso_lvset_unmatched_disk_count : total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria lso_lvset_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolumeSet object criteria lso_lv_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolume object criteria lso_lv_provisioned_PV_count : total number of provisioned PVs for LocalVolume To use these metrics, be sure to: Enable support for monitoring when installing the Local Storage Operator. When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace. For more information about metrics, see Managing metrics . 4.10.9. Deleting the Local Storage Operator resources 4.10.9.1. Removing a local volume or local volume set Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed. Note The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource. Prerequisites The persistent volume must be in a Released or Available state. Warning Deleting a persistent volume that is still in use can result in data loss or corruption. Procedure Edit the previously created local volume to remove any unwanted disks. Edit the cluster resource: USD oc edit localvolume <name> -n openshift-local-storage Navigate to the lines under devicePaths , and delete any representing unwanted disks. Delete any persistent volumes created. USD oc delete pv <pv-name> Delete any symlinks on the node. Warning The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability. Create a debug pod on the node: USD oc debug node/<node-name> Change your root directory to /host : USD chroot /host Navigate to the directory containing the local volume symlinks. USD cd /mnt/openshift-local-storage/<sc-name> 1 1 The name of the storage class used to create the local volumes. Delete the symlink belonging to the removed device. USD rm <symlink> 4.10.9.2. Uninstalling the Local Storage Operator To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project. 
Warning Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator's removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources. Prerequisites Access to the OpenShift Container Platform web console. Procedure Delete any local volume resources installed in the project, such as localvolume , localvolumeset , and localvolumediscovery : USD oc delete localvolume --all --all-namespaces USD oc delete localvolumeset --all --all-namespaces USD oc delete localvolumediscovery --all --all-namespaces Uninstall the Local Storage Operator from the web console. Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Type Local Storage into the filter box to locate the Local Storage Operator. Click the Options menu at the end of the Local Storage Operator. Click Uninstall Operator . Click Remove in the window that appears. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command: USD oc delete pv <pv-name> Delete the openshift-local-storage project: USD oc delete project openshift-local-storage 4.11. Persistent storage using NFS OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts. Additional resources Network File System (NFS) 4.11.1. Provisioning Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is required. Procedure Create an object definition for the PV: apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7 1 The name of the volume. This is the PV identity in various oc <command> pod commands. 2 The amount of storage allocated to this volume. 3 Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes . 4 The volume type being used, in this case the nfs plugin. 5 The path that is exported by the NFS server. 6 The hostname or IP address of the NFS server. 7 The reclaim policy for the PV. This defines what happens to a volume when released. Note Each NFS volume must be mountable by all schedulable nodes in the cluster. Verify that the PV was created: USD oc get pv Example output NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s Create a persistent volume claim that binds to the new PV: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: "" 1 The access modes do not enforce security, but rather act as labels to match a PV to a PVC. 
2 This claim looks for PVs offering 5Gi or greater capacity. Verify that the persistent volume claim was created: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m 4.11.2. Enforcing disk quotas You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume's server and path is up to the administrator. Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.11.3. NFS volume security This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux. Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition. The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plugin mounts the container's NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior. As an example, if the target NFS directory appears on the NFS server as: USD ls -lZ /opt/nfs -d Example output drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs USD id nfsnobody Example output uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody) Then the container must match SELinux labels, and either run with a UID of 65534 , the nfsnobody owner, or with 5555 in its supplemental groups to access the directory. Note The owner ID of 65534 is used as an example. Even though NFS's root_squash maps root , uid 0 , to nfsnobody , uid 65534 , NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports. 4.11.3.1. Group IDs The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod. Note To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs. Because the group ID on the example target NFS directory is 5555 , the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example: spec: containers: - name: ... securityContext: 1 supplementalGroups: [5555] 2 1 securityContext must be defined at the pod level, not under a specific container. 2 An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated. Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny , meaning that any supplied group ID is accepted without range checking. As a result, the above pod passes admissions and is launched. 
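Putting these pieces together, a minimal pod that mounts the nfs-claim1 claim from the provisioning example and supplies the supplemental group might look like the following sketch. The pod name, mount path, and image are illustrative (the image is reused from the hostPath example earlier in this chapter) and are not part of the original example:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod               # illustrative name
spec:
  securityContext:
    supplementalGroups: [5555]     # group that owns the exported NFS directory
  containers:
  - name: test-container
    image: registry.access.redhat.com/ubi8/ubi
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: nfs-vol
      mountPath: /mnt/nfs          # never mount at the container root
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-claim1        # the claim created in the provisioning section
Because the restricted SCC uses the RunAsAny strategy for supplemental groups, this pod is admitted as described above.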
However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.11.3.2. User IDs User IDs can be defined in the container image or in the Pod definition. Note It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs. In the example target NFS directory shown above, the container needs its UID set to 65534 , ignoring group IDs for the moment, so the following can be added to the Pod definition: spec: containers: 1 - name: ... securityContext: runAsUser: 65534 2 1 Pods contain a securityContext definition specific to each container and a pod's securityContext which applies to all containers defined in the pod. 2 65534 is the nfsnobody user. Assuming that the project is default and the SCC is restricted , the user ID of 65534 as requested by the pod is not allowed. Therefore, the pod fails for the following reasons: It requests 65534 as its user ID. All SCCs available to the pod are examined to see which SCC allows a user ID of 65534 . While all policies of the SCCs are checked, the focus here is on user ID. Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required. 65534 is not included in the SCC or project's user ID range. It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.11.3.3. SELinux Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default. For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. You will need to enable the correct SELinux permissions by using the following procedure. Prerequisites The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean. Procedure Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots. # setsebool -P virt_use_nfs 1 4.11.3.4. Export settings To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions: Every export must be exported using the following format: /<example_fs> *(rw,root_squash) The firewall must be configured to allow traffic to the mount point. For NFSv4, configure the default port 2049 ( nfs ). NFSv4 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT For NFSv3, there are three ports to configure: 2049 ( nfs ), 20048 ( mountd ), and 111 ( portmapper ).
NFSv3 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or supply the pod group access using supplementalGroups , as shown in the group IDs above. 4.11.4. Reclaiming resources NFS implements the OpenShift Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume. By default, PVs are set to Retain . Once the PVC bound to a PV is deleted and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original. For example, the administrator creates a PV named nfs1 : apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" The user creates PVC1 , which binds to nfs1 . The user then deletes PVC1 , releasing its claim to nfs1 . This results in nfs1 being Released . If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name: apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss. 4.11.5. Additional configuration and troubleshooting Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply: NFSv4 mount incorrectly shows all files with ownership of nobody:nobody Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS server. See this Red Hat Solution . Disabling ID mapping on NFSv4 On both the NFS client and server, run: # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping 4.12. Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. Red Hat OpenShift Data Foundation provides its own documentation library. The complete set of Red Hat OpenShift Data Foundation documentation identified below is available in the Product Documentation for Red Hat OpenShift Data Foundation 4.10 . Important OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide . If you are looking for Red Hat OpenShift Data Foundation information about...
See the following Red Hat OpenShift Data Foundation documentation: Planning What's new, known issues, notable bug fixes, and Technology Previews Red Hat OpenShift Data Foundation 4.10 Release Notes Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations Planning your Red Hat OpenShift Data Foundation 4.10 deployment Deploying Deploying Red Hat OpenShift Data Foundation using Amazon Web Services for local or cloud storage Deploying OpenShift Data Foundation 4.10 using Amazon Web Services Deploying Red Hat OpenShift Data Foundation to local storage on bare metal infrastructure Deploying OpenShift Data Foundation 4.10 using bare metal infrastructure Deploying Red Hat OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster Deploying OpenShift Data Foundation 4.10 in external mode Deploying and managing Red Hat OpenShift Data Foundation on existing Google Cloud clusters Deploying and managing OpenShift Data Foundation 4.10 using Google Cloud Deploying Red Hat OpenShift Data Foundation to use local storage on IBM Z infrastructure Deploying OpenShift Data Foundation using IBM Z Deploying Red Hat OpenShift Data Foundation on IBM Power Deploying OpenShift Data Foundation using IBM Power Deploying Red Hat OpenShift Data Foundation on IBM Cloud Deploying OpenShift Data Foundation using IBM Cloud Deploying and managing Red Hat OpenShift Data Foundation on Red Hat OpenStack Platform (RHOSP) Deploying and managing OpenShift Data Foundation 4.10 using Red Hat OpenStack Platform Deploying and managing Red Hat OpenShift Data Foundation on Red Hat Virtualization (RHV) Deploying and managing OpenShift Data Foundation 4.10 using Red Hat Virtualization Platform Deploying Red Hat OpenShift Data Foundation on VMware vSphere clusters Deploying OpenShift Data Foundation 4.10 on VMware vSphere Updating Red Hat OpenShift Data Foundation to the latest version Updating OpenShift Data Foundation Managing Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone Managing and allocating resources Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa) Managing hybrid and multicloud resources Safely replacing storage devices for Red Hat OpenShift Data Foundation Replacing devices Safely replacing a node in a Red Hat OpenShift Data Foundation cluster Replacing nodes Scaling operations in Red Hat OpenShift Data Foundation Scaling storage Monitoring a Red Hat OpenShift Data Foundation 4.10 cluster Monitoring OpenShift Data Foundation 4.10 Troubleshooting errors and issues Troubleshooting OpenShift Data Foundation 4.10 Migrating your OpenShift Container Platform cluster from version 3 to version 4 Migration Toolkit for Containers 4.13. Persistent storage using VMware vSphere volumes OpenShift Container Platform allows use of VMware vSphere's Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed. VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image. Note OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. 
Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision vSphere storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Additional resources VMware vSphere 4.13.1. Dynamically provisioning VMware vSphere volumes Dynamically provisioning VMware vSphere volumes is the recommended method. 4.13.2. Prerequisites An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support. You can use either of the following procedures to dynamically provision these volumes using the default storage class. 4.13.2.1. Dynamically provisioning VMware vSphere volumes using the UI OpenShift Container Platform installs a default storage class, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the thin storage class. Enter a unique name for the storage claim. Select the access mode to determine the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.13.2.2. Dynamically provisioning VMware vSphere volumes using the CLI OpenShift Container Platform installs a default StorageClass, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure (CLI) You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml , with the following contents: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. 
Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml 4.13.3. Statically provisioning VMware vSphere volumes To statically provision VMware vSphere volumes you must create the virtual machine disks for reference by the persistent volume framework. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods: Create using vmkfstools . Access ESX through Secure Shell (SSH) and then use following command to create a VMDK volume: USD vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk Create using vmware-diskmanager : USD shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk Create a persistent volume that references the VMDKs. Create a file, pv1.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: "[datastore1] volumes/myDisk" 4 fsType: ext4 5 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastore. 4 The existing VMDK volume to use. If you used vmkfstools , you must enclose the datastore name in square brackets, [] , in the volume definition, as shown previously. 5 The file system type to mount. For example, ext4, xfs, or other file systems. Important Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure. Create the PersistentVolume object from the file: USD oc create -f pv1.yaml Create a persistent volume claim that maps to the persistent volume you created in the step. Create a file, pvc1.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: "1Gi" 3 volumeName: pv1 4 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. 4 The name of the existing persistent volume. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc1.yaml 4.13.3.1. Formatting VMware vSphere volumes Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system. Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs. | [
"cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF",
"cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF",
"cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF",
"oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2",
"apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false",
"apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4",
"apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3",
"apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5",
"oc create -f cinder-persistentvolume.yaml",
"oc create serviceaccount <service_account>",
"oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4",
"{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }",
"{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar",
"\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"",
"apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''",
"apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4",
"oc create -f pv.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual",
"oc create -f pvc.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false",
"oc adm new-project openshift-local-storage",
"oc annotate namespace openshift-local-storage openshift.io/node-selector=''",
"oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'",
"OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"USD{OC_VERSION}\" installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f openshift-local-storage.yaml",
"oc -n openshift-local-storage get pods",
"NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m",
"oc get csvs -n openshift-local-storage",
"NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: \"localblock-sc\" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6",
"oc create -f <local-volume>.yaml",
"oc get all -n openshift-local-storage",
"NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m",
"apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node",
"apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node",
"oc create -f <example-pv>.yaml",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4",
"oc create -f <local-pvc>.yaml",
"apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: localpvc persistentVolumeClaim: claimName: local-pvc-name 3",
"oc create -f <local-pod>.yaml",
"apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM",
"oc apply -f local-volume-set.yaml",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"localblock-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg",
"spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists",
"oc edit localvolume <name> -n openshift-local-storage",
"oc delete pv <pv-name>",
"oc debug node/<node-name>",
"chroot /host",
"cd /mnt/openshift-local-storage/<sc-name> 1",
"rm <symlink>",
"oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces",
"oc delete pv <pv-name>",
"oc delete project openshift-local-storage",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7",
"oc get pv",
"NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m",
"ls -lZ /opt/nfs -d",
"drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs",
"id nfsnobody",
"uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)",
"spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2",
"spec: containers: 1 - name: securityContext: runAsUser: 65534 2",
"setsebool -P virt_use_nfs 1",
"/<example_fs> *(rw,root_squash)",
"iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT",
"apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"",
"apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"",
"echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3",
"oc create -f pvc.yaml",
"vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk",
"shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5",
"oc create -f pv1.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4",
"oc create -f pvc1.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/storage/configuring-persistent-storage |
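The NFS permission guidance above introduces the supplemental group ( 5555 ), the nfsnobody UID, and the pv0001 / nfs-claim1 pair in separate fragments. The following is a minimal sketch of a pod that ties them together by mounting the claim with the supplemental group; the pod name, container name, and mount path are illustrative and not taken from the source:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-consumer            # illustrative name
spec:
  securityContext:
    supplementalGroups: [5555]  # group that owns the exported NFS directory
  containers:
  - name: app                   # illustrative container name
    image: registry.access.redhat.com/ubi8/ubi
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - mountPath: /opt/nfs       # illustrative mount point
      name: nfs-volume
  volumes:
  - name: nfs-volume
    persistentVolumeClaim:
      claimName: nfs-claim1     # the claim bound to pv0001 in the NFS example

Because access is granted through the supplemental group, the container does not also need to run as the export's owning UID.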
Chapter 1. The Image service (glance) | Chapter 1. The Image service (glance) Manage images and storage in Red Hat OpenStack Platform (RHOSP). 1.1. Virtual Machine (VM) image formats A VM image is a file that contains a virtual disk with a bootable operating system installed. VM images are supported in different formats. The following formats are available in Red Hat OpenStack Platform (RHOSP): RAW - Unstructured disk image format. QCOW2 - Disk format supported by QEMU emulator. This format includes QCOW2v3 (sometimes referred to as QCOW3), which requires QEMU 1.1 or higher. ISO - Sector-by-sector copy of the data on a disk, stored in a binary file. AKI - Indicates an Amazon Kernel Image. AMI - Indicates an Amazon Machine Image. ARI - Indicates an Amazon RAMDisk Image. VDI - Disk format supported by VirtualBox VM monitor and the QEMU emulator. VHD - Common disk format used by VM monitors from VMware, VirtualBox, and others. PLOOP - A disk format supported and used by Virtuozzo to run OS containers. OVA - Indicates that what is stored in the Image service (glance) is an OVA tar archive file. DOCKER - Indicates that what is stored in the Image service (glance) is a Docker tar archive of the container file system. Although ISO is not normally considered a VM image format, because ISOs contain bootable file systems with an installed operating system, you use them in the same way as other VM image files. 1.2. Supported Image service back ends The following Image service (glance) back-end scenarios are supported: RADOS Block Device (RBD) is the default back end when you use Ceph. RBD multi-store. Object Storage (swift). The Image service uses the Object Storage type and back end as the default. Block Storage (cinder). NFS Important Although NFS is a supported Image service deployment option, more robust options are available. NFS is not native to the Image service. When you mount an NFS share on the Image service, the Image service does not manage the operation. The Image service writes data to the file system but is unaware that the back end is an NFS share. In this type of deployment, the Image service cannot retry a request if the share fails. This means that when a failure occurs on the back end, the store might enter read-only mode, or it might continue to write data to the local file system, in which case you risk data loss. To recover from this situation, you must ensure that the share is mounted and in sync, and then restart the Image service. For these reasons, Red Hat does not recommend NFS as an Image service back end. However, if you do choose to use NFS as an Image service back end, some of the following best practices can help to mitigate risks: Use a reliable production-grade NFS back end. Ensure that you have a strong and reliable connection between Controller nodes and the NFS back end: Layer 2 (L2) network connectivity is recommended. Include monitoring and alerts for the mounted share. Set underlying file system permissions. Write permissions must be present in the shared file system that you use as a store. Ensure that the user and the group that the glance-api process runs on do not have write permissions on the mount point at the local file system. This means that the process can detect possible mount failure and put the store into read-only mode during a write attempt. 1.3. 
Image signing and verification Image signing and verification protects image integrity and authenticity by enabling deployers to sign images and save the signatures and public key certificates as image properties. Note Image signing and verification is not supported if Nova is using RADOS Block Device (RBD) to store virtual machines disks. For information on image signing and verification, see Validating Image service (glance) images in the Manage Secrets with OpenStack Key Manager guide. 1.4. Image conversion Image conversion converts images by calling the task API while importing an image. As part of the import workflow, a plugin provides the image conversion. You can activate or deactivate this plugin based on the deployment configuration. The deployer needs to specify the preferred format of images for the deployment. Internally, the Image service (glance) receives the bits of the image in a particular format and stores the bits in a temporary location. The Image service triggers the plugin to convert the image to the target format and move the image to a final destination. When the task is finished, the Image service deletes the temporary location. The Image service does not retain the format that was uploaded initially. You can trigger image conversion only when importing an image. It does not run when uploading an image. Use the Image service command-line client for image management. For example: 1.5. Interoperable image import The interoperable image import workflow enables you to import images in two ways: Use the web-download (default) method to import images from a URI. Use the glance-direct method to import images from a local file system. Use the copy-image method to copy an existing image to other Image service (glance) back ends that are in your deployment. Use this import method only if multiple Image service back ends are enabled in your deployment. 1.6. Improving scalability with Image service caching Use the glance-api caching mechanism to store copies of images on Image service (glance) API servers and retrieve them automatically to improve scalability. With Image service caching, glance-api can run on multiple hosts. This means that it does not need to retrieve the same image from back end storage multiple times. Image service caching does not affect any Image service operations. Configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates: Procedure In an environment file, set the value of the GlanceCacheEnabled parameter to true , which automatically sets the flavor value to keystone+cachemanagement in the glance-api.conf heat template: Include the environment file in the openstack overcloud deploy command when you redeploy the overcloud. Optional: Tune the glance_cache_pruner to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes: Adjust the frequency according to your needs to avoid file system full scenarios. Include the following elements when you choose an alternative frequency: The size of the files that you want to cache in your environment. The amount of available file system space. The frequency at which the environment caches images. 1.7. Image pre-caching Red Hat OpenStack Platform (RHOSP) director can pre-cache images as part of the glance-api service. Use the Image service (glance) command-line client for image management. 1.7.1. 
Configuring the default interval for periodic image pre-caching Red Hat OpenStack Platform (RHOSP) director can pre-cache images as part of the glance-api service. The pre-caching periodic job runs every 300 seconds (5 minutes default time) on each controller node where the glance-api service is running. To change the default time, you can set the cache_prefetcher_interval parameter under the Default section in glance-api.conf. Procedure Add a new interval with the ExtraConfig parameter in an environment file on the undercloud according to your requirements: Replace <300> with the number of seconds that you want as an interval to pre-cache images. After you adjust the interval in the environment file in /home/stack/templates/ , log in as the stack user and deploy the configuration: Replace <env_file> with the name of the environment file that contains the ExtraConfig settings that you added. Important If you passed any extra environment files when you created the overcloud, pass them again here by using the -e option to avoid making undesired changes to the overcloud. For more information about the openstack overcloud deploy command, see Deployment command in the Director Installation and Usage guide. 1.7.2. Using a periodic job to pre-cache an image Use a periodic job to pre-cache an image. Prerequisites To use a periodic job to pre-cache an image, you must use the glance-cache-manage command connected directly to the node where the glance_api service is running. Do not use a proxy, which hides the node that answers a service request. Because the undercloud might not have access to the network where the glance_api service is running, run commands on the first overcloud node, which is called controller-0 by default. Complete the following prerequisite procedure to ensure that you run commands from the correct host, have the necessary credentials, and are also running the glance-cache-manage commands from inside the glance-api container. Procedure Log in to the undercloud as the stack user and identify the provisioning IP address of controller-0 : To authenticate to the overcloud, copy the credentials that are stored in /home/stack/overcloudrc , by default, to controller-0 : Connect to controller-0 : On controller-0 as the tripleo-admin user, identify the IP address of the glance_api service . In the following example, the IP address is 172.25.1.105 : Because the glance-cache-manage command is only available in the glance_api container, create a script to exec into that container where the environment variables to authenticate to the overcloud are already set. Create a script called glance_pod.sh in /home/tripleo-admin on controller-0 with the following contents: Source the overcloudrc file and run the glance_pod.sh script to exec into the glance_api container with the necessary environment variables to authenticate to the overcloud Controller node. Use a command such as glance image-list to verify that the container can run authenticated commands against the overcloud. Procedure As the admin user, queue an image to cache: Replace <host_ip> with the IP address of the Controller node where the glance-api container is running. Replace <image_id> with the ID of the image that you want to queue. When you have queued the images that you want to pre-cache, the cache_images periodic job prefetches all queued images concurrently. 
Note Because the image cache is local to each node, if your Red Hat OpenStack Platform is deployed with HA (with 3, 5, or 7 Controllers) then you must specify the host address with the --host option when you run the glance-cache-manage command. Run the following command to view the images in the image cache: Replace <host_ip> with the IP address of the host in your environment. Related information You can use additional glance-cache-manage commands for the following purposes: list-cached to list all images that are currently cached. list-queued to list all images that are currently queued for caching. queue-image to queue an image for caching. delete-cached-image to purge an image from the cache. delete-all-cached-images to remove all images from the cache. delete-queued-image to delete an image from the cache queue. delete-all-queued-images to delete all images from the cache queue. 1.8. Using the Image service API to enable sparse image upload With the Image service (glance) API, you can use sparse image upload to reduce network traffic and save storage space. This feature is particularly useful in distributed compute node (DCN) environments. With a sparse image file, the Image service does not write null byte sequences. The Image service writes data with a given offset. Storage back ends interpret these offsets as null bytes that do not actually consume storage space. Use the Image service command-line client for image management. Limitations Sparse image upload is supported only with Ceph RADOS Block Device (RBD). Sparse image upload is not supported for file systems. Sparseness is not maintained during the transfer between the client and the Image service API. The image is sparsed at the Image service API level. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment uses RBD for the Image service back end. Procedure Log in to the undercloud node as the stack user. Source the stackrc credentials file: Create an environment file with the following content: Add your new environment file to the stack with your other environment files and deploy the overcloud: For more information about uploading images, see Uploading an image . Verification You can import an image and check its size to verify sparse image upload. The following procedure uses example commands. Replace the values with those from your environment where appropriate. Download the image file locally: Replace <file_location> with the location of the file. Replace <file_name> with the name of the file. For example: Check the disk size and the virtual size of the image to be uploaded: For example: Import the image: Record the image ID. It is required in a subsequent step. Verify that the image is imported and in an active state: From a Ceph Storage node, verify that the size of the image is less than the virtual size from the output of step 1: Optional: You can confirm that rbd_thin_provisioning is configured in the Image service configuration file on the Controller nodes: Use SSH to access a Controller node: Confirm that rbd_thin_provisioning equals True on that Controller node: 1.9. Secure metadef APIs In Red Hat OpenStack Platform (RHOSP), users can define key value pairs and tag metadata with metadata definition (metadef) APIs. Currently, there is no limit on the number of metadef namespaces, objects, properties, resources, or tags that users can create. Metadef APIs can leak information to unauthorized users. 
A malicious user can exploit the lack of restrictions and fill the Image service (glance) database with unlimited resources, which can create a Denial of Service (DoS) style attack. Image service policies control metadef APIs. However, the default policy setting for metadef APIs allows all users to create or read the metadef information. Because metadef resources are not isolated to the owner, metadef resources with potentially sensitive names, such as internal infrastructure details or customer names, can expose that information to malicious users. 1.9.1. Configuring a policy to restrict metadef APIs To make the Image service (glance) more secure, restrict metadef modification APIs to admin-only access by default in your Red Hat OpenStack Platform (RHOSP) deployments. Procedure As a cloud administrator, create a separate heat template environment file, such as lock-down-glance-metadef-api.yaml , to contain policy overrides for the Image service metadef API: Include the environment file that contains the policy overrides in the deployment command with the -e option when you deploy the overcloud: 1.9.2. Enabling metadef APIs If you previously restricted metadata definition (metadef) APIs or want to relax the new defaults, you can override metadef modification policies to allow users to update their respective resources. Important Cloud administrators with users who depend on write access to the metadef APIs can make those APIs accessible to all users. In this type of configuration, however, there is the potential to unintentionally leak sensitive resource names, such as customer names and internal projects. Administrators must audit their systems to identify previously created resources that might be vulnerable even if only read access is enabled for all users. Procedure As a cloud administrator, log in to the undercloud and create a file for policy overrides. For example: Configure the policy override file to allow metadef API read-write access to all users: Note You must configure all metadef policies to use rule:metadeta_default . Include the new policy file in the deployment command with the -e option when you deploy the overcloud: | [
"glance image-create-via-import --disk-format qcow2 --container-format bare --name NAME --visibility public --import-method web-download --uri http://server/image.qcow2",
"parameter_defaults: GlanceCacheEnabled: true",
"parameter_defaults: ControllerExtraConfig: glance::cache::pruner::minute: '*/5'",
"parameter_defaults: ControllerExtraConfig: glance::config::glance_api_config: DEFAULT/cache_prefetcher_interval: value: '<300>'",
"openstack overcloud deploy --templates -e /home/stack/templates/<env_file>.yaml",
"(undercloud) [stack@site-undercloud-0 ~]USD openstack server list -f value -c Name -c Networks | grep controller overcloud-controller-1 ctlplane=192.168.24.40 overcloud-controller-2 ctlplane=192.168.24.13 overcloud-controller-0 ctlplane=192.168.24.71 (undercloud) [stack@site-undercloud-0 ~]USD",
"scp ~/overcloudrc [email protected]:/home/tripleo-admin/",
"ssh [email protected]",
"(overcloud) [root@controller-0 ~]# grep -A 10 '^listen glance_api' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg listen glance_api server central-controller0-0.internalapi.redhat.local 172.25.1.105:9292 check fall 5 inter 2000 rise 2",
"sudo podman exec -ti -e NOVA_VERSION=USDNOVA_VERSION -e COMPUTE_API_VERSION=USDCOMPUTE_API_VERSION -e OS_USERNAME=USDOS_USERNAME -e OS_PROJECT_NAME=USDOS_PROJECT_NAME -e OS_USER_DOMAIN_NAME=USDOS_USER_DOMAIN_NAME -e OS_PROJECT_DOMAIN_NAME=USDOS_PROJECT_DOMAIN_NAME -e OS_NO_CACHE=USDOS_NO_CACHE -e OS_CLOUDNAME=USDOS_CLOUDNAME -e no_proxy=USDno_proxy -e OS_AUTH_TYPE=USDOS_AUTH_TYPE -e OS_PASSWORD=USDOS_PASSWORD -e OS_AUTH_URL=USDOS_AUTH_URL -e OS_IDENTITY_API_VERSION=USDOS_IDENTITY_API_VERSION -e OS_COMPUTE_API_VERSION=USDOS_COMPUTE_API_VERSION -e OS_IMAGE_API_VERSION=USDOS_IMAGE_API_VERSION -e OS_VOLUME_API_VERSION=USDOS_VOLUME_API_VERSION -e OS_REGION_NAME=USDOS_REGION_NAME glance_api /bin/bash",
"[tripleo-admin@controller-0 ~]USD source overcloudrc (overcloudrc) [tripleo-admin@central-controller-0 ~]USD bash glance_pod.sh ()[glance@controller-0 /]USD",
"()[glance@controller-0 /]USD glance image-list +--------------------------------------+----------------------------------+ | ID | Name | +--------------------------------------+----------------------------------+ | ad2f8daf-56f3-4e10-b5dc-d28d3a81f659 | cirros-0.4.0-x86_64-disk.img | +--------------------------------------+----------------------------------+ ()[glance@controller-0 /]USD",
"glance-cache-manage --host=<host_ip> queue-image <image_id>",
"glance-cache-manage --host=<host_ip> list-cached",
"source stackrc",
"parameter_defaults: GlanceSparseUploadEnabled: true",
"openstack overcloud deploy --templates ... -e <existing_overcloud_environment_files> -e <new_environment_file>.yaml",
"wget <file_location>/<file_name>",
"wget https://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud-1508.qcow2",
"qemu-img info <file_name>",
"qemu-img info CentOS-6-x86_64-GenericCloud-1508.qcow2 image: CentOS-6-x86_64-GenericCloud-1508.qcow2 file format: qcow2 virtual size: 8 GiB (8589934592 bytes) disk size: 1.09 GiB cluster_size: 65536 Format specific information: compat: 0.10 refcount bits: 1",
"glance image-create-via-import --disk-format qcow2 --container-format bare --name centos_1 --file <file_name>",
"glance image show <image_id>",
"sudo rbd -p images diff <image_id> | awk '{ SUM += USD2 } END { print SUM/1024/1024/1024 \" GB\" }' 1.03906 GB",
"ssh -A -t tripleo-admin@<controller_node_IP_address>",
"sudo podman exec -it glance_api sh -c 'grep ^rbd_thin_provisioning /etc/glance/glance-api.conf'",
"parameter_defaults: GlanceApiPolicies: { glance-metadef_default: { key: 'metadef_default', value: '' }, glance-metadef_admin: { key: 'metadef_admin', value: 'role:admin' }, glance-get_metadef_namespace: { key: 'get_metadef_namespace', value: 'rule:metadef_default' }, glance-get_metadef_namespaces: { key: 'get_metadef_namespaces', value: 'rule:metadef_default' }, glance-modify_metadef_namespace: { key: 'modify_metadef_namespace', value: 'rule:metadef_admin' }, glance-add_metadef_namespace: { key: 'add_metadef_namespace', value: 'rule:metadef_admin' }, glance-delete_metadef_namespace: { key: 'delete_metadef_namespace', value: 'rule:metadef_admin' }, glance-get_metadef_object: { key: 'get_metadef_object', value: 'rule:metadef_default' }, glance-get_metadef_objects: { key: 'get_metadef_objects', value: 'rule:metadef_default' }, glance-modify_metadef_object: { key: 'modify_metadef_object', value: 'rule:metadef_admin' }, glance-add_metadef_object: { key: 'add_metadef_object', value: 'rule:metadef_admin' }, glance-delete_metadef_object: { key: 'delete_metadef_object', value: 'rule:metadef_admin' }, glance-list_metadef_resource_types: { key: 'list_metadef_resource_types', value: 'rule:metadef_default' }, glance-get_metadef_resource_type: { key: 'get_metadef_resource_type', value: 'rule:metadef_default' }, glance-add_metadef_resource_type_association: { key: 'add_metadef_resource_type_association', value: 'rule:metadef_admin' }, glance-remove_metadef_resource_type_association: { key: 'remove_metadef_resource_type_association', value: 'rule:metadef_admin' }, glance-get_metadef_property: { key: 'get_metadef_property', value: 'rule:metadef_default' }, glance-get_metadef_properties: { key: 'get_metadef_properties', value: 'rule:metadef_default' }, glance-modify_metadef_property: { key: 'modify_metadef_property', value: 'rule:metadef_admin' }, glance-add_metadef_property: { key: 'add_metadef_property', value: 'rule:metadef_admin' }, glance-remove_metadef_property: { key: 'remove_metadef_property', value: 'rule:metadef_admin' }, glance-get_metadef_tag: { key: 'get_metadef_tag', value: 'rule:metadef_default' }, glance-get_metadef_tags: { key: 'get_metadef_tags', value: 'rule:metadef_default' }, glance-modify_metadef_tag: { key: 'modify_metadef_tag', value: 'rule:metadef_admin' }, glance-add_metadef_tag: { key: 'add_metadef_tag', value: 'rule:metadef_admin' }, glance-add_metadef_tags: { key: 'add_metadef_tags', value: 'rule:metadef_admin' }, glance-delete_metadef_tag: { key: 'delete_metadef_tag', value: 'rule:metadef_admin' }, glance-delete_metadef_tags: { key: 'delete_metadef_tags', value: 'rule:metadef_admin' } } ...",
"openstack overcloud deploy -e lock-down-glance-metadef-api.yaml",
"cat open-up-glance-api-metadef.yaml",
"GlanceApiPolicies: { glance-metadef_default: { key: 'metadef_default', value: '' }, glance-get_metadef_namespace: { key: 'get_metadef_namespace', value: 'rule:metadef_default' }, glance-get_metadef_namespaces: { key: 'get_metadef_namespaces', value: 'rule:metadef_default' }, glance-modify_metadef_namespace: { key: 'modify_metadef_namespace', value: 'rule:metadef_default' }, glance-add_metadef_namespace: { key: 'add_metadef_namespace', value: 'rule:metadef_default' }, glance-delete_metadef_namespace: { key: 'delete_metadef_namespace', value: 'rule:metadef_default' }, glance-get_metadef_object: { key: 'get_metadef_object', value: 'rule:metadef_default' }, glance-get_metadef_objects: { key: 'get_metadef_objects', value: 'rule:metadef_default' }, glance-modify_metadef_object: { key: 'modify_metadef_object', value: 'rule:metadef_default' }, glance-add_metadef_object: { key: 'add_metadef_object', value: 'rule:metadef_default' }, glance-delete_metadef_object: { key: 'delete_metadef_object', value: 'rule:metadef_default' }, glance-list_metadef_resource_types: { key: 'list_metadef_resource_types', value: 'rule:metadef_default' }, glance-get_metadef_resource_type: { key: 'get_metadef_resource_type', value: 'rule:metadef_default' }, glance-add_metadef_resource_type_association: { key: 'add_metadef_resource_type_association', value: 'rule:metadef_default' }, glance-remove_metadef_resource_type_association: { key: 'remove_metadef_resource_type_association', value: 'rule:metadef_default' }, glance-get_metadef_property: { key: 'get_metadef_property', value: 'rule:metadef_default' }, glance-get_metadef_properties: { key: 'get_metadef_properties', value: 'rule:metadef_default' }, glance-modify_metadef_property: { key: 'modify_metadef_property', value: 'rule:metadef_default' }, glance-add_metadef_property: { key: 'add_metadef_property', value: 'rule:metadef_default' }, glance-remove_metadef_property: { key: 'remove_metadef_property', value: 'rule:metadef_default' }, glance-get_metadef_tag: { key: 'get_metadef_tag', value: 'rule:metadef_default' }, glance-get_metadef_tags: { key: 'get_metadef_tags', value: 'rule:metadef_default' }, glance-modify_metadef_tag: { key: 'modify_metadef_tag', value: 'rule:metadef_default' }, glance-add_metadef_tag: { key: 'add_metadef_tag', value: 'rule:metadef_default' }, glance-add_metadef_tags: { key: 'add_metadef_tags', value: 'rule:metadef_default' }, glance-delete_metadef_tag: { key: 'delete_metadef_tag', value: 'rule:metadef_default' }, glance-delete_metadef_tags: { key: 'delete_metadef_tags', value: 'rule:metadef_default' } }",
"openstack overcloud deploy -e open-up-glance-api-metadef.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_images/assembly_image-service_osp |
Chapter 6. ControllerRevision [apps/v1] | Chapter 6. ControllerRevision [apps/v1] Description ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers. Type object Required revision 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data RawExtension Data is the serialized representation of the state. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata revision integer Revision indicates the revision of the state represented by Data. 6.2. API endpoints The following API endpoints are available: /apis/apps/v1/controllerrevisions GET : list or watch objects of kind ControllerRevision /apis/apps/v1/watch/controllerrevisions GET : watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/controllerrevisions DELETE : delete collection of ControllerRevision GET : list or watch objects of kind ControllerRevision POST : create a ControllerRevision /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions GET : watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} DELETE : delete a ControllerRevision GET : read the specified ControllerRevision PATCH : partially update the specified ControllerRevision PUT : replace the specified ControllerRevision /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions/{name} GET : watch changes to an object of kind ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/apps/v1/controllerrevisions HTTP method GET Description list or watch objects of kind ControllerRevision Table 6.1. HTTP responses HTTP code Reponse body 200 - OK ControllerRevisionList schema 401 - Unauthorized Empty 6.2.2. /apis/apps/v1/watch/controllerrevisions HTTP method GET Description watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. Table 6.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. 
/apis/apps/v1/namespaces/{namespace}/controllerrevisions HTTP method DELETE Description delete collection of ControllerRevision Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ControllerRevision Table 6.5. HTTP responses HTTP code Reponse body 200 - OK ControllerRevisionList schema 401 - Unauthorized Empty HTTP method POST Description create a ControllerRevision Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body ControllerRevision schema Table 6.8. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 202 - Accepted ControllerRevision schema 401 - Unauthorized Empty 6.2.4. /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions HTTP method GET Description watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. Table 6.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.5. /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} Table 6.10. Global path parameters Parameter Type Description name string name of the ControllerRevision HTTP method DELETE Description delete a ControllerRevision Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ControllerRevision Table 6.13. 
HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ControllerRevision Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.15. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ControllerRevision Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body ControllerRevision schema Table 6.18. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 401 - Unauthorized Empty 6.2.6. /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions/{name} Table 6.19. Global path parameters Parameter Type Description name string name of the ControllerRevision HTTP method GET Description watch changes to an object of kind ControllerRevision. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.20. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/metadata_apis/controllerrevision-apps-v1 |
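As a practical companion to the ControllerRevision endpoints documented above, the following shell sketch exercises the read, dry-run delete, and raw list operations with the oc client; the namespace my-namespace and the revision name my-app-6b54c8f67d are illustrative placeholders, not values taken from this reference.

# List ControllerRevisions in a namespace (GET .../namespaces/{namespace}/controllerrevisions)
oc get controllerrevisions -n my-namespace

# Read a single ControllerRevision as YAML (GET .../controllerrevisions/{name})
oc get controllerrevision my-app-6b54c8f67d -n my-namespace -o yaml

# Server-side dry run of a delete (DELETE with dryRun=All), so nothing is persisted
oc delete controllerrevision my-app-6b54c8f67d -n my-namespace --dry-run=server

# The same list call issued directly against the raw API path shown above
oc get --raw /apis/apps/v1/namespaces/my-namespace/controllerrevisions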
Chapter 8. Registering clients to the load balancer | Chapter 8. Registering clients to the load balancer To balance the load of network traffic from clients, you must register the clients to the load balancer. To register clients, proceed with one of the following procedures: Section 8.1, "Registering clients using host registration" Section 8.2, " (Deprecated) Registering clients using the bootstrap script" 8.1. Registering clients using host registration You can register hosts with Satellite using the host registration feature in the Satellite web UI, Hammer CLI, or the Satellite API. For more information, see Registering Hosts in Managing hosts . Prerequisites You have set the load balancer for host registration. For more information, see Chapter 5, Setting the load balancer for host registration . Procedure In the Satellite web UI, navigate to Hosts > Register Host . From the Capsule dropdown list, select the load balancer. Select Force to register a host that has been previously registered to a Capsule Server. From the Activation Keys list, select the activation keys to assign to your host. Click Generate to create the registration command. Click on the files icon to copy the command to your clipboard. Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. CLI procedure Generate the host registration command using the Hammer CLI: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Include the --smart-proxy-id My_Capsule_ID option. You can use the ID of any Capsule Server that you configured for host registration load balancing. Satellite will apply the load balancer to the registration command automatically. Include the --force option to register a host that has been previously registered to a Capsule Server. Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. API procedure Generate the host registration command using the Satellite API: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Use an activation key to simplify specifying the environments. For more information, see Managing Activation Keys in Managing content . Include { "smart_proxy_id": My_Capsule_ID } . You can use the ID of any Capsule Server that you configured for host registration load balancing. Satellite will apply the load balancer to the registration command automatically. Include { "force": true } to register a host that has been previously registered to a Capsule Server. To enter a password as a command line argument, use username:password syntax. Keep in mind this can save the password in the shell history. Alternatively, you can use a temporary personal access token instead of a password. To generate a token in the Satellite web UI, navigate to My Account > Personal Access Tokens . Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. 8.2. (Deprecated) Registering clients using the bootstrap script To register clients, enter the following command on the client. You must complete the registration procedure for each client. 
Prerequisites Ensure that you install the bootstrap script on the client and change file permissions of the script to executable. For more information, see Registering Hosts to Red Hat Satellite Using The Bootstrap Script in Managing hosts . Procedure On Red Hat Enterprise Linux 8, enter the following command: 1 Replace <arch> with the client architecture, for example x86 . 2 Include the --force option to register the client that has been previously registered to a standalone Capsule. 3 Include the --puppet-ca-port 8141 option if you use Puppet. On Red Hat Enterprise Linux 7 or 6, enter the following command: 1 Include the --force option to register the client that has been previously registered to a standalone Capsule. 2 Include the --puppet-ca-port 8141 option if you use Puppet. The script prompts for the password corresponding to the Satellite user name you entered with the --login option. | [
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \"",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'",
"/usr/libexec/platform-python bootstrap.py --activationkey=\" My_Activation_Key \" --enablerepos=satellite-client-6-for-rhel-8-<arch>-rpms \\ 1 --force \\ 2 --hostgroup=\" My_Host_Group \" --location=\" My_Location \" --login= admin --organization=\" My_Organization \" --puppet-ca-port 8141 \\ 3 --server loadbalancer.example.com",
"python bootstrap.py --login= admin --activationkey=\" My_Activation_Key \" --enablerepos=rhel-7-server-satellite-client-6-rpms --force \\ 1 --hostgroup=\" My_Host_Group \" --location=\" My_Location \" --organization=\" My_Organization \" --puppet-ca-port 8141 \\ 2 --server loadbalancer.example.com"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/configuring_capsules_with_a_load_balancer/Registering_Clients_to_the_Load_Balancer_load-balancing |
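The sample API calls above stop at activation keys, while the procedure also tells you to include smart_proxy_id and force when registering through the load balancer. A hedged sketch combining them in a single request follows; the capsule ID 3 is an assumption, and you would substitute the ID of any Capsule Server configured for host registration load balancing.

# Generate a registration command that goes through the load balancer and forces re-registration
curl -X POST https://satellite.example.com/api/registration_commands \
  --user "My_User_Name" \
  -H 'Content-Type: application/json' \
  -d '{ "registration_command": { "activation_keys": ["My_Activation_Key"], "smart_proxy_id": 3, "force": true } }'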
probe::udp.disconnect | probe::udp.disconnect Name probe::udp.disconnect - Fires when a process requests a UDP disconnection Synopsis udp.disconnect Values daddr A string representing the destination IP address sock Network socket used by the process saddr A string representing the source IP address sport UDP source port flags Flags (e.g. FIN, etc) dport UDP destination port name The name of this probe family IP address family Context The process which requests a UDP disconnection | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-udp-disconnect |
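To make the probe values above concrete, here is a minimal SystemTap one-liner that prints the endpoints each time the probe fires. It assumes the systemtap package and matching kernel debuginfo are installed, and that sport and dport are numeric as in the related udp probes; press Ctrl+C to stop the trace.

# Trace UDP disconnect requests system-wide
stap -e 'probe udp.disconnect {
  printf("%s: %s:%d -> %s:%d\n", name, saddr, sport, daddr, dport)
}'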
Chapter 7. opm CLI | Chapter 7. opm CLI 7.1. Installing the opm CLI 7.1.1. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. Additional resources See Operator Framework packaging format for more information about the bundle format. To create a bundle image using the Operator SDK, see Working with bundle images . 7.1.2. Installing the opm CLI You can install the opm CLI tool on your Linux, macOS, or Windows workstation. Prerequisites For Linux, you must provide the following packages: podman version 1.9.3+ (version 2.0+ recommended) glibc version 2.28+ Procedure Navigate to the OpenShift mirror site and download the latest version of the tarball that matches your operating system. Important There is currently a known issue where the version of the opm CLI tool released with OpenShift Container Platform 4.15 does not support RHEL 8. As a workaround, RHEL 8 users can navigate to the OpenShift mirror site and download the latest version of the tarball released with OpenShift Container Platform 4.14. Unpack the archive. For Linux or macOS: USD tar xvf <file> For Windows, unzip the archive with a ZIP program. Place the file anywhere in your PATH . For Linux or macOS: Check your PATH : USD echo USDPATH Move the file. For example: USD sudo mv ./opm /usr/local/bin/ For Windows: Check your PATH : C:\> path Move the file: C:\> move opm.exe <directory> Verification After you install the opm CLI, verify that it is available: USD opm version 7.1.3. Additional resources See Managing custom catalogs for opm procedures including creating, updating, and pruning catalogs. 7.2. opm CLI reference The opm command-line interface (CLI) is a tool for creating and maintaining Operator catalogs. opm CLI syntax USD opm <command> [<subcommand>] [<argument>] [<flags>] Warning The opm CLI is not forward compatible. The version of the opm CLI used to generate catalog content must be earlier than or equal to the version used to serve the content on a cluster. Table 7.1. Global flags Flag Description -skip-tls-verify Skip TLS certificate verification for container image registries while pulling bundles or indexes. --use-http When you pull bundles, use plain HTTP for container image registries. Important The SQLite-based catalog format, including the related CLI commands, is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 7.2.1. generate Generate various artifacts for declarative config indexes. 
Command syntax USD opm generate <subcommand> [<flags>] Table 7.2. generate subcommands Subcommand Description dockerfile Generate a Dockerfile for a declarative config index. Table 7.3. generate flags Flags Description -h , --help Help for generate. 7.2.1.1. dockerfile Generate a Dockerfile for a declarative config index. Important This command creates a Dockerfile in the same directory as the <dcRootDir> (named <dcDirName>.Dockerfile ) that is used to build the index. If a Dockerfile with the same name already exists, this command fails. When specifying extra labels, if duplicate keys exist, only the last value of each duplicate key gets added to the generated Dockerfile. Command syntax USD opm generate dockerfile <dcRootDir> [<flags>] Table 7.4. generate dockerfile flags Flag Description -i, --binary-image (string) Image in which to build catalog. The default value is quay.io/operator-framework/opm:latest . -l , --extra-labels (string) Extra labels to include in the generated Dockerfile. Labels have the form key=value . -h , --help Help for Dockerfile. Note To build with the official Red Hat image, use the registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.15 value with the -i flag. 7.2.2. index Generate Operator index for SQLite database format container images from pre-existing Operator bundles. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see "Additional resources". Command syntax USD opm index <subcommand> [<flags>] Table 7.5. index subcommands Subcommand Description add Add Operator bundles to an index. prune Prune an index of all but specified packages. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. rm Delete an entire Operator from an index. 7.2.2.1. add Add Operator bundles to an index. Command syntax USD opm index add [<flags>] Table 7.6. index add flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -b , --bundles (strings) Comma-separated list of bundles to add. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to add to. --generate If enabled, only creates the Dockerfile and saves it to local disk. --mode (string) Graph update mode that defines how channel graphs are updated: replaces (the default value), semver , or semver-skippatch . -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. 
-t , --tag (string) Custom tag for container image being built. 7.2.2.2. prune Prune an index of all but specified packages. Command syntax USD opm index prune [<flags>] Table 7.7. index prune flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. --generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 7.2.2.3. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. Command syntax USD opm index prune-stranded [<flags>] Table 7.8. index prune-stranded flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. --generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 7.2.2.4. rm Delete an entire Operator from an index. Command syntax USD opm index rm [<flags>] Table 7.9. index rm flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to delete from. --generate If enabled, only creates the Dockerfile and saves it to local disk. -o , --operators (strings) Comma-separated list of Operators to delete. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. Additional resources Operator Framework packaging format Managing custom catalogs Mirroring images for a disconnected installation using the oc-mirror plugin 7.2.3. init Generate an olm.package declarative config blob. Command syntax USD opm init <package_name> [<flags>] Table 7.10. init flags Flag Description -c , --default-channel (string) The channel that subscriptions will default to if unspecified. -d , --description (string) Path to the Operator's README.md or other documentation. -i , --icon (string) Path to package's icon. -o , --output (string) Output format: json (the default value) or yaml . 7.2.4. migrate Migrate a SQLite database format index image or database file to a file-based catalog. Important The SQLite-based catalog format, including the related CLI commands, is a deprecated feature. 
Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Command syntax USD opm migrate <index_ref> <output_dir> [<flags>] Table 7.11. migrate flags Flag Description -o , --output (string) Output format: json (the default value) or yaml . 7.2.5. render Generate a declarative config blob from the provided index images, bundle images, and SQLite database files. Command syntax USD opm render <index_image | bundle_image | sqlite_file> [<flags>] Table 7.12. render flags Flag Description -o , --output (string) Output format: json (the default value) or yaml . 7.2.6. serve Serve declarative configs via a GRPC server. Note The declarative config directory is loaded by the serve command at startup. Changes made to the declarative config after this command starts are not reflected in the served content. Command syntax USD opm serve <source_path> [<flags>] Table 7.13. serve flags Flag Description --cache-dir (string) If this flag is set, it syncs and persists the server cache directory. --cache-enforce-integrity Exits with an error if the cache is not present or is invalidated. The default value is true when the --cache-dir flag is set and the --cache-only flag is false . Otherwise, the default is false . --cache-only Syncs the serve cache and exits without serving. --debug Enables debug logging. h , --help Help for serve. -p , --port (string) The port number for the service. The default value is 50051 . --pprof-addr (string) The address of the startup profiling endpoint. The format is Addr:Port . -t , --termination-log (string) The path to a container termination log file. The default value is /dev/termination-log . 7.2.7. validate Validate the declarative config JSON file(s) in a given directory. Command syntax USD opm validate <directory> [<flags>] | [
"tar xvf <file>",
"echo USDPATH",
"sudo mv ./opm /usr/local/bin/",
"C:\\> path",
"C:\\> move opm.exe <directory>",
"opm version",
"opm <command> [<subcommand>] [<argument>] [<flags>]",
"opm generate <subcommand> [<flags>]",
"opm generate dockerfile <dcRootDir> [<flags>]",
"opm index <subcommand> [<flags>]",
"opm index add [<flags>]",
"opm index prune [<flags>]",
"opm index prune-stranded [<flags>]",
"opm index rm [<flags>]",
"opm init <package_name> [<flags>]",
"opm migrate <index_ref> <output_dir> [<flags>]",
"opm render <index_image | bundle_image | sqlite_file> [<flags>]",
"opm serve <source_path> [<flags>]",
"opm validate <directory> [<flags>]"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/cli_tools/opm-cli |
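Tying the opm subcommands above into one plausible file-based catalog workflow, the sketch below initializes a package, renders a bundle into it, adds a channel, validates the result, and generates a Dockerfile. The package name example-operator, the bundle image reference, the my-catalog directory, and the bundle name example-operator.v0.1.0 are assumptions for illustration; the bundle name must match the CSV name of the bundle you actually render.

mkdir -p my-catalog/example-operator

# Create the olm.package blob (opm init)
opm init example-operator \
  --default-channel=stable \
  --output=yaml > my-catalog/example-operator/index.yaml

# Render an existing bundle image into the catalog (opm render)
opm render quay.io/example/example-operator-bundle:v0.1.0 \
  --output=yaml >> my-catalog/example-operator/index.yaml

# Add a channel entry so the package resolves to the rendered bundle
cat <<'EOF' >> my-catalog/example-operator/index.yaml
---
schema: olm.channel
package: example-operator
name: stable
entries:
  - name: example-operator.v0.1.0
EOF

# Check the declarative config and generate a Dockerfile for building the catalog image
opm validate my-catalog
opm generate dockerfile my-catalog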
Chapter 89. ImageStreamOutput schema reference | Chapter 89. ImageStreamOutput schema reference Used in: Build The type property is a discriminator that distinguishes use of the ImageStreamOutput type from DockerOutput . It must have the value imagestream for the type ImageStreamOutput . Property Property type Description image string The name and tag of the ImageStream where the newly built image will be pushed. For example my-custom-connect:latest . Required. type string Must be imagestream . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-ImageStreamOutput-reference |
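For context, here is a hedged sketch of where the ImageStreamOutput schema above might sit inside a KafkaConnect custom resource with a build section; the resource name, bootstrap address, plugin artifact, and the my-custom-connect:latest image stream tag are assumptions, not values defined by the schema itself.

cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  build:
    output:
      type: imagestream              # ImageStreamOutput discriminator
      image: my-custom-connect:latest
    plugins:
      - name: example-connector
        artifacts:
          - type: jar
            url: https://example.com/example-connector.jar
EOF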
Chapter 13. Setting throughput and storage limits on brokers | Chapter 13. Setting throughput and storage limits on brokers Important This feature is a technology preview and not intended for a production environment. For more information see the release notes . This procedure describes how to set throughput and storage limits on brokers in your Kafka cluster. Enable the Strimzi Quotas plugin and configure limits using quota properties The plugin provides storage utilization quotas and dynamic distribution of throughput limits. Storage quotas throttle Kafka producers based on disk storage utilization. Limits can be specified in bytes ( storage.per.volume.limit.min.available.bytes ) or percentage ( storage.per.volume.limit.min.available.ratio ) of available disk space, applying to each disk individually. When any broker in the cluster exceeds the configured disk threshold, clients are throttled to prevent disks from filling up too quickly and exceeding capacity. A total throughput limit is distributed dynamically across all clients. For example, if you set a 40 MBps producer byte-rate threshold, the distribution across two producers is not static. If one producer is using 10 MBps, the other can use up to 30 MBps. Specific users (clients) can be excluded from the restrictions. Note With the plugin, you see only aggregated quota metrics, not per-client metrics. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Procedure Edit the Kafka configuration properties file. Example plugin configuration # ... client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce=1000000 2 client.quota.callback.static.fetch=1000000 3 client.quota.callback.static.storage.per.volume.limit.min.available.bytes=500000000000 4 client.quota.callback.static.storage.check-interval=5 5 client.quota.callback.static.kafka.admin.bootstrap.servers=localhost:9092 6 client.quota.callback.static.excluded.principal.name.list=User:my-user-1;User:my-user-2 7 # ... 1 Loads the plugin. 2 Sets the producer byte-rate threshold of 1 MBps. 3 Sets the consumer byte-rate threshold. 1 MBps. 4 Sets an available bytes limit of 500 GB. 5 Sets the interval in seconds between checks on storage to 5 seconds. The default is 60 seconds. Set this property to 0 to disable the check. 6 Kafka cluster bootstrap servers address. This property is required if storage.check-interval is >0. All configuration properties starting with client.quota.callback.static.kafka.admin. prefix are passed to the Kafka Admin client configuration. 7 Excludes my-user-1 and my-user-2 from the restrictions. Each principal must be be prefixed with User: . storage.per.volume.limit.min.available.bytes and storage.per.volume.limit.min.available.ratio are mutually exclusive. Only configure one of these parameters. Note The full list of supported configuration properties can be found in the plugin documentation . Start the Kafka broker with the default configuration file. ./bin/kafka-server-start.sh -daemon ./config/kraft/server.properties Verify that the Kafka broker is running. jcmd | grep Kafka | [
"client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce=1000000 2 client.quota.callback.static.fetch=1000000 3 client.quota.callback.static.storage.per.volume.limit.min.available.bytes=500000000000 4 client.quota.callback.static.storage.check-interval=5 5 client.quota.callback.static.kafka.admin.bootstrap.servers=localhost:9092 6 client.quota.callback.static.excluded.principal.name.list=User:my-user-1;User:my-user-2 7",
"./bin/kafka-server-start.sh -daemon ./config/kraft/server.properties",
"jcmd | grep Kafka"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/proc-setting-broker-limits-str |
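As a variation on the byte-based example above, the following sketch shows what the plugin configuration could look like with the mutually exclusive ratio-based limit instead; the 5% threshold is an illustrative assumption, and the two storage limit properties must never be set together.

# Alternative quota configuration using a percentage of available disk space
cat <<'EOF' >> ./config/kraft/server.properties
client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback
client.quota.callback.static.produce=1000000
client.quota.callback.static.fetch=1000000
# Throttle producers when less than 5% of any volume remains free
client.quota.callback.static.storage.per.volume.limit.min.available.ratio=0.05
client.quota.callback.static.storage.check-interval=5
client.quota.callback.static.kafka.admin.bootstrap.servers=localhost:9092
EOF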
Chapter 9. Assuming an AWS IAM role for a service account | Chapter 9. Assuming an AWS IAM role for a service account In Red Hat OpenShift Service on AWS clusters that use the AWS Security Token Service (STS), the OpenShift API server can be enabled to project signed service account tokens that can be used to assume an AWS Identity and Access Management (IAM) role in a pod. If the assumed IAM role has the required AWS permissions, the pods can authenticate against the AWS API using temporary STS credentials to perform AWS operations. You can use the pod identity webhook to project service account tokens to assume an AWS Identity and Access Management (IAM) role for your own workloads. If the assumed IAM role has the required AWS permissions, the pods can run AWS SDK operations by using temporary STS credentials. 9.1. How service accounts assume AWS IAM roles in SRE owned projects When you install a Red Hat OpenShift Service on AWS cluster that uses the AWS Security Token Service (STS), cluster-specific Operator AWS Identity and Access Management (IAM) roles are created. These IAM roles permit the Red Hat OpenShift Service on AWS cluster Operators to run core OpenShift functionality. Cluster Operators use service accounts to assume IAM roles. When a service account assumes an IAM role, temporary STS credentials are provided for the service account to use in the cluster Operator's pod. If the assumed role has the necessary AWS privileges, the service account can run AWS SDK operations in the pod. Workflow for assuming AWS IAM roles in SRE owned projects The following diagram illustrates the workflow for assuming AWS IAM roles in SRE owned projects: Figure 9.1. Workflow for assuming AWS IAM roles in SRE owned projects The workflow has the following stages: Within each project that a cluster Operator runs, the Operator's deployment spec has a volume mount for the projected service account token, and a secret containing AWS credential configuration for the pod. The token is audience-bound and time-bound. Every hour, Red Hat OpenShift Service on AWS generates a new token, and the AWS SDK reads the mounted secret containing the AWS credential configuration. This configuration has a path to the mounted token and the AWS IAM Role ARN. The secret's credential configuration includes the following: An USDAWS_ARN_ROLE variable that has the ARN for the IAM role that has the permissions required to run AWS SDK operations. An USDAWS_WEB_IDENTITY_TOKEN_FILE variable that has the full path in the pod to the OpenID Connect (OIDC) token for the service account. The full path is /var/run/secrets/openshift/serviceaccount/token . When a cluster Operator needs to assume an AWS IAM role to access an AWS service (such as EC2), the AWS SDK client code running on the Operator invokes the AssumeRoleWithWebIdentity API call. The OIDC token is passed from the pod to the OIDC provider. The provider authenticates the service account identity if the following requirements are met: The identity signature is valid and signed by the private key. The sts.amazonaws.com audience is listed in the OIDC token and matches the audience configured in the OIDC provider. Note In Red Hat OpenShift Service on AWS with STS clusters, the OIDC provider is created during install and set as the service account issuer by default. The sts.amazonaws.com audience is set by default in the OIDC provider. The OIDC token has not expired. The issuer value in the token has the URL for the OIDC provider. 
If the project and service account are in the scope of the trust policy for the IAM role that is being assumed, then authorization succeeds. After successful authentication and authorization, temporary AWS STS credentials in the form of an AWS access token, secret key, and session token are passed to the pod for use by the service account. By using the credentials, the service account is temporarily granted the AWS permissions enabled in the IAM role. When the cluster Operator runs, the Operator that is using the AWS SDK in the pod consumes the secret that has the path to the projected service account and AWS IAM Role ARN to authenticate against the OIDC provider. The OIDC provider returns temporary STS credentials for authentication against the AWS API. 9.2. How service accounts assume AWS IAM roles in user-defined projects When you install a Red Hat OpenShift Service on AWS cluster that uses the AWS Security Token Service (STS), pod identity webhook resources are included by default. You can use the pod identity webhook to enable a service account in a user-defined project to assume an AWS Identity and Access Management (IAM) role in a pod in the same project. When the IAM role is assumed, temporary STS credentials are provided for use by the service account in the pod. If the assumed role has the necessary AWS privileges, the service account can run AWS SDK operations in the pod. To enable the pod identity webhook for a pod, you must create a service account with an eks.amazonaws.com/role-arn annotation in your project. The annotation must reference the Amazon Resource Name (ARN) of the AWS IAM role that you want the service account to assume. You must also reference the service account in your Pod specification and deploy the pod in the same project as the service account. Pod identity webhook workflow in user-defined projects The following diagram illustrates the pod identity webhook workflow in user-defined projects: Figure 9.2. Pod identity webhook workflow in user-defined projects The workflow has the following stages: Within a user-defined project, a user creates a service account that includes an eks.amazonaws.com/role-arn annotation. The annotation points to the ARN of the AWS IAM role that you want your service account to assume. When a pod is deployed in the same project using a configuration that references the annotated service account, the pod identity webhook mutates the pod. The mutation injects the following components into the pod without the need to specify them in your Pod or Deployment resource configurations: An USDAWS_ARN_ROLE environment variable that contains the ARN for the IAM role that has the permissions required to run AWS SDK operations. An USDAWS_WEB_IDENTITY_TOKEN_FILE environment variable that contains the full path in the pod to the OpenID Connect (OIDC) token for the service account. The full path is /var/run/secrets/eks.amazonaws.com/serviceaccount/token . An aws-iam-token volume mounted on the mount point /var/run/secrets/eks.amazonaws.com/serviceaccount . An OIDC token file named token is contained in the volume. The OIDC token is passed from the pod to the OIDC provider. The provider authenticates the service account identity if the following requirements are met: The identity signature is valid and signed by the private key. The sts.amazonaws.com audience is listed in the OIDC token and matches the audience configured in the OIDC provider. Note The pod identity webhook applies the sts.amazonaws.com audience to the OIDC token by default. 
In Red Hat OpenShift Service on AWS with STS clusters, the OIDC provider is created during install and set as the service account issuer by default. The sts.amazonaws.com audience is set by default in the OIDC provider. The OIDC token has not expired. The issuer value in the token contains the URL for the OIDC provider. If the project and service account are in the scope of the trust policy for the IAM role that is being assumed, then authorization succeeds. After successful authentication and authorization, temporary AWS STS credentials in the form of a session token are passed to the pod for use by the service account. By using the credentials, the service account is temporarily granted the AWS permissions enabled in the IAM role. When you run AWS SDK operations in the pod, the service account provides the temporary STS credentials to the AWS API to verify its identity. 9.3. Assuming an AWS IAM role in your own pods Follow the procedures in this section to enable a service account to assume an AWS Identity and Access Management (IAM) role in a pod deployed in a user-defined project. You can create the required resources, including an AWS IAM role, a service account, a container image that includes an AWS SDK, and a pod deployed by using the image. In the example, the AWS Boto3 SDK for Python is used. You can also verify that the pod identity webhook mutates the AWS environment variables, the volume mount, and the token volume into your pod. Additionally, you can check that the service account assumes the AWS IAM role in your pod and can successfully run AWS SDK operations. 9.3.1. Setting up an AWS IAM role for a service account Create an AWS Identity and Access Management (IAM) role to be assumed by a service account in your Red Hat OpenShift Service on AWS cluster. Attach the permissions that are required by your service account to run AWS SDK operations in a pod. Prerequisites You have the permissions required to install and configure IAM roles in your AWS account. You have access to a Red Hat OpenShift Service on AWS cluster that uses the AWS Security Token Service (STS). Admin-level user privileges are not required. You have the Amazon Resource Name (ARN) for the OpenID Connect (OIDC) provider that is configured as the service account issuer in your Red Hat OpenShift Service on AWS with STS cluster. Note In Red Hat OpenShift Service on AWS with STS clusters, the OIDC provider is created during install and set as the service account issuer by default. If you do not know the OIDC provider ARN, contact your cluster administrator. You have installed the AWS CLI ( aws ). Procedure Create a file named trust-policy.json with the following JSON configuration: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "<oidc_provider_arn>" 1 }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "<oidc_provider_name>:sub": "system:serviceaccount:<project_name>:<service_account_name>" 2 } } } ] } 1 Replace <oidc_provider_arn> with the ARN of your OIDC provider, for example arn:aws:iam::<aws_account_id>:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/1v3r0n44npxu4g58so46aeohduomfres . 2 Limits the role to the specified project and service account. Replace <oidc_provider_name> with the name of your OIDC provider, for example rh-oidc.s3.us-east-1.amazonaws.com/1v3r0n44npxu4g58so46aeohduomfres . Replace <project_name>:<service_account_name> with your project name and service account name, for example my-project:test-service-account . 
Note Alternatively, you can limit the role to any service account within the specified project by using "<oidc_provider_name>:sub": "system:serviceaccount:<project_name>:*" . If you supply the * wildcard, you must replace StringEquals with StringLike in the preceding line. Create an AWS IAM role that uses the trust policy that is defined in the trust-policy.json file: USD aws iam create-role \ --role-name <aws_iam_role_name> \ 1 --assume-role-policy-document file://trust-policy.json 2 1 Replace <aws_iam_role_name> with the name of your IAM role, for example pod-identity-test-role . 2 References the trust-policy.json file that you created in the preceding step. Example output ROLE arn:aws:iam::<aws_account_id>:role/<aws_iam_role_name> 2022-09-28T12:03:17+00:00 / AQWMS3TB4Z2N3SH7675JK <aws_iam_role_name> ASSUMEROLEPOLICYDOCUMENT 2012-10-17 STATEMENT sts:AssumeRoleWithWebIdentity Allow STRINGEQUALS system:serviceaccount:<project_name>:<service_account_name> PRINCIPAL <oidc_provider_arn> Retain the ARN for the role in the output. The format of the role ARN is arn:aws:iam::<aws_account_id>:role/<aws_iam_role_name> . Attach any managed AWS permissions that are required when the service account runs AWS SDK operations in your pod: USD aws iam attach-role-policy \ --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess \ 1 --role-name <aws_iam_role_name> 2 1 The policy in this example adds read-only access permissions to the IAM role. 2 Replace <aws_iam_role_name> with the name of the IAM role that you created in the preceding step. Optional: Add custom attributes or a permissions boundary to the role. For more information, see Creating a role to delegate permissions to an AWS service in the AWS documentation. 9.3.2. Creating a service account in your project Add a service account in your user-defined project. Include an eks.amazonaws.com/role-arn annotation in the service account configuration that references the Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that you want the service account to assume. Prerequisites You have created an AWS IAM role for your service account. For more information, see Setting up an AWS IAM role for a service account . You have access to a Red Hat OpenShift Service on AWS with AWS Security Token Service (STS) cluster. Admin-level user privileges are not required. You have installed the OpenShift CLI ( oc ). Procedure In your Red Hat OpenShift Service on AWS cluster, create a project: USD oc new-project <project_name> 1 1 Replace <project_name> with the name of your project. The name must match the project name that you specified in your AWS IAM role configuration. Note You are automatically switched to the project when it is created. Create a file named test-service-account.yaml with the following service account configuration: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> 1 namespace: <project_name> 2 annotations: eks.amazonaws.com/role-arn: "<aws_iam_role_arn>" 3 1 Replace <service_account_name> with the name of your service account. The name must match the service account name that you specified in your AWS IAM role configuration. 2 Replace <project_name> with the name of your project. The name must match the project name that you specified in your AWS IAM role configuration. 3 Specifies the ARN of the AWS IAM role that the service account assumes for use within your pod. Replace <aws_iam_role_arn> with the ARN for the AWS IAM role that you created for your service account. 
The format of the role ARN is arn:aws:iam::<aws_account_id>:role/<aws_iam_role_name> . Create the service account in your project: USD oc create -f test-service-account.yaml Example output serviceaccount/<service_account_name> created Review the details of the service account: USD oc describe serviceaccount <service_account_name> 1 1 Replace <service_account_name> with the name of your service account. Example output Name: <service_account_name> 1 Namespace: <project_name> 2 Labels: <none> Annotations: eks.amazonaws.com/role-arn: <aws_iam_role_arn> 3 Image pull secrets: <service_account_name>-dockercfg-rnjkq Mountable secrets: <service_account_name>-dockercfg-rnjkq Tokens: <service_account_name>-token-4gbjp Events: <none> 1 Specifies the name of the service account. 2 Specifies the project that contains the service account. 3 Lists the annotation for the ARN of the AWS IAM role that the service account assumes. 9.3.3. Creating an example AWS SDK container image The steps in this procedure provide an example method to create a container image that includes an AWS SDK. The example steps use Podman to create the container image and Quay.io to host the image. For more information about Quay.io, see Getting Started with Quay.io . The container image can be used to deploy pods that can run AWS SDK operations. Note In this example procedure, the AWS Boto3 SDK for Python is installed into a container image. For more information about installing and using the AWS Boto3 SDK, see the AWS Boto3 documentation . For details about other AWS SDKs, see AWS SDKs and Tools Reference Guide in the AWS documentation. Prerequisites You have installed Podman on your installation host. You have a Quay.io user account. Procedure Add the following configuration to a file named Containerfile : FROM ubi9/ubi 1 RUN dnf makecache && dnf install -y python3-pip && dnf clean all && pip3 install boto3>=1.15.0 2 1 Specifies the Red Hat Universal Base Image version 9. 2 Installs the AWS Boto3 SDK by using the pip package management system. In this example, AWS Boto3 SDK version 1.15.0 or later is installed. From the directory that contains the file, build a container image named awsboto3sdk : USD podman build -t awsboto3sdk . Log in to Quay.io: USD podman login quay.io Tag the image in preparation for the upload to Quay.io: USD podman tag localhost/awsboto3sdk quay.io/<quay_username>/awsboto3sdk:latest 1 1 Replace <quay_username> with your Quay.io username. Push the tagged container image to Quay.io: USD podman push quay.io/<quay_username>/awsboto3sdk:latest 1 1 Replace <quay_username> with your Quay.io username. Make the Quay.io repository that contains the image public. This publishes the image so that it can be used to deploy a pod in your Red Hat OpenShift Service on AWS cluster: On https://quay.io/ , navigate to the Repository Settings page for repository that contains the image. Click Make Public to make the repository publicly available. 9.3.4. Deploying a pod that includes an AWS SDK Deploy a pod in a user-defined project from a container image that includes an AWS SDK. In your pod configuration, specify the service account that includes the eks.amazonaws.com/role-arn annotation. With the service account reference in place for your pod, the pod identity webhook injects the AWS environment variables, the volume mount, and the token volume into your pod. The pod mutation enables the service account to automatically assume the AWS IAM role in the pod. 
Prerequisites You have created an AWS Identity and Access Management (IAM) role for your service account. For more information, see Setting up an AWS IAM role for a service account . You have access to a Red Hat OpenShift Service on AWS cluster that uses the AWS Security Token Service (STS). Admin-level user privileges are not required. You have installed the OpenShift CLI ( oc ). You have created a service account in your project that includes an eks.amazonaws.com/role-arn annotation that references the Amazon Resource Name (ARN) for the IAM role that you want the service account to assume. You have a container image that includes an AWS SDK and the image is available to your cluster. For detailed steps, see Creating an example AWS SDK container image . Note In this example procedure, the AWS Boto3 SDK for Python is used. For more information about installing and using the AWS Boto3 SDK, see the AWS Boto3 documentation . For details about other AWS SDKs, see AWS SDKs and Tools Reference Guide in the AWS documentation. Procedure Create a file named awsboto3sdk-pod.yaml with the following pod configuration: apiVersion: v1 kind: Pod metadata: namespace: <project_name> 1 name: awsboto3sdk 2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault serviceAccountName: <service_account_name> 3 containers: - name: awsboto3sdk image: quay.io/<quay_username>/awsboto3sdk:latest 4 command: - /bin/bash - "-c" - "sleep 100000" 5 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] terminationGracePeriodSeconds: 0 restartPolicy: Never 1 Replace <project_name> with the name of your project. The name must match the project name that you specified in your AWS IAM role configuration. 2 Specifies the name of the pod. 3 Replace <service_account_name> with the name of the service account that is configured to assume the AWS IAM role. The name must match the service account name that you specified in your AWS IAM role configuration. 4 Specifies the location of your awsboto3sdk container image. Replace <quay_username> with your Quay.io username. 5 In this example pod configuration, this line keeps the pod running for 100000 seconds to enable verification testing in the pod directly. For detailed verification steps, see Verifying the assumed IAM role in your pod . Deploy an awsboto3sdk pod: USD oc create -f awsboto3sdk-pod.yaml Example output pod/awsboto3sdk created 9.3.5. Verifying the assumed IAM role in your pod After deploying an awsboto3sdk pod in your project, verify that the pod identity webhook has mutated the pod. Check that the required AWS environment variables, volume mount, and OIDC token volume are present within the pod. You can also verify that the service account assumes the AWS Identity and Access Management (IAM) role for your AWS account when you run AWS SDK operations in the pod. Prerequisites You have created an AWS IAM role for your service account. For more information, see Setting up an AWS IAM role for a service account . You have access to a Red Hat OpenShift Service on AWS cluster that uses the AWS Security Token Service (STS). Admin-level user privileges are not required. You have installed the OpenShift CLI ( oc ). You have created a service account in your project that includes an eks.amazonaws.com/role-arn annotation that references the Amazon Resource Name (ARN) for the IAM role that you want the service account to assume. You have deployed a pod in your user-defined project that includes an AWS SDK. 
The pod references the service account that uses the pod identity webhook to assume the AWS IAM role required to run the AWS SDK operations. For detailed steps, see Deploying a pod that includes an AWS SDK . Note In this example procedure, a pod that includes the AWS Boto3 SDK for Python is used. For more information about installing and using the AWS Boto3 SDK, see the AWS Boto3 documentation . For details about other AWS SDKs, see AWS SDKs and Tools Reference Guide in the AWS documentation. Procedure Verify that the AWS environment variables, the volume mount, and the OIDC token volume are listed in the description of the deployed awsboto3sdk pod: USD oc describe pod awsboto3sdk Example output Name: awsboto3sdk Namespace: <project_name> ... Containers: awsboto3sdk: ... Environment: AWS_ROLE_ARN: <aws_iam_role_arn> 1 AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token 2 Mounts: /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro) 3 ... Volumes: aws-iam-token: 4 Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 86400 ... 1 Lists the AWS_ROLE_ARN environment variable that was injected into the pod by the pod identity webhook. The variable contains the ARN of the AWS IAM role to be assumed by the service account. 2 Lists the AWS_WEB_IDENTITY_TOKEN_FILE environment variable that was injected into the pod by the pod identity webhook. The variable contains the full path of the OIDC token that is used to verify the service account identity. 3 Lists the volume mount that was injected into the pod by the pod identity webhook. 4 Lists the aws-iam-token volume that is mounted onto the /var/run/secrets/eks.amazonaws.com/serviceaccount mount point. The volume contains the OIDC token that is used to authenticate the service account to assume the AWS IAM role. Start an interactive terminal in the awsboto3sdk pod: USD oc exec -ti awsboto3sdk -- /bin/sh In the interactive terminal for the pod, verify that the USDAWS_ROLE_ARN environment variable was mutated into the pod by the pod identity webhook: USD echo USDAWS_ROLE_ARN Example output arn:aws:iam::<aws_account_id>:role/<aws_iam_role_name> 1 1 The output must specify the ARN for the AWS IAM role that has the permissions required to run AWS SDK operations. In the interactive terminal for the pod, verify that the USDAWS_WEB_IDENTITY_TOKEN_FILE environment variable was mutated into the pod by the pod identity webhook: USD echo USDAWS_WEB_IDENTITY_TOKEN_FILE Example output /var/run/secrets/eks.amazonaws.com/serviceaccount/token 1 1 The output must specify the full path in the pod to the OIDC token for the service account. In the interactive terminal for the pod, verify that the aws-iam-token volume mount containing the OIDC token file was mounted by the pod identity webhook: USD mount | grep -is 'eks.amazonaws.com' Example output tmpfs on /run/secrets/eks.amazonaws.com/serviceaccount type tmpfs (ro,relatime,seclabel,size=13376888k) In the interactive terminal for the pod, verify that an OIDC token file named token is present on the /var/run/secrets/eks.amazonaws.com/serviceaccount/ mount point: USD ls /var/run/secrets/eks.amazonaws.com/serviceaccount/token Example output /var/run/secrets/eks.amazonaws.com/serviceaccount/token 1 1 The OIDC token file in the aws-iam-token volume that was mounted in the pod by the pod identity webhook. The token is used to authenticate the identity of the service account in AWS. 
In the pod, verify that AWS Boto3 SDK operations run successfully: In the interactive terminal for the pod, start a Python 3 shell: USD python3 In the Python 3 shell, import the boto3 module: >>> import boto3 Create a variable that includes the Boto3 s3 service resource: >>> s3 = boto3.resource('s3') Print the names of all of the S3 buckets in your AWS account: >>> for bucket in s3.buckets.all(): ... print(bucket.name) ... Example output <bucket_name> <bucket_name> <bucket_name> ... If the service account successfully assumed the AWS IAM role, the output lists all of the S3 buckets that are available in your AWS account. 9.4. Additional resources For more information about using AWS IAM roles with service accounts, see IAM roles for service accounts in the AWS documentation. For information about AWS IAM role delegation, see Creating a role to delegate permissions to an AWS service in the AWS documentation. For details about AWS SDKs, see AWS SDKs and Tools Reference Guide in the AWS documentation. For more information about installing and using the AWS Boto3 SDK for Python, see the AWS Boto3 documentation . For general information about webhook admission plugins for OpenShift, see Webhook admission plugins in the OpenShift Container Platform documentation. | [
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"<oidc_provider_arn>\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<oidc_provider_name>:sub\": \"system:serviceaccount:<project_name>:<service_account_name>\" 2 } } } ] }",
"aws iam create-role --role-name <aws_iam_role_name> \\ 1 --assume-role-policy-document file://trust-policy.json 2",
"ROLE arn:aws:iam::<aws_account_id>:role/<aws_iam_role_name> 2022-09-28T12:03:17+00:00 / AQWMS3TB4Z2N3SH7675JK <aws_iam_role_name> ASSUMEROLEPOLICYDOCUMENT 2012-10-17 STATEMENT sts:AssumeRoleWithWebIdentity Allow STRINGEQUALS system:serviceaccount:<project_name>:<service_account_name> PRINCIPAL <oidc_provider_arn>",
"aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess \\ 1 --role-name <aws_iam_role_name> 2",
"oc new-project <project_name> 1",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> 1 namespace: <project_name> 2 annotations: eks.amazonaws.com/role-arn: \"<aws_iam_role_arn>\" 3",
"oc create -f test-service-account.yaml",
"serviceaccount/<service_account_name> created",
"oc describe serviceaccount <service_account_name> 1",
"Name: <service_account_name> 1 Namespace: <project_name> 2 Labels: <none> Annotations: eks.amazonaws.com/role-arn: <aws_iam_role_arn> 3 Image pull secrets: <service_account_name>-dockercfg-rnjkq Mountable secrets: <service_account_name>-dockercfg-rnjkq Tokens: <service_account_name>-token-4gbjp Events: <none>",
"FROM ubi9/ubi 1 RUN dnf makecache && dnf install -y python3-pip && dnf clean all && pip3 install boto3>=1.15.0 2",
"podman build -t awsboto3sdk .",
"podman login quay.io",
"podman tag localhost/awsboto3sdk quay.io/<quay_username>/awsboto3sdk:latest 1",
"podman push quay.io/<quay_username>/awsboto3sdk:latest 1",
"apiVersion: v1 kind: Pod metadata: namespace: <project_name> 1 name: awsboto3sdk 2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault serviceAccountName: <service_account_name> 3 containers: - name: awsboto3sdk image: quay.io/<quay_username>/awsboto3sdk:latest 4 command: - /bin/bash - \"-c\" - \"sleep 100000\" 5 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] terminationGracePeriodSeconds: 0 restartPolicy: Never",
"oc create -f awsboto3sdk-pod.yaml",
"pod/awsboto3sdk created",
"oc describe pod awsboto3sdk",
"Name: awsboto3sdk Namespace: <project_name> Containers: awsboto3sdk: Environment: AWS_ROLE_ARN: <aws_iam_role_arn> 1 AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token 2 Mounts: /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro) 3 Volumes: aws-iam-token: 4 Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 86400",
"oc exec -ti awsboto3sdk -- /bin/sh",
"echo USDAWS_ROLE_ARN",
"arn:aws:iam::<aws_account_id>:role/<aws_iam_role_name> 1",
"echo USDAWS_WEB_IDENTITY_TOKEN_FILE",
"/var/run/secrets/eks.amazonaws.com/serviceaccount/token 1",
"mount | grep -is 'eks.amazonaws.com'",
"tmpfs on /run/secrets/eks.amazonaws.com/serviceaccount type tmpfs (ro,relatime,seclabel,size=13376888k)",
"ls /var/run/secrets/eks.amazonaws.com/serviceaccount/token",
"/var/run/secrets/eks.amazonaws.com/serviceaccount/token 1",
"python3",
">>> import boto3",
">>> s3 = boto3.resource('s3')",
">>> for bucket in s3.buckets.all(): ... print(bucket.name)",
"<bucket_name> <bucket_name> <bucket_name>"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/authentication_and_authorization/assuming-an-aws-iam-role-for-a-service-account |
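Beyond listing S3 buckets as shown above, a quick hedged way to confirm which identity the SDK resolved is to call the STS GetCallerIdentity API from inside the same awsboto3sdk pod; this call needs no extra IAM permissions, and the ARN it prints should contain assumed-role/<aws_iam_role_name>.

# Run from an interactive shell in the pod (oc exec -ti awsboto3sdk -- /bin/sh)
python3 -c 'import boto3; print(boto3.client("sts").get_caller_identity()["Arn"])'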
Chapter 6. Custom image builds with Buildah | Chapter 6. Custom image builds with Buildah With OpenShift Container Platform 4.14, a docker socket will not be present on the host nodes. This means the mount docker socket option of a custom build is not guaranteed to provide an accessible docker socket for use within a custom build image. If you require this capability in order to build and push images, add the Buildah tool your custom build image and use it to build and push the image within your custom build logic. The following is an example of how to run custom builds with Buildah. Note Using the custom build strategy requires permissions that normal users do not have by default because it allows the user to execute arbitrary code inside a privileged container running on the cluster. This level of access can be used to compromise the cluster and therefore should be granted only to users who are trusted with administrative privileges on the cluster. 6.1. Prerequisites Review how to grant custom build permissions . 6.2. Creating custom build artifacts You must create the image you want to use as your custom build image. Procedure Starting with an empty directory, create a file named Dockerfile with the following content: FROM registry.redhat.io/rhel8/buildah # In this example, `/tmp/build` contains the inputs that build when this # custom builder image is run. Normally the custom builder image fetches # this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh # /usr/bin/build.sh contains the actual custom build logic that will be run when # this custom builder image is run. ENTRYPOINT ["/usr/bin/build.sh"] In the same directory, create a file named dockerfile.sample . This file is included in the custom build image and defines the image that is produced by the custom build: FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build In the same directory, create a file named build.sh . This file contains the logic that is run when the custom build runs: #!/bin/sh # Note that in this case the build inputs are part of the custom builder image, but normally this # is retrieved from an external source. cd /tmp/input # OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom # build framework TAG="USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}" # performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . # buildah requires a slight modification to the push secret provided by the service # account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo "{ \"auths\": " ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo "}") > /tmp/.dockercfg # push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG} 6.3. Build custom builder image You can use OpenShift Container Platform to build and push custom builder images to use in a custom strategy. Prerequisites Define all the inputs that will go into creating your new custom builder image. Procedure Define a BuildConfig object that will build your custom builder image: USD oc new-build --binary --strategy=docker --name custom-builder-image From the directory in which you created your custom build image, run the build: USD oc start-build custom-builder-image --from-dir . 
-F After the build completes, your new custom builder image is available in your project in an image stream tag that is named custom-builder-image:latest . 6.4. Use custom builder image You can define a BuildConfig object that uses the custom strategy in conjunction with your custom builder image to execute your custom build logic. Prerequisites Define all the required inputs for new custom builder image. Build your custom builder image. Procedure Create a file named buildconfig.yaml . This file defines the BuildConfig object that is created in your project and executed: kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest 1 Specify your project name. Create the BuildConfig : USD oc create -f buildconfig.yaml Create a file named imagestream.yaml . This file defines the image stream to which the build will push the image: kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {} Create the imagestream: USD oc create -f imagestream.yaml Run your custom build: USD oc start-build sample-custom-build -F When the build runs, it launches a pod running the custom builder image that was built earlier. The pod runs the build.sh logic that is defined as the entrypoint for the custom builder image. The build.sh logic invokes Buildah to build the dockerfile.sample that was embedded in the custom builder image, and then uses Buildah to push the new image to the sample-custom image stream . | [
"FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]",
"FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build",
"#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}",
"oc new-build --binary --strategy=docker --name custom-builder-image",
"oc start-build custom-builder-image --from-dir . -F",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest",
"oc create -f buildconfig.yaml",
"kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}",
"oc create -f imagestream.yaml",
"oc start-build sample-custom-build -F"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/builds_using_buildconfig/custom-builds-buildah |
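As a follow-up to the custom build procedure above, the following is a minimal, hedged sketch of how a cluster administrator might grant the custom build strategy to a trusted user and then confirm the builder image stream tag after the build completes. The user name builder-admin is an assumption; the image stream tag name comes from the chapter.
# Sketch only: grant the custom build strategy to a trusted user (user name is hypothetical).
oc adm policy add-cluster-role-to-user system:build-strategy-custom builder-admin
# Confirm the builder image produced by the build above is available in the project.
oc get imagestreamtag custom-builder-image:latest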
10.3. Plymouth | 10.3. Plymouth Plymouth is a graphical boot system and logger for Red Hat Enterprise Linux 7, which makes use of the kernel-based mode setting (KMS) and Direct Rendering Manager (DRM). Plymouth also handles user interaction during boot. You can customize the boot screen appearance by choosing from various static or animated graphical themes. New themes can be created based on the existing ones. 10.3.1. Branding the Theme Each theme for Plymouth is composed of a theme data file and a compiled splash plugin module . The data file has a .plymouth extension, and is installed in the /usr/share/plymouth/themes/ directory. The configuration data is specified under the [Plymouth Theme] section, in the key-value format. Valid keys for this group are Name , Description , and ModuleName . While the first two keys are self-explanatory, the third specifies the name of a Plymouth splash plugin module. Different plugins provide different animations at boot time and the underlying implementation of the various themes: Example 10.2. A .plymouth File Specimen Procedure 10.3. Changing the Plymouth Theme Search for the existing Plymouth themes and choose the most preferable one. Run the following command: Or run the plymouth-set-default-theme --list command to view the installed themes. You can also install all the themes when installing all the plymouth packages. However, you will install a number of unnecessary packages as well. Set the new theme as default with the plymouth-set-default-theme theme_name command. Example 10.3. Set "spinfinity" as the Default Theme You have chosen the spinfinity theme, so you run: Rebuild the initrd daemon after editing otherwise your theme will not show in the boot screen. Do so by running: 10.3.2. Creating a New Plymouth Theme If you do not want to choose from the given list of themes, you can create your own. The easiest way is to copy an existing theme and modify it. Procedure 10.4. Creating Your Own Theme from an Existing Theme Copy an entire content of a plymouth/ directory. As a template directory, use, for example, the default theme for Red Hat Enterprise Linux 7, /usr/share/plymouth/themes/charge/charge.plymouth , which uses a two-step splash plugin ( two-step is a popular boot load feature of a two phased boot process that starts with a progressing animation synced to boot time and finishes with a short, fast one-shot animation): Save the charge.plymouth file with a new name in the /usr/share/plymouth/themes/ newtheme / directory, in the following format: Update the settings in your /usr/share/plymouth/themes/ newtheme / newtheme .plymouth file according to your preferences, changing color, alignment, or transition. Set your newtheme as default by running the following command: Rebuild the initrd daemon after changing the theme by running the command below: 10.3.2.1. Using Branded Logo Some of the plugins show a branded logo as part of the splash animation. If you wish to add your own logo into your theme, follow the short procedure below. Important Keep in mind that the image format of your branded logo must be of the .png format. Procedure 10.5. Add Your Logo to the Theme Create an image file named logo.png with your logo. Edit the /usr/share/plymouth/themes/ newtheme .plymouth file by updating the ImageDir key to point to the directory with the logo.png image file you created in step 1: For more information on Plymouth , see the plymouth (8) man page. | [
"[Plymouth Theme] Name=Charge Description=A theme that features the shadowy hull of my logo charge up and finally burst into full form. ModuleName=two-step",
"yum search plymouth-theme",
"yum install plymouth\\*",
"plymouth-set-default-theme spinfinity",
"dracut -f",
"[Plymouth Theme] Name=Charge Description=A theme that features the shadowy hull of my logo charge up and finally burst into full form. ModuleName=two-step [two-step] ImageDir=/usr/share/plymouth/themes/charge HorizontalAlignment=.5 VerticalAlignment=.5 Transition=none TransitionDuration=0.0 BackgroundStartColor=0x202020 BackgroundEndColor=0x202020",
"newtheme .plymouth",
"plymouth-set-default-theme newtheme",
"dracut -f",
"ImageDir=/usr/share/plymouth/themes/ newtheme"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/plymouth |
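To tie the Plymouth procedures above together, here is a hedged shell sketch of Procedure 10.4: it copies the charge theme, renames the data file, repoints ImageDir, sets the new theme as default, and rebuilds the initrd. The theme name newtheme mirrors the chapter's placeholder and is an assumption; run the commands as root.
NEW=newtheme
# Copy the existing charge theme as a template.
cp -r /usr/share/plymouth/themes/charge "/usr/share/plymouth/themes/${NEW}"
mv "/usr/share/plymouth/themes/${NEW}/charge.plymouth" "/usr/share/plymouth/themes/${NEW}/${NEW}.plymouth"
# Point the two-step plugin at the new directory so it finds logo.png and the frames there.
sed -i "s|^ImageDir=.*|ImageDir=/usr/share/plymouth/themes/${NEW}|" "/usr/share/plymouth/themes/${NEW}/${NEW}.plymouth"
plymouth-set-default-theme "${NEW}"
dracut -f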
Chapter 2. Configuring your firewall | Chapter 2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies. 2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. When using a firewall, make additional configurations to the firewall so that OpenShift Container Platform can access the sites that it requires to function. There are no special configuration considerations for services running on only controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Set the following registry URLs for your firewall's allowlist: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Set your firewall's allowlist to include any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that offer the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443 Required to access Alibaba Cloud services and resources. 
Review the Alibaba endpoints_config.go file to find the exact endpoints to allow for the regions that you use. AWS aws.amazon.com 443 Used to install and manage clusters in an AWS environment. *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must include the following URLs in your allowlist: 443 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to find the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. *.cloudfront.net 443 Used to provide access to CloudFront. If you use the AWS Security Token Service (STS) and the private S3 bucket, you must provide access to CloudFront. GCP *.googleapis.com 443 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to find the endpoints to allow for your APIs. accounts.google.com 443 Required to access your GCP account. Microsoft Azure management.azure.com 443 Required to access Microsoft Azure services and resources. Review the Microsoft Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. *.blob.core.windows.net 443 Required to download Ignition files. login.microsoftonline.com 443 Required to access Microsoft Azure services and resources. Review the Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function *.apps.<cluster_name>.<base_domain> 443 Required to access the default cluster routes unless you set an ingress wildcard during installation. api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. console.redhat.com 443 Required for your cluster token. mirror.openshift.com 443 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. quayio-production-s3.s3.amazonaws.com 443 Required to access Quay image content in AWS. 
rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com storage.googleapis.com/openshift-release 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> canary-openshift-ingress-canary.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443 Required for Sonatype Nexus, F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall. Additional resources OpenID Connect requirements for AWS STS 2.2. OpenShift Container Platform network flow matrix The following network flow matrixes describe the ingress flows to OpenShift Container Platform services for the following environments: OpenShift Container Platform on bare metal Single-node OpenShift on bare metal OpenShift Container Platform on Amazon Web Services (AWS) Single-node OpenShift on AWS Use the information in the appropriate network flow matrix to help you manage ingress traffic for your specific environment. You can restrict ingress traffic to essential flows to improve network security. Additionally, consider the following dynamic port ranges when managing ingress traffic for both bare metal and cloud environments: 9000-9999 : Host level services 30000-32767 : Kubernetes node ports 49152-65535 : Dynamic or private ports To view or download the complete raw CSV content for an environment, see the following resources: OpenShift Container Platform on bare metal Single-node OpenShift on bare metal OpenShift Container Platform on AWS Single-node OpenShift on AWS Note The network flow matrixes describe ingress traffic flows for a base OpenShift Container Platform or single-node OpenShift installation. It does not describe network flows for additional components, such as optional Operators available from the Red Hat Marketplace. The matrixes do not apply for hosted control planes, Red Hat build of MicroShift, or standalone clusters. 2.2.1. Base network flows The following matrixes describe the base ingress flows to OpenShift Container Platform services. Note For base ingress flows to single-node OpenShift clusters, see the Control plane node base flows matrix only. Table 2.1. 
Control plane node base flows Direction Protocol Port Namespace Service Pod Container Node Role Optional Ingress TCP 22 Host system service sshd master TRUE Ingress TCP 111 Host system service rpcbind master TRUE Ingress TCP 2379 openshift-etcd etcd etcd etcdctl master FALSE Ingress TCP 2380 openshift-etcd healthz etcd etcd master FALSE Ingress TCP 6080 openshift-kube-apiserver kube-apiserver kube-apiserver-insecure-readyz master FALSE Ingress TCP 6443 openshift-kube-apiserver apiserver kube-apiserver kube-apiserver master FALSE Ingress TCP 8080 openshift-network-operator network-operator network-operator master FALSE Ingress TCP 8798 openshift-machine-config-operator machine-config-daemon machine-config-daemon machine-config-daemon master FALSE Ingress TCP 9001 openshift-machine-config-operator machine-config-daemon machine-config-daemon kube-rbac-proxy master FALSE Ingress TCP 9099 openshift-cluster-version cluster-version-operator cluster-version-operator cluster-version-operator master FALSE Ingress TCP 9100 openshift-monitoring node-exporter node-exporter kube-rbac-proxy master FALSE Ingress TCP 9103 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-node master FALSE Ingress TCP 9104 openshift-network-operator metrics network-operator network-operator master FALSE Ingress TCP 9105 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-ovn-metrics master FALSE Ingress TCP 9107 openshift-ovn-kubernetes egressip-node-healthcheck ovnkube-node ovnkube-controller master FALSE Ingress TCP 9108 openshift-ovn-kubernetes ovn-kubernetes-control-plane ovnkube-control-plane kube-rbac-proxy master FALSE Ingress TCP 9192 openshift-cluster-machine-approver machine-approver machine-approver kube-rbac-proxy master FALSE Ingress TCP 9258 openshift-cloud-controller-manager-operator machine-approver cluster-cloud-controller-manager cluster-cloud-controller-manager master FALSE Ingress TCP 9537 Host system service crio-metrics master FALSE Ingress TCP 9637 openshift-machine-config-operator kube-rbac-proxy-crio kube-rbac-proxy-crio kube-rbac-proxy-crio master FALSE Ingress TCP 9978 openshift-etcd etcd etcd etcd-metrics master FALSE Ingress TCP 9979 openshift-etcd etcd etcd etcd-metrics master FALSE Ingress TCP 9980 openshift-etcd etcd etcd etcd master FALSE Ingress TCP 10250 Host system service kubelet master FALSE Ingress TCP 10256 openshift-ovn-kubernetes ovnkube ovnkube ovnkube-controller master FALSE Ingress TCP 10257 openshift-kube-controller-manager kube-controller-manager kube-controller-manager kube-controller-manager master FALSE Ingress TCP 10259 openshift-kube-scheduler scheduler openshift-kube-scheduler kube-scheduler master FALSE Ingress TCP 10357 openshift-kube-apiserver openshift-kube-apiserver-healthz kube-apiserver kube-apiserver-check-endpoints master FALSE Ingress TCP 17697 openshift-kube-apiserver openshift-kube-apiserver-healthz kube-apiserver kube-apiserver-check-endpoints master FALSE Ingress TCP 22623 openshift-machine-config-operator machine-config-server machine-config-server machine-config-server master FALSE Ingress TCP 22624 openshift-machine-config-operator machine-config-server machine-config-server machine-config-server master FALSE Ingress UDP 111 Host system service rpcbind master TRUE Ingress UDP 6081 openshift-ovn-kubernetes ovn-kubernetes geneve master FALSE Table 2.2. 
Worker node base flows Direction Protocol Port Namespace Service Pod Container Node Role Optional Ingress TCP 22 Host system service sshd worker TRUE Ingress TCP 111 Host system service rpcbind worker TRUE Ingress TCP 8798 openshift-machine-config-operator machine-config-daemon machine-config-daemon machine-config-daemon worker FALSE Ingress TCP 9001 openshift-machine-config-operator machine-config-daemon machine-config-daemon kube-rbac-proxy worker FALSE Ingress TCP 9100 openshift-monitoring node-exporter node-exporter kube-rbac-proxy worker FALSE Ingress TCP 9103 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-node worker FALSE Ingress TCP 9105 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-ovn-metrics worker FALSE Ingress TCP 9107 openshift-ovn-kubernetes egressip-node-healthcheck ovnkube-node ovnkube-controller worker FALSE Ingress TCP 9537 Host system service crio-metrics worker FALSE Ingress TCP 9637 openshift-machine-config-operator kube-rbac-proxy-crio kube-rbac-proxy-crio kube-rbac-proxy-crio worker FALSE Ingress TCP 10250 Host system service kubelet worker FALSE Ingress TCP 10256 openshift-ovn-kubernetes ovnkube ovnkube ovnkube-controller worker FALSE Ingress UDP 111 Host system service rpcbind worker TRUE Ingress UDP 6081 openshift-ovn-kubernetes ovn-kubernetes geneve worker FALSE 2.2.2. Additional network flows for OpenShift Container Platform on bare metal In addition to the base network flows, the following matrix describes the ingress flows to OpenShift Container Platform services that are specific to OpenShift Container Platform on bare metal. Table 2.3. OpenShift Container Platform on bare metal Direction Protocol Port Namespace Service Pod Container Node Role Optional Ingress TCP 53 openshift-dns dns-default dnf-default dns master FALSE Ingress TCP 5050 openshift-machine-api ironic-proxy ironic-proxy master FALSE Ingress TCP 6180 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6183 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6385 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6388 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 9444 openshift-kni-infra haproxy haproxy master FALSE Ingress TCP 9445 openshift-kni-infra haproxy haproxy master FALSE Ingress TCP 9447 openshift-machine-api metal3-baremetal-operator master FALSE Ingress TCP 18080 openshift-kni-infra coredns coredns master FALSE Ingress UDP 53 openshift-dns dns-default dnf-default dns master FALSE Ingress TCP 53 openshift-dns dns-default dnf-default dns worker FALSE Ingress TCP 80 openshift-ingress router-internal-default router-default router worker FALSE Ingress TCP 443 openshift-ingress router-internal-default router-default router worker FALSE Ingress TCP 1936 openshift-ingress router-internal-default router-default router worker FALSE Ingress TCP 18080 openshift-kni-infra coredns coredns worker FALSE Ingress UDP 53 openshift-dns dns-default dnf-default dns worker FALSE 2.2.3. Additional network flows for single-node OpenShift on bare metal In addition to the base network flows, the following matrix describes the ingress flows to OpenShift Container Platform services that are specific to single-node OpenShift on bare metal. Table 2.4. 
Single-node OpenShift on bare metal Direction Protocol Port Namespace Service Pod Container Node Role Optional Ingress TCP 80 openshift-ingress router-internal-default router-default router master FALSE Ingress TCP 443 openshift-ingress router-internal-default router-default router master FALSE Ingress TCP 1936 openshift-ingress router-internal-default router-default router master FALSE Ingress TCP 10258 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10260 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10300 openshift-cluster-csi-drivers csi-livenessprobe csi-driver-node csi-driver master FALSE Ingress TCP 10309 openshift-cluster-csi-drivers csi-node-driver csi-driver-node csi-node-driver-registrar master FALSE 2.2.4. Additional network flows for OpenShift Container Platform on AWS In addition to the base network flows, the following matrix describes the ingress flows to OpenShift Container Platform services that are specific to OpenShift Container Platform on AWS. Table 2.5. OpenShift Container Platform on AWS Direction Protocol Port Namespace Service Pod Container Node Role Optional Ingress TCP 10258 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10260 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10300 openshift-cluster-csi-drivers csi-livenessprobe csi-driver-node csi-driver master FALSE Ingress TCP 10309 openshift-cluster-csi-drivers csi-node-driver csi-driver-node csi-node-driver-registrar master FALSE Ingress TCP 80 openshift-ingress router-default router-default router worker FALSE Ingress TCP 443 openshift-ingress router-default router-default router worker FALSE Ingress TCP 10300 openshift-cluster-csi-drivers csi-livenessprobe csi-driver-node csi-driver worker FALSE Ingress TCP 10309 openshift-cluster-csi-drivers csi-node-driver csi-driver-node csi-node-driver-registrar worker FALSE 2.2.5. Additional network flows for single-node OpenShift on AWS In addition to the base network flows, the following matrix describes the ingress flows to OpenShift Container Platform services that are specific to single-node OpenShift on AWS. Table 2.6. Single-node OpenShift on AWS Direction Protocol Port Namespace Service Pod Container Node Role Optional Ingress TCP 80 openshift-ingress router-default router-default router master FALSE Ingress TCP 443 openshift-ingress router-default router-default router master FALSE Ingress TCP 10258 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10260 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10300 openshift-cluster-csi-drivers csi-livenessprobe csi-driver-node csi-driver master FALSE Ingress TCP 10309 openshift-cluster-csi-drivers csi-node-driver csi-driver-node csi-node-driver-registrar master FALSE | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installation_configuration/configuring-firewall |
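As a quick, hedged way to sanity-check the allowlist described in this chapter, the sketch below probes a small sample of the required endpoints over TCP 443 from a host behind the firewall. The host list is taken from the tables above, and curl is used without -f so that an HTTP 4xx response (which still proves the endpoint is reachable) is not reported as a failure.
for host in registry.redhat.io registry.access.redhat.com quay.io cdn01.quay.io sso.redhat.com api.openshift.com mirror.openshift.com console.redhat.com; do
  # Only TCP/TLS reachability is tested; the HTTP status code is ignored on purpose.
  if curl -s -o /dev/null --connect-timeout 5 "https://${host}"; then
    echo "reachable   ${host}:443"
  else
    echo "unreachable ${host}:443"
  fi
done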
Providing feedback on Workload Availability for Red Hat OpenShift documentation | Providing feedback on Workload Availability for Red Hat OpenShift documentation We appreciate your feedback on our documentation. Let us know how we can improve it. To do so: Go to the JIRA website. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Enter your username in the Reporter field. Enter the affected versions in the Affects Version/s field. Click Create at the bottom of the dialog. | null | https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/25.1/html/release_notes/proc_providing-feedback-on-workload-availability-for-red-hat-openshift-documentation_preface |
8.11. NFS References | 8.11. NFS References Administering an NFS server can be a challenge. Many options, including quite a few not mentioned in this chapter, are available for exporting or mounting NFS shares. For more information, see the following sources: Installed Documentation man mount - Contains a comprehensive look at mount options for both NFS server and client configurations. man fstab - Provides detail for the format of the /etc/fstab file used to mount file systems at boot-time. man nfs - Provides details on NFS-specific file system export and mount options. man exports - Shows common options used in the /etc/exports file when exporting NFS file systems. Useful Websites http://linux-nfs.org - The current site for developers where project status updates can be viewed. http://nfs.sourceforge.net/ - The old home for developers which still contains a lot of useful information. http://www.citi.umich.edu/projects/nfsv4/linux/ - An NFSv4 for Linux 2.6 kernel resource. http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.111.4086 - An excellent whitepaper on the features and enhancements of the NFS Version 4 protocol. Related Books Managing NFS and NIS by Hal Stern, Mike Eisler, and Ricardo Labiaga; O'Reilly & Associates - Makes an excellent reference guide for the many different NFS export and mount options available. NFS Illustrated by Brent Callaghan; Addison-Wesley Publishing Company - Provides comparisons of NFS to other network file systems and shows, in detail, how NFS communication occurs. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/s1-nfs-additional-resources |
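To illustrate the mount options and /etc/fstab format that the man pages above document, here is a hedged example; the server name, export path, mount point, and options are assumptions chosen for illustration only.
# One-off mount of an NFS export (options are illustrative, not prescriptive):
mount -t nfs -o vers=4.1,rw nfsserver.example.com:/export/projects /mnt/projects
# Equivalent persistent entry in /etc/fstab:
# nfsserver.example.com:/export/projects  /mnt/projects  nfs  vers=4.1,rw,_netdev  0 0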
A.14. OProfile | A.14. OProfile OProfile is a low overhead, system-wide performance monitoring tool provided by the oprofile package. It uses the performance monitoring hardware on the processor to retrieve information about the kernel and executables on the system, such as when memory is referenced, the number of second-level cache requests, and the number of hardware interrupts received. OProfile is also able to profile applications that run in a Java Virtual Machine (JVM). OProfile provides the following tools. Note that the legacy opcontrol tool and the new operf tool are mutually exclusive. ophelp Displays available events for the system's processor along with a brief description of each. opimport Converts sample database files from a foreign binary format to the native format for the system. Only use this option when analyzing a sample database from a different architecture. opannotate Creates annotated source for an executable if the application was compiled with debugging symbols. opcontrol Configures which data is collected in a profiling run. operf Intended to replace opcontrol . The operf tool uses the Linux Performance Events subsystem, allowing you to target your profiling more precisely, as a single process or system-wide, and allowing OProfile to co-exist better with other tools using the performance monitoring hardware on your system. Unlike opcontrol , no initial setup is required, and it can be used without the root privileges unless the --system-wide option is in use. opreport Retrieves profile data. oprofiled Runs as a daemon to periodically write sample data to disk. Legacy mode ( opcontrol , oprofiled , and post-processing tools) remains available, but is no longer the recommended profiling method. For further information about any of these commands, see the OProfile man page: | [
"man oprofile"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-oprofile |
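A hedged example of a basic operf session follows; the target binary ./myapp is an assumption, and the tools shown (operf, opreport, ophelp) are the ones described above.
operf ./myapp                 # profile a single process; samples are written to ./oprofile_data
opreport                      # summarize the collected samples per binary
opreport --symbols ./myapp    # per-symbol breakdown (needs debugging symbols for readable names)
ophelp                        # list the hardware events available on this processor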
Chapter 7. Expanding persistent volumes | Chapter 7. Expanding persistent volumes 7.1. Enabling volume expansion support Before you can expand persistent volumes, the StorageClass object must have the allowVolumeExpansion field set to true . Procedure Edit the StorageClass object and add the allowVolumeExpansion attribute by running the following command: USD oc edit storageclass <storage_class_name> 1 1 Specifies the name of the storage class. The following example demonstrates adding this line at the bottom of the storage class configuration. apiVersion: storage.k8s.io/v1 kind: StorageClass ... parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1 1 Setting this attribute to true allows PVCs to be expanded after creation. 7.2. Expanding CSI volumes You can use the Container Storage Interface (CSI) to expand storage volumes after they have already been created. CSI volume expansion does not support the following: Recovering from failure when expanding volumes Shrinking Prerequisites The underlying CSI driver supports resize. Dynamic provisioning is used. The controlling StorageClass object has allowVolumeExpansion set to true . For more information, see "Enabling volume expansion support." Procedure For the persistent volume claim (PVC), set .spec.resources.requests.storage to the desired new size. Watch the status.conditions field of the PVC to see if the resize has completed. OpenShift Container Platform adds the Resizing condition to the PVC during expansion, which is removed after expansion completes. 7.3. Expanding FlexVolume with a supported driver When using FlexVolume to connect to your back-end storage system, you can expand persistent storage volumes after they have already been created. This is done by manually updating the persistent volume claim (PVC) in OpenShift Container Platform. FlexVolume allows expansion if the driver is set with RequiresFSResize to true . The FlexVolume can be expanded on pod restart. Similar to other volume types, FlexVolume volumes can also be expanded when in use by a pod. Prerequisites The underlying volume driver supports resize. The driver is set with the RequiresFSResize capability to true . Dynamic provisioning is used. The controlling StorageClass object has allowVolumeExpansion set to true . Procedure To use resizing in the FlexVolume plugin, you must implement the ExpandableVolumePlugin interface using these methods: RequiresFSResize If true , updates the capacity directly. If false , calls the ExpandFS method to finish the filesystem resize. ExpandFS If true , calls ExpandFS to resize filesystem after physical volume expansion is done. The volume driver can also perform physical volume resize together with filesystem resize. Important Because OpenShift Container Platform does not support installation of FlexVolume plugins on control plane nodes, it does not support control-plane expansion of FlexVolume. 7.4. Expanding local volumes You can manually expand persistent volumes (PVs) and persistent volume claims (PVCs) created by using the local storage operator (LSO). Procedure Expand the underlying devices. Ensure that appropriate capacity is available on these devices. Update the corresponding PV objects to match the new device sizes by editing the .spec.capacity field of the PV. For the storage class that is used for binding the PVC to PVet, set allowVolumeExpansion:true . For the PVC, set .spec.resources.requests.storage to match the new size. 
Kubelet should automatically expand the underlying file system on the volume, if necessary, and update the status field of the PVC to reflect the new size. 7.5. Expanding persistent volume claims (PVCs) with a file system Expanding PVCs based on volume types that need file system resizing, such as GCE, EBS, and Cinder, is a two-step process. First, expand the volume objects in the cloud provider. Second, expand the file system on the node. Expanding the file system on the node only happens when a new pod is started with the volume. Prerequisites The controlling StorageClass object must have allowVolumeExpansion set to true . Procedure Edit the PVC and request a new size by editing spec.resources.requests . For example, the following expands the ebs PVC to 8 Gi: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: "storageClassWithFlagSet" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1 1 Updating spec.resources.requests to a larger amount expands the PVC. After the cloud provider object has finished resizing, the PVC is set to FileSystemResizePending . Check the condition by entering the following command: USD oc describe pvc <pvc_name> When the cloud provider object has finished resizing, the PersistentVolume object reflects the newly requested size in PersistentVolume.Spec.Capacity . At this point, you can create or recreate a new pod from the PVC to finish the file system resizing. Once the pod is running, the newly requested size is available and the FileSystemResizePending condition is removed from the PVC. 7.6. Recovering from failure when expanding volumes If expanding underlying storage fails, the OpenShift Container Platform administrator can manually recover the persistent volume claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller. Procedure Mark the persistent volume (PV) that is bound to the PVC with the Retain reclaim policy. This can be done by editing the PV and changing persistentVolumeReclaimPolicy to Retain . Delete the PVC. Manually edit the PV and delete the claimRef entry from the PV specs to ensure that the newly created PVC can bind to the PV marked Retain . This marks the PV as Available . Re-create the PVC in a smaller size, or a size that can be allocated by the underlying storage provider. Set the volumeName field of the PVC to the name of the PV. This binds the PVC to the provisioned PV only. Restore the reclaim policy on the PV. Additional resources The controlling StorageClass object has allowVolumeExpansion set to true (see Enabling volume expansion support ). | [
"oc edit storageclass <storage_class_name> 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: \"storageClassWithFlagSet\" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1",
"oc describe pvc <pvc_name>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/storage/expanding-persistent-volumes |
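As a hedged, non-interactive alternative to editing the PVC shown above, the same size request can be applied with a patch; the PVC name ebs and the 8Gi size follow the chapter's example and are assumptions for any other cluster.
oc patch pvc ebs --type=merge -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'
# Watch for the FileSystemResizePending condition described above:
oc get pvc ebs -o jsonpath='{.status.conditions}{"\n"}'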
6.0 Release Notes | 6.0 Release Notes Red Hat Enterprise Linux 6 Release Notes for Red Hat Enterprise Linux 6 Red Hat Engineering Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_release_notes/index |
Chapter 8. High availability for hosted control planes | Chapter 8. High availability for hosted control planes 8.1. About high availability for hosted control planes You can maintain high availability (HA) of hosted control planes by implementing the following actions: Recover etcd members for a hosted cluster. Back up and restore etcd for a hosted cluster. Perform a disaster recovery process for a hosted cluster. 8.1.1. Impact of the failed management cluster component If the management cluster component fails, your workload remains unaffected. In the OpenShift Container Platform management cluster, the control plane is decoupled from the data plane to provide resiliency. The following table covers the impact of a failed management cluster component on the control plane and the data plane. However, the table does not cover all scenarios for the management cluster component failures. Table 8.1. Impact of the failed component on hosted control planes Name of the failed component Hosted control plane API status Hosted cluster data plane status Worker node Available Available Availability zone Available Available Management cluster control plane Available Available Management cluster control plane and worker nodes Not available Available 8.2. Recovering an unhealthy etcd cluster In a highly available control plane, three etcd pods run as a part of a stateful set in an etcd cluster. To recover an etcd cluster, identify unhealthy etcd pods by checking the etcd cluster health. 8.2.1. Checking the status of an etcd cluster You can check the status of the etcd cluster health by logging into any etcd pod. Procedure Log in to an etcd pod by entering the following command: USD oc rsh -n openshift-etcd -c etcd <etcd_pod_name> Print the health status of an etcd cluster by entering the following command: sh-4.4# etcdctl endpoint status -w table Example output +------------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +------------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://192.168.1xxx.20:2379 | 8fxxxxxxxxxx | 3.5.12 | 123 MB | false | false | 10 | 180156 | 180156 | | | https://192.168.1xxx.21:2379 | a5xxxxxxxxxx | 3.5.12 | 122 MB | false | false | 10 | 180156 | 180156 | | | https://192.168.1xxx.22:2379 | 7cxxxxxxxxxx | 3.5.12 | 124 MB | true | false | 10 | 180156 | 180156 | | +-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ 8.2.2. Recovering a failing etcd pod Each etcd pod of a 3-node cluster has its own persistent volume claim (PVC) to store its data. An etcd pod might fail because of corrupted or missing data. You can recover a failing etcd pod and its PVC. Procedure To confirm that the etcd pod is failing, enter the following command: USD oc get pods -l app=etcd -n openshift-etcd Example output NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 64m etcd-1 2/2 Running 0 45m etcd-2 1/2 CrashLoopBackOff 1 (5s ago) 64m The failing etcd pod might have the CrashLoopBackOff or Error status. 
Delete the failing pod and its PVC by entering the following command: USD oc delete pods etcd-2 -n openshift-etcd Verification Verify that a new etcd pod is up and running by entering the following command: USD oc get pods -l app=etcd -n openshift-etcd Example output NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 67m etcd-1 2/2 Running 0 48m etcd-2 2/2 Running 0 2m2s 8.3. Backing up and restoring etcd in an on-premise environment You can back up and restore etcd on a hosted cluster in an on-premise environment to fix failures. 8.3.1. Backing up and restoring etcd on a hosted cluster in an on-premise environment By backing up and restoring etcd on a hosted cluster, you can fix failures, such as corrupted or missing data in an etcd member of a three node cluster. If multiple members of the etcd cluster encounter data loss or have a CrashLoopBackOff status, this approach helps prevent an etcd quorum loss. Important This procedure requires API downtime. Prerequisites The oc and jq binaries have been installed. Procedure First, set up your environment variables: Set up environment variables for your hosted cluster by entering the following commands, replacing values as necessary: USD CLUSTER_NAME=my-cluster USD HOSTED_CLUSTER_NAMESPACE=clusters USD CONTROL_PLANE_NAMESPACE="USD{HOSTED_CLUSTER_NAMESPACE}-USD{CLUSTER_NAME}" Pause reconciliation of the hosted cluster by entering the following command, replacing values as necessary: USD oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} \ -p '{"spec":{"pausedUntil":"true"}}' --type=merge , take a snapshot of etcd by using one of the following methods: Use a previously backed-up snapshot of etcd. If you have an available etcd pod, take a snapshot from the active etcd pod by completing the following steps: List etcd pods by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd Take a snapshot of the pod database and save it locally to your machine by entering the following commands: USD ETCD_POD=etcd-0 USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- \ env ETCDCTL_API=3 /usr/bin/etcdctl \ --cacert /etc/etcd/tls/etcd-ca/ca.crt \ --cert /etc/etcd/tls/client/etcd-client.crt \ --key /etc/etcd/tls/client/etcd-client.key \ --endpoints=https://localhost:2379 \ snapshot save /var/lib/snapshot.db Verify that the snapshot is successful by entering the following command: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- \ env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status \ /var/lib/snapshot.db Make a local copy of the snapshot by entering the following command: USD oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/snapshot.db \ /tmp/etcd.snapshot.db Make a copy of the snapshot database from etcd persistent storage: List etcd pods by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd Find a pod that is running and set its name as the value of ETCD_POD: ETCD_POD=etcd-0 , and then copy its snapshot database by entering the following command: USD oc cp -c etcd \ USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/data/member/snap/db \ /tmp/etcd.snapshot.db , scale down the etcd statefulset by entering the following command: USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=0 Delete volumes for second and third members by entering the following command: USD oc delete -n USD{CONTROL_PLANE_NAMESPACE} pvc/data-etcd-1 pvc/data-etcd-2 Create a pod to access the first etcd 
member's data: Get the etcd image by entering the following command: USD ETCD_IMAGE=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd \ -o jsonpath='{ .spec.template.spec.containers[0].image }') Create a pod that allows access to etcd data: USD cat << EOF | oc apply -n USD{CONTROL_PLANE_NAMESPACE} -f - apiVersion: apps/v1 kind: Deployment metadata: name: etcd-data spec: replicas: 1 selector: matchLabels: app: etcd-data template: metadata: labels: app: etcd-data spec: containers: - name: access image: USDETCD_IMAGE volumeMounts: - name: data mountPath: /var/lib command: - /usr/bin/bash args: - -c - |- while true; do sleep 1000 done volumes: - name: data persistentVolumeClaim: claimName: data-etcd-0 EOF Check the status of the etcd-data pod and wait for it to be running by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data Get the name of the etcd-data pod by entering the following command: USD DATA_POD=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} pods --no-headers \ -l app=etcd-data -o name | cut -d/ -f2) Copy an etcd snapshot into the pod by entering the following command: USD oc cp /tmp/etcd.snapshot.db \ USD{CONTROL_PLANE_NAMESPACE}/USD{DATA_POD}:/var/lib/restored.snap.db Remove old data from the etcd-data pod by entering the following commands: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm -rf /var/lib/data USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- mkdir -p /var/lib/data Restore the etcd snapshot by entering the following command: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- \ etcdutl snapshot restore /var/lib/restored.snap.db \ --data-dir=/var/lib/data --skip-hash-check \ --name etcd-0 \ --initial-cluster-token=etcd-cluster \ --initial-cluster etcd-0=https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-1=https://etcd-1.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-2=https://etcd-2.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380 \ --initial-advertise-peer-urls https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380 Remove the temporary etcd snapshot from the pod by entering the following command: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- \ rm /var/lib/restored.snap.db Delete data access deployment by entering the following command: USD oc delete -n USD{CONTROL_PLANE_NAMESPACE} deployment/etcd-data Scale up the etcd cluster by entering the following command: USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=3 Wait for the etcd member pods to return and report as available by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd -w Restore reconciliation of the hosted cluster by entering the following command: USD oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} \ -p '{"spec":{"pausedUntil":"null"}}' --type=merge 8.4. Backing up and restoring etcd on AWS You can back up and restore etcd on a hosted cluster on Amazon Web Services (AWS) to fix failures. 8.4.1. Taking a snapshot of etcd for a hosted cluster To back up etcd for a hosted cluster, you must take a snapshot of etcd. Later, you can restore etcd by using the snapshot. Important This procedure requires API downtime. 
Procedure Pause reconciliation of the hosted cluster by entering the following command: USD oc patch -n clusters hostedclusters/<hosted_cluster_name> \ -p '{"spec":{"pausedUntil":"true"}}' --type=merge Stop all etcd-writer deployments by entering the following command: USD oc scale deployment -n <hosted_cluster_namespace> --replicas=0 \ kube-apiserver openshift-apiserver openshift-oauth-apiserver To take an etcd snapshot, use the exec command in each etcd container by entering the following command: USD oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- \ env ETCDCTL_API=3 /usr/bin/etcdctl \ --cacert /etc/etcd/tls/etcd-ca/ca.crt \ --cert /etc/etcd/tls/client/etcd-client.crt \ --key /etc/etcd/tls/client/etcd-client.key \ --endpoints=localhost:2379 \ snapshot save /var/lib/data/snapshot.db To check the snapshot status, use the exec command in each etcd container by running the following command: USD oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- \ env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status \ /var/lib/data/snapshot.db Copy the snapshot data to a location where you can retrieve it later, such as an S3 bucket. See the following example. Note The following example uses signature version 2. If you are in a region that supports signature version 4, such as the us-east-2 region, use signature version 4. Otherwise, when copying the snapshot to an S3 bucket, the upload fails. Example BUCKET_NAME=somebucket CLUSTER_NAME=cluster_name FILEPATH="/USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db" CONTENT_TYPE="application/x-compressed-tar" DATE_VALUE=`date -R` SIGNATURE_STRING="PUT\n\nUSD{CONTENT_TYPE}\nUSD{DATE_VALUE}\nUSD{FILEPATH}" ACCESS_KEY=accesskey SECRET_KEY=secret SIGNATURE_HASH=`echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac USD{SECRET_KEY} -binary | base64` HOSTED_CLUSTER_NAMESPACE=hosted_cluster_namespace oc exec -it etcd-0 -n USD{HOSTED_CLUSTER_NAMESPACE} -- curl -X PUT -T "/var/lib/data/snapshot.db" \ -H "Host: USD{BUCKET_NAME}.s3.amazonaws.com" \ -H "Date: USD{DATE_VALUE}" \ -H "Content-Type: USD{CONTENT_TYPE}" \ -H "Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}" \ https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{CLUSTER_NAME}-snapshot.db To restore the snapshot on a new cluster later, save the encryption secret that the hosted cluster references. Get the secret encryption key by entering the following command: USD oc get hostedcluster <hosted_cluster_name> \ -o=jsonpath='{.spec.secretEncryption.aescbc}' {"activeKey":{"name":"<hosted_cluster_name>-etcd-encryption-key"}} Save the secret encryption key by entering the following command: USD oc get secret <hosted_cluster_name>-etcd-encryption-key \ -o=jsonpath='{.data.key}' You can decrypt this key when restoring a snapshot on a new cluster. Restart all etcd-writer deployments by entering the following command: USD oc scale deployment -n <control_plane_namespace> --replicas=3 \ kube-apiserver openshift-apiserver openshift-oauth-apiserver Resume the reconciliation of the hosted cluster by entering the following command: USD oc patch -n <hosted_cluster_namespace> \ -p '[\{"op": "remove", "path": "/spec/pausedUntil"}]' --type=json steps Restore the etcd snapshot. 8.4.2. Restoring an etcd snapshot on a hosted cluster If you have a snapshot of etcd from your hosted cluster, you can restore it. Currently, you can restore an etcd snapshot only during cluster creation. 
To restore an etcd snapshot, you modify the output from the create cluster --render command and define a restoreSnapshotURL value in the etcd section of the HostedCluster specification. Note The --render flag in the hcp create command does not render the secrets. To render the secrets, you must use both the --render and the --render-sensitive flags in the hcp create command. Prerequisites You took an etcd snapshot on a hosted cluster. Procedure On the aws command-line interface (CLI), create a pre-signed URL so that you can download your etcd snapshot from S3 without passing credentials to the etcd deployment: ETCD_SNAPSHOT=USD{ETCD_SNAPSHOT:-"s3://USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db"} ETCD_SNAPSHOT_URL=USD(aws s3 presign USD{ETCD_SNAPSHOT}) Modify the HostedCluster specification to refer to the URL: spec: etcd: managed: storage: persistentVolume: size: 4Gi type: PersistentVolume restoreSnapshotURL: - "USD{ETCD_SNAPSHOT_URL}" managementType: Managed Ensure that the secret that you referenced from the spec.secretEncryption.aescbc value contains the same AES key that you saved in the steps. 8.5. Backing up and restoring a hosted cluster on OpenShift Virtualization You can back up and restore a hosted cluster on OpenShift Virtualization to fix failures. 8.5.1. Backing up a hosted cluster on OpenShift Virtualization When you back up a hosted cluster on OpenShift Virtualization, the hosted cluster can remain running. The backup contains the hosted control plane components and the etcd for the hosted cluster. When the hosted cluster is not running compute nodes on external infrastructure, hosted cluster workload data that is stored in persistent volume claims (PVCs) that are provisioned by KubeVirt CSI are also backed up. The backup does not contain any KubeVirt virtual machines (VMs) that are used as compute nodes. Those VMs are automatically re-created after the restore process is completed. Procedure Create a Velero backup resource by creating a YAML file that is similar to the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: hc-clusters-hosted-backup namespace: openshift-adp labels: velero.io/storage-location: default spec: includedNamespaces: 1 - clusters - clusters-hosted includedResources: - sa - role - rolebinding - deployment - statefulset - pv - pvc - bmh - configmap - infraenv - priorityclasses - pdb - hostedcluster - nodepool - secrets - hostedcontrolplane - cluster - datavolume - service - route excludedResources: [ ] labelSelector: 2 matchExpressions: - key: 'hypershift.openshift.io/is-kubevirt-rhcos' operator: 'DoesNotExist' storageLocation: default preserveNodePorts: true ttl: 4h0m0s snapshotMoveData: true 3 datamover: "velero" 4 defaultVolumesToFsBackup: false 5 1 This field selects the namespaces from the objects to back up. Include namespaces from both the hosted cluster and the hosted control plane. In this example, clusters is a namespace from the hosted cluster and clusters-hosted is a namespace from the hosted control plane. By default, the HostedControlPlane namespace is clusters-<hosted_cluster_name> . 2 The boot image of the VMs that are used as the hosted cluster nodes are stored in large PVCs. To reduce backup time and storage size, you can filter those PVCs out of the backup by adding this label selector. 3 This field and the datamover field enable automatically uploading the CSI VolumeSnapshots to remote cloud storage. 
4 This field and the snapshotMoveData field enable automatically uploading the CSI VolumeSnapshots to remote cloud storage. 5 This field indicates whether pod volume file system backup is used for all volumes by default. Set this value to false to back up the PVCs that you want. Apply the changes to the YAML file by entering the following command: USD oc apply -f <backup_file_name>.yaml Replace <backup_file_name> with the name of your file. Monitor the backup process in the backup object status and in the Velero logs. To monitor the backup object status, enter the following command: USD watch "oc get backups.velero.io -n openshift-adp <backup_file_name> -o jsonpath='{.status}' | jq" To monitor the Velero logs, enter the following command: USD oc logs -n openshift-adp -ldeploy=velero -f Verification When the status.phase field is Completed , the backup process is considered complete. 8.5.2. Restoring a hosted cluster on OpenShift Virtualization After you back up a hosted cluster on OpenShift Virtualization, you can restore the backup. Note The restore process can be completed only on the same management cluster where you created the backup. Procedure Ensure that no pods or persistent volume claims (PVCs) are present in the HostedControlPlane namespace. Delete the following objects from the management cluster: HostedCluster NodePool PVCs Create a restoration manifest YAML file that is similar to the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: hc-clusters-hosted-restore namespace: openshift-adp spec: backupName: hc-clusters-hosted-backup restorePVs: true 1 existingResourcePolicy: update 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io 1 This field starts the recovery of pods with the included persistent volumes. 2 Setting existingResourcePolicy to update ensures that any existing objects are overwritten with backup content. This action can cause issues with objects that contain immutable fields, which is why you deleted the HostedCluster , node pools, and PVCs. If you do not set this policy, the Velero engine skips the restoration of objects that already exist. Apply the changes to the YAML file by entering the following command: USD oc apply -f <restore_resource_file_name>.yaml Replace <restore_resource_file_name> with the name of your file. Monitor the restore process by checking the restore status field and the Velero logs. To check the restore status field, enter the following command: USD watch "oc get restores.velero.io -n openshift-adp <backup_file_name> -o jsonpath='{.status}' | jq" To check the Velero logs, enter the following command: USD oc logs -n openshift-adp -ldeploy=velero -f Verification When the status.phase field is Completed , the restore process is considered complete. Next steps After some time, the KubeVirt VMs are created and join the hosted cluster as compute nodes. Make sure that the hosted cluster workloads are running again as expected. 8.6. Disaster recovery for a hosted cluster in AWS You can recover a hosted cluster to the same region within Amazon Web Services (AWS). For example, you need disaster recovery when the upgrade of a management cluster fails and the hosted cluster is in a read-only state.
The disaster recovery process involves the following steps: Backing up the hosted cluster on the source management cluster Restoring the hosted cluster on a destination management cluster Deleting the hosted cluster from the source management cluster Your workloads remain running during the process. The Cluster API might be unavailable for a period, but that does not affect the services that are running on the worker nodes. Important Both the source management cluster and the destination management cluster must have the --external-dns flags to maintain the API server URL. That way, the server URL ends with https://api-sample-hosted.sample-hosted.aws.openshift.com . See the following example: Example: External DNS flags --external-dns-provider=aws \ --external-dns-credentials=<path_to_aws_credentials_file> \ --external-dns-domain-filter=<basedomain> If you do not include the --external-dns flags to maintain the API server URL, you cannot migrate the hosted cluster. 8.6.1. Overview of the backup and restore process The backup and restore process works as follows: On management cluster 1, which you can think of as the source management cluster, the control plane and workers interact by using the external DNS API. The external DNS API is accessible, and a load balancer sits between the management clusters. You take a snapshot of the hosted cluster, which includes etcd, the control plane, and the worker nodes. During this process, the worker nodes continue to try to access the external DNS API even if it is not accessible, the workloads are running, the control plane is saved in a local manifest file, and etcd is backed up to an S3 bucket. The data plane is active and the control plane is paused. On management cluster 2, which you can think of as the destination management cluster, you restore etcd from the S3 bucket and restore the control plane from the local manifest file. During this process, the external DNS API is stopped, the hosted cluster API becomes inaccessible, and any workers that use the API are unable to update their manifest files, but the workloads are still running. The external DNS API is accessible again, and the worker nodes use it to move to management cluster 2. The external DNS API can access the load balancer that points to the control plane. On management cluster 2, the control plane and worker nodes interact by using the external DNS API. The resources are deleted from management cluster 1, except for the S3 backup of etcd. If you try to set up the hosted cluster again on management cluster 1, it will not work. 8.6.2. Backing up a hosted cluster To recover your hosted cluster in your target management cluster, you first need to back up all of the relevant data.
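The scripts in this procedure and in the following sections assume that a set of environment variables is already exported, including the hosted cluster name and namespace, the node pool names, the S3 bucket, the AWS credentials file, the backup directory, and the kubeconfig paths for the source and destination management clusters. The following is a minimal sketch with placeholder values; the exact values depend on your environment:
USD export HC_CLUSTER_NAME=<hosted_cluster_name>
USD export HC_CLUSTER_NS=clusters
USD export NODEPOOLS=<node_pool_name>
USD export BUCKET_NAME=<s3_bucket_name>
USD export AWS_CREDS=<path_to_aws_credentials_file>
USD export AWS_ZONE_ID=<route53_hosted_zone_id>
USD export HC_CLUSTER_DIR=<local_backup_directory>
USD export BACKUP_DIR=USD{HC_CLUSTER_DIR}/backup
USD export MGMT_KUBECONFIG=<source_management_cluster_kubeconfig>
USD export MGMT2_KUBECONFIG=<destination_management_cluster_kubeconfig>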
Procedure Create a configmap file to declare the source management cluster by entering this command: USD oc create configmap mgmt-parent-cluster -n default \ --from-literal=from=USD{MGMT_CLUSTER_NAME} Shut down the reconciliation in the hosted cluster and in the node pools by entering these commands: USD PAUSED_UNTIL="true" USD oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} \ -p '{"spec":{"pausedUntil":"'USD{PAUSED_UNTIL}'"}}' --type=merge USD oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 \ kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator USD PAUSED_UNTIL="true" USD oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} \ -p '{"spec":{"pausedUntil":"'USD{PAUSED_UNTIL}'"}}' --type=merge USD oc patch -n USD{HC_CLUSTER_NS} nodepools/USD{NODEPOOLS} \ -p '{"spec":{"pausedUntil":"'USD{PAUSED_UNTIL}'"}}' --type=merge USD oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 \ kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator Back up etcd and upload the data to an S3 bucket by running this bash script: Tip Wrap this script in a function and call it from the main function. # ETCD Backup ETCD_PODS="etcd-0" if [ "USD{CONTROL_PLANE_AVAILABILITY_POLICY}" = "HighlyAvailable" ]; then ETCD_PODS="etcd-0 etcd-1 etcd-2" fi for POD in USD{ETCD_PODS}; do # Create an etcd snapshot oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db FILEPATH="/USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db" CONTENT_TYPE="application/x-compressed-tar" DATE_VALUE=`date -R` SIGNATURE_STRING="PUT\n\nUSD{CONTENT_TYPE}\nUSD{DATE_VALUE}\nUSD{FILEPATH}" set +x ACCESS_KEY=USD(grep aws_access_key_id USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed "s/ //g") SECRET_KEY=USD(grep aws_secret_access_key USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed "s/ //g") SIGNATURE_HASH=USD(echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac "USD{SECRET_KEY}" -binary | base64) set -x # FIXME: this is pushing to the OIDC bucket oc exec -it etcd-0 -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- curl -X PUT -T "/var/lib/data/snapshot.db" \ -H "Host: USD{BUCKET_NAME}.s3.amazonaws.com" \ -H "Date: USD{DATE_VALUE}" \ -H "Content-Type: USD{CONTENT_TYPE}" \ -H "Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}" \ https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db done For more information about backing up etcd, see "Backing up and restoring etcd on a hosted cluster". Back up Kubernetes and OpenShift Container Platform objects by entering the following commands. 
You need to back up the following objects: HostedCluster and NodePool objects from the HostedCluster namespace HostedCluster secrets from the HostedCluster namespace HostedControlPlane from the Hosted Control Plane namespace Cluster from the Hosted Control Plane namespace AWSCluster , AWSMachineTemplate , and AWSMachine from the Hosted Control Plane namespace MachineDeployments , MachineSets , and Machines from the Hosted Control Plane namespace ControlPlane secrets from the Hosted Control Plane namespace USD mkdir -p USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS} \ USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD chmod 700 USD{BACKUP_DIR}/namespaces/ # HostedCluster USD echo "Backing Up HostedCluster Objects:" USD oc get hc USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml USD echo "--> HostedCluster" USD sed -i '' -e '/^status:USD/,USDd' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml # NodePool USD oc get np USD{NODEPOOLS} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml USD echo "--> NodePool" USD sed -i '' -e '/^status:USD/,USD d' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml # Secrets in the HC Namespace USD echo "--> HostedCluster Secrets:" for s in USD(oc get secret -n USD{HC_CLUSTER_NS} | grep "^USD{HC_CLUSTER_NAME}" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-USD{s}.yaml done # Secrets in the HC Control Plane Namespace USD echo "--> HostedCluster ControlPlane Secrets:" for s in USD(oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} | egrep -v "docker|service-account-token|oauth-openshift|NAME|token-USD{HC_CLUSTER_NAME}" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-USD{s}.yaml done # Hosted Control Plane USD echo "--> HostedControlPlane:" USD oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-USD{HC_CLUSTER_NAME}.yaml # Cluster USD echo "--> Cluster:" USD CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\*} | grep USD{HC_CLUSTER_NAME}) USD oc get cluster USD{CL_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-USD{HC_CLUSTER_NAME}.yaml # AWS Cluster USD echo "--> AWS Cluster:" USD oc get awscluster USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-USD{HC_CLUSTER_NAME}.yaml # AWS MachineTemplate USD echo "--> AWS Machine Template:" USD oc get awsmachinetemplate USD{NODEPOOLS} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-USD{HC_CLUSTER_NAME}.yaml # AWS Machines USD echo "--> AWS Machine:" USD CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\*} | grep USD{HC_CLUSTER_NAME}) for s in USD(oc get awsmachines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --no-headers | grep USD{CL_NAME} | cut -f1 -d\ ); do oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} awsmachines USDs -o yaml > 
USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-USD{s}.yaml done # MachineDeployments USD echo "--> HostedCluster MachineDeployments:" for s in USD(oc get machinedeployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do mdp_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-USD{mdp_name}.yaml done # MachineSets USD echo "--> HostedCluster MachineSets:" for s in USD(oc get machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do ms_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-USD{ms_name}.yaml done # Machines USD echo "--> HostedCluster Machine:" for s in USD(oc get machine -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do m_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-USD{m_name}.yaml done Clean up the ControlPlane routes by entering this command: USD oc delete routes -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all By entering that command, you enable the ExternalDNS Operator to delete the Route53 entries. Verify that the Route53 entries are clean by running this script: function clean_routes() { if [[ -z "USD{1}" ]];then echo "Give me the NS where to clean the routes" exit 1 fi # Constants if [[ -z "USD{2}" ]];then echo "Give me the Route53 zone ID" exit 1 fi ZONE_ID=USD{2} ROUTES=10 timeout=40 count=0 # This allows us to remove the ownership in the AWS for the API route oc delete route -n USD{1} --all while [ USD{ROUTES} -gt 2 ] do echo "Waiting for ExternalDNS Operator to clean the DNS Records in AWS Route53 where the zone id is: USD{ZONE_ID}..." echo "Try: (USD{count}/USD{timeout})" sleep 10 if [[ USDcount -eq timeout ]];then echo "Timeout waiting for cleaning the Route53 DNS records" exit 1 fi count=USD((count+1)) ROUTES=USD(aws route53 list-resource-record-sets --hosted-zone-id USD{ZONE_ID} --max-items 10000 --output json | grep -c USD{EXTERNAL_DNS_DOMAIN}) done } # SAMPLE: clean_routes "<HC ControlPlane Namespace>" "<AWS_ZONE_ID>" clean_routes "USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}" "USD{AWS_ZONE_ID}" Verification Check all of the OpenShift Container Platform objects and the S3 bucket to verify that everything looks as expected. Next steps Restore your hosted cluster. 8.6.3. Restoring a hosted cluster Gather all of the objects that you backed up and restore them in your destination management cluster. Prerequisites You backed up the data from your source management cluster. Tip Ensure that the kubeconfig file of the destination management cluster is placed as it is set in the KUBECONFIG variable or, if you use the script, in the MGMT2_KUBECONFIG variable. Use export KUBECONFIG=<Kubeconfig FilePath> or, if you use the script, use export KUBECONFIG=USD{MGMT2_KUBECONFIG} .
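Before you apply any of the backed-up objects, you can confirm that your session points to the destination management cluster rather than the source one. A minimal sketch, assuming the MGMT2_KUBECONFIG variable that the scripts in this section use:
USD export KUBECONFIG=USD{MGMT2_KUBECONFIG}
USD oc config current-context
USD oc get nodes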
Procedure Verify that the new management cluster does not contain any namespaces from the cluster that you are restoring by entering these commands: # Just in case USD export KUBECONFIG=USD{MGMT2_KUBECONFIG} USD BACKUP_DIR=USD{HC_CLUSTER_DIR}/backup # Namespace deletion in the destination Management cluster USD oc delete ns USD{HC_CLUSTER_NS} || true USD oc delete ns USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} || true Re-create the deleted namespaces by entering these commands: # Namespace creation USD oc new-project USD{HC_CLUSTER_NS} USD oc new-project USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} Restore the secrets in the HC namespace by entering this command: USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-* Restore the objects in the HostedCluster control plane namespace by entering these commands: # Secrets USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-* # Cluster USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-* If you are recovering the nodes and the node pool to reuse AWS instances, restore the objects in the HC control plane namespace by entering these commands: # AWS USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-* # Machines USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-* Restore the etcd data and the hosted cluster by running this bash script: ETCD_PODS="etcd-0" if [ "USD{CONTROL_PLANE_AVAILABILITY_POLICY}" = "HighlyAvailable" ]; then ETCD_PODS="etcd-0 etcd-1 etcd-2" fi HC_RESTORE_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-restore.yaml HC_BACKUP_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml HC_NEW_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-new.yaml cat USD{HC_BACKUP_FILE} > USD{HC_NEW_FILE} cat > USD{HC_RESTORE_FILE} <<EOF restoreSnapshotURL: EOF for POD in USD{ETCD_PODS}; do # Create a pre-signed URL for the etcd snapshot ETCD_SNAPSHOT="s3://USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db" ETCD_SNAPSHOT_URL=USD(AWS_DEFAULT_REGION=USD{MGMT2_REGION} aws s3 presign USD{ETCD_SNAPSHOT}) # FIXME no CLI support for restoreSnapshotURL yet cat >> USD{HC_RESTORE_FILE} <<EOF - "USD{ETCD_SNAPSHOT_URL}" EOF done cat USD{HC_RESTORE_FILE} if !
grep USD{HC_CLUSTER_NAME}-snapshot.db USD{HC_NEW_FILE}; then sed -i '' -e "/type: PersistentVolume/r USD{HC_RESTORE_FILE}" USD{HC_NEW_FILE} sed -i '' -e '/pausedUntil:/d' USD{HC_NEW_FILE} fi HC=USD(oc get hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} -o name || true) if [[ USD{HC} == "" ]];then echo "Deploying HC Cluster: USD{HC_CLUSTER_NAME} in USD{HC_CLUSTER_NS} namespace" oc apply -f USD{HC_NEW_FILE} else echo "HC Cluster USD{HC_CLUSTER_NAME} already exists, avoiding step" fi If you are recovering the nodes and the node pool to reuse AWS instances, restore the node pool by entering this command: USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-* Verification To verify that the nodes are fully restored, use this function: timeout=40 count=0 NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c "worker") || NODE_STATUS=0 while [ USD{NODE_POOL_REPLICAS} != USD{NODE_STATUS} ] do echo "Waiting for Nodes to be Ready in the destination MGMT Cluster: USD{MGMT2_CLUSTER_NAME}" echo "Try: (USD{count}/USD{timeout})" sleep 30 if [[ USDcount -eq timeout ]];then echo "Timeout waiting for Nodes in the destination MGMT Cluster" exit 1 fi count=USD((count+1)) NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c "worker") || NODE_STATUS=0 done Next steps Shut down and delete your cluster. 8.6.4. Deleting a hosted cluster from your source management cluster After you back up your hosted cluster and restore it to your destination management cluster, you shut down and delete the hosted cluster on your source management cluster. Prerequisites You backed up your data and restored it to your destination management cluster. Tip Ensure that the kubeconfig file of the source management cluster is placed as it is set in the KUBECONFIG variable or, if you use the script, in the MGMT_KUBECONFIG variable. Use export KUBECONFIG=<Kubeconfig FilePath> or, if you use the script, use export KUBECONFIG=USD{MGMT_KUBECONFIG} . Procedure Scale the deployment and statefulset objects by entering these commands: Important Do not scale the stateful set if the value of its spec.persistentVolumeClaimRetentionPolicy.whenScaled field is set to Delete , because this could lead to a loss of data. As a workaround, update the value of the spec.persistentVolumeClaimRetentionPolicy.whenScaled field to Retain . Ensure that no controllers exist that reconcile the stateful set and would return the value back to Delete , which could lead to a loss of data. # Just in case USD export KUBECONFIG=USD{MGMT_KUBECONFIG} # Scale down deployments USD oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all USD oc scale statefulset.apps -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all USD sleep 15 Delete the NodePool objects by entering these commands: NODEPOOLS=USD(oc get nodepools -n USD{HC_CLUSTER_NS} -o=jsonpath='{.items[?(@.spec.clusterName=="'USD{HC_CLUSTER_NAME}'")].metadata.name}') if [[ !
-z "USD{NODEPOOLS}" ]];then oc patch -n "USD{HC_CLUSTER_NS}" nodepool USD{NODEPOOLS} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' oc delete np -n USD{HC_CLUSTER_NS} USD{NODEPOOLS} fi Delete the machine and machineset objects by entering these commands: # Machines for m in USD(oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done USD oc delete machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all || true Delete the cluster object by entering these commands: # Cluster USD C_NAME=USD(oc get cluster -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) USD oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{C_NAME} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' USD oc delete cluster.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all Delete the AWS machines (Kubernetes objects) by entering these commands. Do not worry about deleting the real AWS machines. The cloud instances will not be affected. # AWS Machines for m in USD(oc get awsmachine.infrastructure.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done Delete the HostedControlPlane and ControlPlane HC namespace objects by entering these commands: # Delete HCP and ControlPlane HC NS USD oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} hostedcontrolplane.hypershift.openshift.io USD{HC_CLUSTER_NAME} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' USD oc delete hostedcontrolplane.hypershift.openshift.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all USD oc delete ns USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} || true Delete the HostedCluster and HC namespace objects by entering these commands: # Delete HC and HC Namespace USD oc -n USD{HC_CLUSTER_NS} patch hostedclusters USD{HC_CLUSTER_NAME} -p '{"metadata":{"finalizers":null}}' --type merge || true USD oc delete hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} || true USD oc delete ns USD{HC_CLUSTER_NS} || true Verification To verify that everything works, enter these commands: # Validations USD export KUBECONFIG=USD{MGMT2_KUBECONFIG} USD oc get hc -n USD{HC_CLUSTER_NS} USD oc get np -n USD{HC_CLUSTER_NS} USD oc get pod -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} # Inside the HostedCluster USD export KUBECONFIG=USD{HC_KUBECONFIG} USD oc get clusterversion USD oc get nodes steps Delete the OVN pods in the hosted cluster so that you can connect to the new OVN control plane that runs in the new management cluster: Load the KUBECONFIG environment variable with the hosted cluster's kubeconfig path. Enter this command: USD oc delete pod -n openshift-ovn-kubernetes --all 8.7. Disaster recovery for a hosted cluster by using OADP You can use the OpenShift API for Data Protection (OADP) Operator to perform disaster recovery on Amazon Web Services (AWS) and bare metal. 
The disaster recovery process with OpenShift API for Data Protection (OADP) involves the following steps: Preparing your platform, such as Amazon Web Services or bare metal, to use OADP Backing up the data plane workload Backing up the control plane workload Restoring a hosted cluster by using OADP 8.7.1. Prerequisites You must meet the following prerequisites on the management cluster: You installed the OADP Operator . You created a storage class. You have access to the cluster with cluster-admin privileges. You have access to the OADP subscription through a catalog source. You have access to a cloud storage provider that is compatible with OADP, such as S3, Microsoft Azure, Google Cloud Platform, or MinIO. In a disconnected environment, you have access to a self-hosted storage provider, for example Red Hat OpenShift Data Foundation or MinIO , that is compatible with OADP. Your hosted control plane pods are up and running. 8.7.2. Preparing AWS to use OADP To perform disaster recovery for a hosted cluster, you can use OpenShift API for Data Protection (OADP) on Amazon Web Services (AWS) S3 compatible storage. After creating the DataProtectionApplication object, new velero deployment and node-agent pods are created in the openshift-adp namespace. To prepare AWS to use OADP, see "Configuring the OpenShift API for Data Protection with Multicloud Object Gateway". Additional resources Configuring the OpenShift API for Data Protection with Multicloud Object Gateway Next steps Backing up the data plane workload Backing up the control plane workload 8.7.3. Preparing bare metal to use OADP To perform disaster recovery for a hosted cluster, you can use OpenShift API for Data Protection (OADP) on bare metal. After creating the DataProtectionApplication object, new velero deployment and node-agent pods are created in the openshift-adp namespace. To prepare bare metal to use OADP, see "Configuring the OpenShift API for Data Protection with AWS S3 compatible storage". Additional resources Configuring the OpenShift API for Data Protection with AWS S3 compatible storage Next steps Backing up the data plane workload Backing up the control plane workload 8.7.4. Backing up the data plane workload If the data plane workload is not important, you can skip this procedure. To back up the data plane workload by using the OADP Operator, see "Backing up applications". Additional resources Backing up applications Next steps Restoring a hosted cluster by using OADP 8.7.5. Backing up the control plane workload You can back up the control plane workload by creating the Backup custom resource (CR). To monitor and observe the backup process, see "Observing the backup and restore process". Procedure Pause the reconciliation of the HostedCluster resource by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ patch hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \ --type json -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "true"}]' Get the infrastructure ID of your hosted cluster by running the following command: USD oc get hostedcluster -n local-cluster <hosted_cluster_name> -o=jsonpath="{.spec.infraID}" Note the infrastructure ID to use in the next step.
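For example, you can capture the ID in a shell variable so that you can reuse it in the next step. The variable name in the following sketch is an assumption for illustration only:
USD HOSTED_CLUSTER_INFRA_ID=USD(oc get hostedcluster -n local-cluster <hosted_cluster_name> -o=jsonpath="{.spec.infraID}")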
Pause the reconciliation of the cluster.cluster.x-k8s.io resource by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ patch cluster.cluster.x-k8s.io \ -n local-cluster-<hosted_cluster_name> <hosted_cluster_infra_id> \ --type json -p '[{"op": "add", "path": "/spec/paused", "value": true}]' Pause the reconciliation of the NodePool resource by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ patch nodepool -n <hosted_cluster_namespace> <node_pool_name> \ --type json -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "true"}]' Pause the reconciliation of the AgentCluster resource by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ annotate agentcluster -n <hosted_control_plane_namespace> \ cluster.x-k8s.io/paused=true --all Pause the reconciliation of the AgentMachine resource by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ annotate agentmachine -n <hosted_control_plane_namespace> \ cluster.x-k8s.io/paused=true --all Annotate the HostedCluster resource to prevent the deletion of the hosted control plane namespace by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \ hypershift.openshift.io/skip-delete-hosted-controlplane-namespace=true Create a YAML file that defines the Backup CR: Example 8.1. Example backup-control-plane.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_resource_name> 1 namespace: openshift-adp labels: velero.io/storage-location: default spec: hooks: {} includedNamespaces: 2 - <hosted_cluster_namespace> 3 - <hosted_control_plane_namespace> 4 includedResources: - sa - role - rolebinding - pod - pvc - pv - bmh - configmap - infraenv 5 - priorityclasses - pdb - agents - hostedcluster - nodepool - secrets - hostedcontrolplane - cluster - agentcluster - agentmachinetemplate - agentmachine - machinedeployment - machineset - machine excludedResources: [] storageLocation: default ttl: 2h0m0s snapshotMoveData: true 6 datamover: "velero" 7 defaultVolumesToFsBackup: true 8 1 Replace <backup_resource_name> with the name of your Backup resource. 2 Selects the specific namespaces from which to back up objects. You must include your hosted cluster namespace and the hosted control plane namespace. 3 Replace <hosted_cluster_namespace> with the name of the hosted cluster namespace, for example, clusters . 4 Replace <hosted_control_plane_namespace> with the name of the hosted control plane namespace, for example, clusters-hosted . 5 You must create the infraenv resource in a separate namespace. Do not delete the infraenv resource during the backup process. 6 7 Enables the CSI volume snapshots and uploads the control plane workload automatically to the cloud storage. 8 Sets fs-backup as the default backup method for persistent volumes (PVs). This setting is useful when you use a combination of Container Storage Interface (CSI) volume snapshots and the fs-backup method. Note If you want to use CSI volume snapshots, you must add the backup.velero.io/backup-volumes-excludes=<pv_name> annotation to your PVs.
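Before you create the backup, you can optionally confirm that reconciliation is paused as expected. A minimal sketch that reads back the fields that you set in the previous steps:
USD oc --kubeconfig <management_cluster_kubeconfig_file> get hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> -o jsonpath='{.spec.pausedUntil}'
USD oc --kubeconfig <management_cluster_kubeconfig_file> get nodepool -n <hosted_cluster_namespace> <node_pool_name> -o jsonpath='{.spec.pausedUntil}'
Both commands are expected to return true.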
Apply the Backup CR by running the following command: USD oc apply -f backup-control-plane.yaml Verification Verify if the value of the status.phase is Completed by running the following command: USD oc get backups.velero.io <backup_resource_name> -n openshift-adp \ -o jsonpath='{.status.phase}' Next steps Restoring a hosted cluster by using OADP 8.7.6. Restoring a hosted cluster by using OADP You can restore the hosted cluster by creating the Restore custom resource (CR). If you are using an in-place update, InfraEnv does not need spare nodes. You need to re-provision the worker nodes from the new management cluster. If you are using a replace update, you need some spare nodes for InfraEnv to deploy the worker nodes. Important After you back up your hosted cluster, you must destroy it to initiate the restoring process. To initiate node provisioning, you must back up workloads in the data plane before deleting the hosted cluster. Prerequisites You completed the steps in Removing a cluster by using the console to delete your hosted cluster. You completed the steps in Removing remaining resources after removing a cluster . To monitor and observe the backup process, see "Observing the backup and restore process". Procedure Verify that no pods and persistent volume claims (PVCs) are present in the hosted control plane namespace by running the following command: USD oc get pod,pvc -n <hosted_control_plane_namespace> Expected output No resources found Create a YAML file that defines the Restore CR: Example restore-hosted-cluster.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_resource_name> 1 namespace: openshift-adp spec: backupName: <backup_resource_name> 2 restorePVs: true 3 existingResourcePolicy: update 4 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io 1 Replace <restore_resource_name> with the name of your Restore resource. 2 Replace <backup_resource_name> with the name of your Backup resource. 3 Initiates the recovery of persistent volumes (PVs) and their pods. 4 Ensures that the existing objects are overwritten with the backed up content. Important You must create the infraenv resource in a separate namespace. Do not delete the infraenv resource during the restore process. The infraenv resource is mandatory for the new nodes to be reprovisioned.
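Because the restore relies on the infraenv resource to reprovision nodes, you can also check that the resource still exists before you apply the Restore CR. A minimal sketch, assuming that the InfraEnv custom resource definition is installed on the management cluster:
USD oc get infraenv -A
If the resource is missing, re-create it in its separate namespace before you continue.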
Apply the Restore CR by running the following command: USD oc apply -f restore-hosted-cluster.yaml Verify if the value of the status.phase is Completed by running the following command: USD oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> \ -o jsonpath='{.status.phase}' After the restore process is complete, start the reconciliation of the HostedCluster and NodePool resources that you paused during backing up of the control plane workload: Start the reconciliation of the HostedCluster resource by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ patch hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \ --type json \ -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "false"}]' Start the reconciliation of the NodePool resource by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ patch nodepool -n <hosted_cluster_namespace> <node_pool_name> \ --type json \ -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "false"}]' Start the reconciliation of the Agent provider resources that you paused during backing up of the control plane workload: Start the reconciliation of the AgentCluster resource by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ annotate agentcluster -n <hosted_control_plane_namespace> \ cluster.x-k8s.io/paused- --overwrite=true --all Start the reconciliation of the AgentMachine resource by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ annotate agentmachine -n <hosted_control_plane_namespace> \ cluster.x-k8s.io/paused- --overwrite=true --all Remove the hypershift.openshift.io/skip-delete-hosted-controlplane-namespace- annotation in the HostedCluster resource to avoid manually deleting the hosted control plane namespace by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \ hypershift.openshift.io/skip-delete-hosted-controlplane-namespace- \ --overwrite=true --all Scale the NodePool resource to the desired number of replicas by running the following command: USD oc --kubeconfig <management_cluster_kubeconfig_file> \ scale nodepool -n <hosted_cluster_namespace> <node_pool_name> \ --replicas <replica_count> 1 1 Replace <replica_count> by an integer value, for example, 3 . 8.7.7. Observing the backup and restore process When using OpenShift API for Data Protection (OADP) to backup and restore a hosted cluster, you can monitor and observe the process. 
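The commands in the following procedure run against the management cluster and assume that the OADP Operator is installed in the openshift-adp namespace, which is the default. If your velero deployment runs in a different namespace, adjust the namespace in each command; for example, you can confirm the deployment location with:
USD oc get deployment -n openshift-adp velero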
Procedure Observe the backup process by running the following command: USD watch "oc get backups.velero.io -n openshift-adp <backup_resource_name> -o jsonpath='{.status}'" Observe the restore process by running the following command: USD watch "oc get restores.velero.io -n openshift-adp <restore_resource_name> -o jsonpath='{.status}'" Observe the Velero logs by running the following command: USD oc logs -n openshift-adp -ldeploy=velero -f Observe the progress of all of the OADP objects by running the following command: USD watch "echo BackupRepositories:;echo;oc get backuprepositories.velero.io -A;echo; echo BackupStorageLocations: ;echo; oc get backupstoragelocations.velero.io -A;echo;echo DataUploads: ;echo;oc get datauploads.velero.io -A;echo;echo DataDownloads: ;echo;oc get datadownloads.velero.io -n openshift-adp; echo;echo VolumeSnapshotLocations: ;echo;oc get volumesnapshotlocations.velero.io -A;echo;echo Backups:;echo;oc get backup -A; echo;echo Restores:;echo;oc get restore -A" 8.7.8. Using the velero CLI to describe the Backup and Restore resources When using OpenShift API for Data Protection, you can get more details of the Backup and Restore resources by using the velero command-line interface (CLI). Procedure Create an alias to use the velero CLI from a container by running the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Get details of your Restore custom resource (CR) by running the following command: USD velero restore describe <restore_resource_name> --details 1 1 Replace <restore_resource_name> with the name of your Restore resource. Get details of your Backup CR by running the following command: USD velero backup describe <backup_resource_name> --details 1 1 Replace <backup_resource_name> with the name of your Backup resource. | [
"oc rsh -n openshift-etcd -c etcd <etcd_pod_name>",
"sh-4.4# etcdctl endpoint status -w table",
"+------------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +------------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://192.168.1xxx.20:2379 | 8fxxxxxxxxxx | 3.5.12 | 123 MB | false | false | 10 | 180156 | 180156 | | | https://192.168.1xxx.21:2379 | a5xxxxxxxxxx | 3.5.12 | 122 MB | false | false | 10 | 180156 | 180156 | | | https://192.168.1xxx.22:2379 | 7cxxxxxxxxxx | 3.5.12 | 124 MB | true | false | 10 | 180156 | 180156 | | +-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"oc get pods -l app=etcd -n openshift-etcd",
"NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 64m etcd-1 2/2 Running 0 45m etcd-2 1/2 CrashLoopBackOff 1 (5s ago) 64m",
"oc delete pods etcd-2 -n openshift-etcd",
"oc get pods -l app=etcd -n openshift-etcd",
"NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 67m etcd-1 2/2 Running 0 48m etcd-2 2/2 Running 0 2m2s",
"CLUSTER_NAME=my-cluster",
"HOSTED_CLUSTER_NAMESPACE=clusters",
"CONTROL_PLANE_NAMESPACE=\"USD{HOSTED_CLUSTER_NAMESPACE}-USD{CLUSTER_NAME}\"",
"oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd",
"ETCD_POD=etcd-0",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=https://localhost:2379 snapshot save /var/lib/snapshot.db",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/snapshot.db",
"oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/snapshot.db /tmp/etcd.snapshot.db",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd",
"oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/data/member/snap/db /tmp/etcd.snapshot.db",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=0",
"oc delete -n USD{CONTROL_PLANE_NAMESPACE} pvc/data-etcd-1 pvc/data-etcd-2",
"ETCD_IMAGE=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd -o jsonpath='{ .spec.template.spec.containers[0].image }')",
"cat << EOF | oc apply -n USD{CONTROL_PLANE_NAMESPACE} -f - apiVersion: apps/v1 kind: Deployment metadata: name: etcd-data spec: replicas: 1 selector: matchLabels: app: etcd-data template: metadata: labels: app: etcd-data spec: containers: - name: access image: USDETCD_IMAGE volumeMounts: - name: data mountPath: /var/lib command: - /usr/bin/bash args: - -c - |- while true; do sleep 1000 done volumes: - name: data persistentVolumeClaim: claimName: data-etcd-0 EOF",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data",
"DATA_POD=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} pods --no-headers -l app=etcd-data -o name | cut -d/ -f2)",
"oc cp /tmp/etcd.snapshot.db USD{CONTROL_PLANE_NAMESPACE}/USD{DATA_POD}:/var/lib/restored.snap.db",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm -rf /var/lib/data",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- mkdir -p /var/lib/data",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- etcdutl snapshot restore /var/lib/restored.snap.db --data-dir=/var/lib/data --skip-hash-check --name etcd-0 --initial-cluster-token=etcd-cluster --initial-cluster etcd-0=https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-1=https://etcd-1.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-2=https://etcd-2.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380 --initial-advertise-peer-urls https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm /var/lib/restored.snap.db",
"oc delete -n USD{CONTROL_PLANE_NAMESPACE} deployment/etcd-data",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=3",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd -w",
"oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"null\"}}' --type=merge",
"oc patch -n clusters hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge",
"oc scale deployment -n <hosted_cluster_namespace> --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver",
"oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db",
"oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db",
"BUCKET_NAME=somebucket CLUSTER_NAME=cluster_name FILEPATH=\"/USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db\" CONTENT_TYPE=\"application/x-compressed-tar\" DATE_VALUE=`date -R` SIGNATURE_STRING=\"PUT\\n\\nUSD{CONTENT_TYPE}\\nUSD{DATE_VALUE}\\nUSD{FILEPATH}\" ACCESS_KEY=accesskey SECRET_KEY=secret SIGNATURE_HASH=`echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac USD{SECRET_KEY} -binary | base64` HOSTED_CLUSTER_NAMESPACE=hosted_cluster_namespace exec -it etcd-0 -n USD{HOSTED_CLUSTER_NAMESPACE} -- curl -X PUT -T \"/var/lib/data/snapshot.db\" -H \"Host: USD{BUCKET_NAME}.s3.amazonaws.com\" -H \"Date: USD{DATE_VALUE}\" -H \"Content-Type: USD{CONTENT_TYPE}\" -H \"Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}\" https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{CLUSTER_NAME}-snapshot.db",
"oc get hostedcluster <hosted_cluster_name> -o=jsonpath='{.spec.secretEncryption.aescbc}' {\"activeKey\":{\"name\":\"<hosted_cluster_name>-etcd-encryption-key\"}}",
"oc get secret <hosted_cluster_name>-etcd-encryption-key -o=jsonpath='{.data.key}'",
"oc scale deployment -n <control_plane_namespace> --replicas=3 kube-apiserver openshift-apiserver openshift-oauth-apiserver",
"oc patch -n <hosted_cluster_namespace> -p '[\\{\"op\": \"remove\", \"path\": \"/spec/pausedUntil\"}]' --type=json",
"ETCD_SNAPSHOT=USD{ETCD_SNAPSHOT:-\"s3://USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db\"} ETCD_SNAPSHOT_URL=USD(aws s3 presign USD{ETCD_SNAPSHOT})",
"spec: etcd: managed: storage: persistentVolume: size: 4Gi type: PersistentVolume restoreSnapshotURL: - \"USD{ETCD_SNAPSHOT_URL}\" managementType: Managed",
"apiVersion: velero.io/v1 kind: Backup metadata: name: hc-clusters-hosted-backup namespace: openshift-adp labels: velero.io/storage-location: default spec: includedNamespaces: 1 - clusters - clusters-hosted includedResources: - sa - role - rolebinding - deployment - statefulset - pv - pvc - bmh - configmap - infraenv - priorityclasses - pdb - hostedcluster - nodepool - secrets - hostedcontrolplane - cluster - datavolume - service - route excludedResources: [ ] labelSelector: 2 matchExpressions: - key: 'hypershift.openshift.io/is-kubevirt-rhcos' operator: 'DoesNotExist' storageLocation: default preserveNodePorts: true ttl: 4h0m0s snapshotMoveData: true 3 datamover: \"velero\" 4 defaultVolumesToFsBackup: false 5",
"oc apply -f <backup_file_name>.yaml",
"watch \"oc get backups.velero.io -n openshift-adp <backup_file_name> -o jsonpath='{.status}' | jq\"",
"oc logs -n openshift-adp -ldeploy=velero -f",
"apiVersion: velero.io/v1 kind: Restore metadata: name: hc-clusters-hosted-restore namespace: openshift-adp spec: backupName: hc-clusters-hosted-backup restorePVs: true 1 existingResourcePolicy: update 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io",
"oc apply -f <restore_resource_file_name>.yaml",
"watch \"oc get restores.velero.io -n openshift-adp <backup_file_name> -o jsonpath='{.status}' | jq\"",
"oc logs -n openshift-adp -ldeploy=velero -f",
"--external-dns-provider=aws --external-dns-credentials=<path_to_aws_credentials_file> --external-dns-domain-filter=<basedomain>",
"oc create configmap mgmt-parent-cluster -n default --from-literal=from=USD{MGMT_CLUSTER_NAME}",
"PAUSED_UNTIL=\"true\" oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator",
"PAUSED_UNTIL=\"true\" oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc patch -n USD{HC_CLUSTER_NS} nodepools/USD{NODEPOOLS} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator",
"ETCD Backup ETCD_PODS=\"etcd-0\" if [ \"USD{CONTROL_PLANE_AVAILABILITY_POLICY}\" = \"HighlyAvailable\" ]; then ETCD_PODS=\"etcd-0 etcd-1 etcd-2\" fi for POD in USD{ETCD_PODS}; do # Create an etcd snapshot oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db FILEPATH=\"/USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db\" CONTENT_TYPE=\"application/x-compressed-tar\" DATE_VALUE=`date -R` SIGNATURE_STRING=\"PUT\\n\\nUSD{CONTENT_TYPE}\\nUSD{DATE_VALUE}\\nUSD{FILEPATH}\" set +x ACCESS_KEY=USD(grep aws_access_key_id USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed \"s/ //g\") SECRET_KEY=USD(grep aws_secret_access_key USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed \"s/ //g\") SIGNATURE_HASH=USD(echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac \"USD{SECRET_KEY}\" -binary | base64) set -x # FIXME: this is pushing to the OIDC bucket oc exec -it etcd-0 -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- curl -X PUT -T \"/var/lib/data/snapshot.db\" -H \"Host: USD{BUCKET_NAME}.s3.amazonaws.com\" -H \"Date: USD{DATE_VALUE}\" -H \"Content-Type: USD{CONTENT_TYPE}\" -H \"Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}\" https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db done",
"mkdir -p USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS} USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} chmod 700 USD{BACKUP_DIR}/namespaces/ HostedCluster echo \"Backing Up HostedCluster Objects:\" oc get hc USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml echo \"--> HostedCluster\" sed -i '' -e '/^status:USD/,USDd' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml NodePool oc get np USD{NODEPOOLS} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml echo \"--> NodePool\" sed -i '' -e '/^status:USD/,USD d' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml Secrets in the HC Namespace echo \"--> HostedCluster Secrets:\" for s in USD(oc get secret -n USD{HC_CLUSTER_NS} | grep \"^USD{HC_CLUSTER_NAME}\" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-USD{s}.yaml done Secrets in the HC Control Plane Namespace echo \"--> HostedCluster ControlPlane Secrets:\" for s in USD(oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} | egrep -v \"docker|service-account-token|oauth-openshift|NAME|token-USD{HC_CLUSTER_NAME}\" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-USD{s}.yaml done Hosted Control Plane echo \"--> HostedControlPlane:\" oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-USD{HC_CLUSTER_NAME}.yaml Cluster echo \"--> Cluster:\" CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\\*} | grep USD{HC_CLUSTER_NAME}) oc get cluster USD{CL_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-USD{HC_CLUSTER_NAME}.yaml AWS Cluster echo \"--> AWS Cluster:\" oc get awscluster USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-USD{HC_CLUSTER_NAME}.yaml AWS MachineTemplate echo \"--> AWS Machine Template:\" oc get awsmachinetemplate USD{NODEPOOLS} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-USD{HC_CLUSTER_NAME}.yaml AWS Machines echo \"--> AWS Machine:\" CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\\*} | grep USD{HC_CLUSTER_NAME}) for s in USD(oc get awsmachines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --no-headers | grep USD{CL_NAME} | cut -f1 -d\\ ); do oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} awsmachines USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-USD{s}.yaml done MachineDeployments echo \"--> HostedCluster MachineDeployments:\" for s in USD(oc get machinedeployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do mdp_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-USD{mdp_name}.yaml done MachineSets echo \"--> HostedCluster MachineSets:\" for s in USD(oc get machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o 
name); do ms_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-USD{ms_name}.yaml done Machines echo \"--> HostedCluster Machine:\" for s in USD(oc get machine -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do m_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-USD{m_name}.yaml done",
"oc delete routes -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all",
"function clean_routes() { if [[ -z \"USD{1}\" ]];then echo \"Give me the NS where to clean the routes\" exit 1 fi # Constants if [[ -z \"USD{2}\" ]];then echo \"Give me the Route53 zone ID\" exit 1 fi ZONE_ID=USD{2} ROUTES=10 timeout=40 count=0 # This allows us to remove the ownership in the AWS for the API route oc delete route -n USD{1} --all while [ USD{ROUTES} -gt 2 ] do echo \"Waiting for ExternalDNS Operator to clean the DNS Records in AWS Route53 where the zone id is: USD{ZONE_ID}...\" echo \"Try: (USD{count}/USD{timeout})\" sleep 10 if [[ USDcount -eq timeout ]];then echo \"Timeout waiting for cleaning the Route53 DNS records\" exit 1 fi count=USD((count+1)) ROUTES=USD(aws route53 list-resource-record-sets --hosted-zone-id USD{ZONE_ID} --max-items 10000 --output json | grep -c USD{EXTERNAL_DNS_DOMAIN}) done } SAMPLE: clean_routes \"<HC ControlPlane Namespace>\" \"<AWS_ZONE_ID>\" clean_routes \"USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}\" \"USD{AWS_ZONE_ID}\"",
"Just in case export KUBECONFIG=USD{MGMT2_KUBECONFIG} BACKUP_DIR=USD{HC_CLUSTER_DIR}/backup Namespace deletion in the destination Management cluster oc delete ns USD{HC_CLUSTER_NS} || true oc delete ns USD{HC_CLUSTER_NS}-{HC_CLUSTER_NAME} || true",
"Namespace creation oc new-project USD{HC_CLUSTER_NS} oc new-project USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}",
"oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-*",
"Secrets oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-* Cluster oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-*",
"AWS oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-* Machines oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-*",
"ETCD_PODS=\"etcd-0\" if [ \"USD{CONTROL_PLANE_AVAILABILITY_POLICY}\" = \"HighlyAvailable\" ]; then ETCD_PODS=\"etcd-0 etcd-1 etcd-2\" fi HC_RESTORE_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-restore.yaml HC_BACKUP_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml HC_NEW_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-new.yaml cat USD{HC_BACKUP_FILE} > USD{HC_NEW_FILE} cat > USD{HC_RESTORE_FILE} <<EOF restoreSnapshotURL: EOF for POD in USD{ETCD_PODS}; do # Create a pre-signed URL for the etcd snapshot ETCD_SNAPSHOT=\"s3://USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db\" ETCD_SNAPSHOT_URL=USD(AWS_DEFAULT_REGION=USD{MGMT2_REGION} aws s3 presign USD{ETCD_SNAPSHOT}) # FIXME no CLI support for restoreSnapshotURL yet cat >> USD{HC_RESTORE_FILE} <<EOF - \"USD{ETCD_SNAPSHOT_URL}\" EOF done cat USD{HC_RESTORE_FILE} if ! grep USD{HC_CLUSTER_NAME}-snapshot.db USD{HC_NEW_FILE}; then sed -i '' -e \"/type: PersistentVolume/r USD{HC_RESTORE_FILE}\" USD{HC_NEW_FILE} sed -i '' -e '/pausedUntil:/d' USD{HC_NEW_FILE} fi HC=USD(oc get hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} -o name || true) if [[ USD{HC} == \"\" ]];then echo \"Deploying HC Cluster: USD{HC_CLUSTER_NAME} in USD{HC_CLUSTER_NS} namespace\" oc apply -f USD{HC_NEW_FILE} else echo \"HC Cluster USD{HC_CLUSTER_NAME} already exists, avoiding step\" fi",
"oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-*",
"timeout=40 count=0 NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c \"worker\") || NODE_STATUS=0 while [ USD{NODE_POOL_REPLICAS} != USD{NODE_STATUS} ] do echo \"Waiting for Nodes to be Ready in the destination MGMT Cluster: USD{MGMT2_CLUSTER_NAME}\" echo \"Try: (USD{count}/USD{timeout})\" sleep 30 if [[ USDcount -eq timeout ]];then echo \"Timeout waiting for Nodes in the destination MGMT Cluster\" exit 1 fi count=USD((count+1)) NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c \"worker\") || NODE_STATUS=0 done",
"Just in case export KUBECONFIG=USD{MGMT_KUBECONFIG} Scale down deployments oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all oc scale statefulset.apps -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all sleep 15",
"NODEPOOLS=USD(oc get nodepools -n USD{HC_CLUSTER_NS} -o=jsonpath='{.items[?(@.spec.clusterName==\"'USD{HC_CLUSTER_NAME}'\")].metadata.name}') if [[ ! -z \"USD{NODEPOOLS}\" ]];then oc patch -n \"USD{HC_CLUSTER_NS}\" nodepool USD{NODEPOOLS} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete np -n USD{HC_CLUSTER_NS} USD{NODEPOOLS} fi",
"Machines for m in USD(oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done oc delete machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all || true",
"Cluster C_NAME=USD(oc get cluster -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{C_NAME} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete cluster.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all",
"AWS Machines for m in USD(oc get awsmachine.infrastructure.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done",
"Delete HCP and ControlPlane HC NS oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} hostedcontrolplane.hypershift.openshift.io USD{HC_CLUSTER_NAME} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete hostedcontrolplane.hypershift.openshift.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all oc delete ns USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} || true",
"Delete HC and HC Namespace oc -n USD{HC_CLUSTER_NS} patch hostedclusters USD{HC_CLUSTER_NAME} -p '{\"metadata\":{\"finalizers\":null}}' --type merge || true oc delete hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} || true oc delete ns USD{HC_CLUSTER_NS} || true",
"Validations export KUBECONFIG=USD{MGMT2_KUBECONFIG} oc get hc -n USD{HC_CLUSTER_NS} oc get np -n USD{HC_CLUSTER_NS} oc get pod -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} Inside the HostedCluster export KUBECONFIG=USD{HC_KUBECONFIG} oc get clusterversion oc get nodes",
"oc delete pod -n openshift-ovn-kubernetes --all",
"oc --kubeconfig <management_cluster_kubeconfig_file> patch hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> --type json -p '[{\"op\": \"add\", \"path\": \"/spec/pausedUntil\", \"value\": \"true\"}]'",
"oc get hostedcluster -n local-cluster <hosted_cluster_name> -o=jsonpath=\"{.spec.infraID}\"",
"oc --kubeconfig <management_cluster_kubeconfig_file> patch cluster.cluster.x-k8s.io -n local-cluster-<hosted_cluster_name> <hosted_cluster_infra_id> --type json -p '[{\"op\": \"add\", \"path\": \"/spec/paused\", \"value\": true}]'",
"oc --kubeconfig <management_cluster_kubeconfig_file> patch nodepool -n <hosted_cluster_namespace> <node_pool_name> --type json -p '[{\"op\": \"add\", \"path\": \"/spec/pausedUntil\", \"value\": \"true\"}]'",
"oc --kubeconfig <management_cluster_kubeconfig_file> annotate agentcluster -n <hosted_control_plane_namespace> cluster.x-k8s.io/paused=true --all'",
"oc --kubeconfig <management_cluster_kubeconfig_file> annotate agentmachine -n <hosted_control_plane_namespace> cluster.x-k8s.io/paused=true --all'",
"oc --kubeconfig <management_cluster_kubeconfig_file> annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> hypershift.openshift.io/skip-delete-hosted-controlplane-namespace=true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_resource_name> 1 namespace: openshift-adp labels: velero.io/storage-location: default spec: hooks: {} includedNamespaces: 2 - <hosted_cluster_namespace> 3 - <hosted_control_plane_namespace> 4 includedResources: - sa - role - rolebinding - pod - pvc - pv - bmh - configmap - infraenv 5 - priorityclasses - pdb - agents - hostedcluster - nodepool - secrets - hostedcontrolplane - cluster - agentcluster - agentmachinetemplate - agentmachine - machinedeployment - machineset - machine excludedResources: [] storageLocation: default ttl: 2h0m0s snapshotMoveData: true 6 datamover: \"velero\" 7 defaultVolumesToFsBackup: true 8",
"oc apply -f backup-control-plane.yaml",
"oc get backups.velero.io <backup_resource_name> -n openshift-adp -o jsonpath='{.status.phase}'",
"oc get pod pvc -n <hosted_control_plane_namespace>",
"No resources found",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_resource_name> 1 namespace: openshift-adp spec: backupName: <backup_resource_name> 2 restorePVs: true 3 existingResourcePolicy: update 4 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io",
"oc apply -f restore-hosted-cluster.yaml",
"oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o jsonpath='{.status.phase}'",
"oc --kubeconfig <management_cluster_kubeconfig_file> patch hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> --type json -p '[{\"op\": \"add\", \"path\": \"/spec/pausedUntil\", \"value\": \"false\"}]'",
"oc --kubeconfig <management_cluster_kubeconfig_file> patch nodepool -n <hosted_cluster_namespace> <node_pool_name> --type json -p '[{\"op\": \"add\", \"path\": \"/spec/pausedUntil\", \"value\": \"false\"}]'",
"oc --kubeconfig <management_cluster_kubeconfig_file> annotate agentcluster -n <hosted_control_plane_namespace> cluster.x-k8s.io/paused- --overwrite=true --all",
"oc --kubeconfig <management_cluster_kubeconfig_file> annotate agentmachine -n <hosted_control_plane_namespace> cluster.x-k8s.io/paused- --overwrite=true --all",
"oc --kubeconfig <management_cluster_kubeconfig_file> annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> hypershift.openshift.io/skip-delete-hosted-controlplane-namespace- --overwrite=true --all",
"oc --kubeconfig <management_cluster_kubeconfig_file> scale nodepool -n <hosted_cluster_namespace> <node_pool_name> --replicas <replica_count> 1",
"watch \"oc get backups.velero.io -n openshift-adp <backup_resource_name> -o jsonpath='{.status}'\"",
"watch \"oc get restores.velero.io -n openshift-adp <backup_resource_name> -o jsonpath='{.status}'\"",
"oc logs -n openshift-adp -ldeploy=velero -f",
"watch \"echo BackupRepositories:;echo;oc get backuprepositories.velero.io -A;echo; echo BackupStorageLocations: ;echo; oc get backupstoragelocations.velero.io -A;echo;echo DataUploads: ;echo;oc get datauploads.velero.io -A;echo;echo DataDownloads: ;echo;oc get datadownloads.velero.io -n openshift-adp; echo;echo VolumeSnapshotLocations: ;echo;oc get volumesnapshotlocations.velero.io -A;echo;echo Backups:;echo;oc get backup -A; echo;echo Restores:;echo;oc get restore -A\"",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero restore describe <restore_resource_name> --details 1",
"velero restore describe <backup_resource_name> --details 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/high-availability-for-hosted-control-planes |
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.412_release_notes/openjdk8-temurin-support-policy |
Chapter 79. Openshift Builds | Chapter 79. Openshift Builds Since Camel 2.17 Only producer is supported The Openshift Builds component is one of the Kubernetes Components which provides a producer to execute Openshift builds operations. 79.1. Dependencies When using openshift-builds with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 79.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 79.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 79.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 79.3. Component Options The Openshift Builds component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 79.4. 
Endpoint Options The Openshift Builds endpoint is configured using URI syntax: with the following path and query parameters: 79.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 79.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 79.5. Message Headers The Openshift Builds component supports 4 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesBuildsLabels (producer) Constant: KUBERNETES_BUILDS_LABELS The Openshift build labels. Map CamelKubernetesBuildName (producer) Constant: KUBERNETES_BUILD_NAME The Openshift build name. String 79.6. Supported producer operation listBuilds listBuildsByLabels getBuild 79.7. Openshift Builds Producer Examples listBuilds: this operation list the Builds on an Openshift cluster. from("direct:list"). toF("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuilds"). to("mock:result"); This operation returns a List of Builds from your Openshift cluster. listBuildsByLabels: this operation list the builds by labels on an Openshift cluster. 
from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILDS_LABELS, labels); } }); toF("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuildsByLabels"). to("mock:result"); This operation returns a List of Builds from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 79.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. 
Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>",
"openshift-builds:masterUrl",
"from(\"direct:list\"). toF(\"openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuilds\"). to(\"mock:result\");",
"from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILDS_LABELS, labels); } }); toF(\"openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuildsByLabels\"). to(\"mock:result\");"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-openshift-builds-component-starter |
Chapter 4. OpenShift Virtualization release notes | Chapter 4. OpenShift Virtualization release notes 4.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 4.2. About Red Hat OpenShift Virtualization Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects. OpenShift Virtualization is represented by the icon. You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShiftSDN default Container Network Interface (CNI) network provider. Learn more about what you can do with OpenShift Virtualization . Learn more about OpenShift Virtualization architecture and deployments . Prepare your cluster for OpenShift Virtualization. 4.2.1. OpenShift Virtualization supported cluster version OpenShift Virtualization 4.11 is supported for use on OpenShift Container Platform 4.11 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform. 4.2.2. Supported guest operating systems To view the supported guest operating systems for OpenShift Virtualization, refer to Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization and OpenShift Virtualization . 4.3. New and changed features You can now deploy OpenShift Virtualization on a three-node cluster with zero compute nodes. Virtual machines run as unprivileged workloads in session mode by default. This feature improves cluster security by mitigating escalation-of-privilege attacks. Red Hat Enterprise Linux (RHEL) 9 is now supported as a guest operating system. The link for installing the Migration Toolkit for Virtualization (MTV) Operator in the OpenShift Container Platform web console has been moved. It is now located in the Related operators section of the Getting started resources card on the Virtualization Overview page. You can configure the verbosity level of the virtLauncher , virtHandler , virtController , virtAPI , and virtOperator pod logs to debug specific components by editing the HyperConverged custom resource (CR). 4.3.1. Quick starts Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Virtualization console and then select Quick Starts . You can filter the available tours by entering the virtualization keyword in the Filter field. 4.3.2. Storage New metrics are available that provide information about virtual machine snapshots . You can reduce the number of logs on disconnected environments or reduce resource usage by disabling the automatic imports and updates for a boot source . 4.3.3. Web console You can set the boot mode of templates and virtual machines to BIOS , UEFI , or UEFI (secure) by using the web console. You can now enable and disable the descheduler from the web console on the Scheduling tab of the VirtualMachine details page. You can access virtual machines by navigating to Virtualization VirtualMachines in the side menu. 
Each virtual machine now has an updated Overview tab that provides information about the virtual machine configuration, alerts, snapshots, network interfaces, disks, usage data, and hardware devices. The Create a Virtual Machine wizard in the web console is now replaced by the Catalog page , which lists available templates that you can use to create a virtual machine. You can use a template with an available boot source to quickly create a virtual machine or you can customize a template to create a virtual machine. If your Windows virtual machine has a vGPU attached, you can now switch between the default display and the vGPU display by using the web console. You can access virtual machine templates by navigating to Virtualization Templates in the side menu. The updated VirtualMachine Templates page now provides useful information about each template, including workload profile, boot source, and CPU and memory configuration. The Create Template wizard has been removed from the VirtualMachine Templates page. You create a virtual machine template by editing a YAML file example. 4.4. Deprecated and removed features 4.4.1. Deprecated features Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments. In a future release, support for the legacy HPP custom resource, and the associated storage class, will be deprecated. Beginning in OpenShift Virtualization 4.11, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. The Operator continues to support the existing (legacy) format of the HPP custom resource and the associated storage class. If you use the HPP Operator, plan to create a storage class for the CSI driver as part of your migration strategy. 4.4.2. Removed features Removed features are not supported in the current release. OpenShift Virtualization 4.11 removes support for nmstate , including the following objects: NodeNetworkState NodeNetworkConfigurationPolicy NodeNetworkConfigurationEnactment To preserve and support your existing nmstate configuration, install the Kubernetes NMState Operator before updating to OpenShift Virtualization 4.11. You can install it from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI ( oc ). The Node Maintenance Operator (NMO) is no longer shipped with OpenShift Virtualization. You can install the NMO from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI ( oc ). You must perform one of the following tasks before updating to OpenShift Virtualization 4.11 from OpenShift Virtualization 4.10.2 and later releases: Move all nodes out of maintenance mode. Install the standalone NMO and replace the nodemaintenances.nodemaintenance.kubevirt.io custom resource (CR) with a nodemaintenances.nodemaintenance.medik8s.io CR. You can no longer mark virtual machine templates as favorites. 4.5. Technology Preview features Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope You can now use Microsoft Windows 11 as a guest operating system. However, OpenShift Virtualization 4.11 does not support USB disks, which are required for a critical function of BitLocker recovery. 
To protect recovery keys, use other methods described in the BitLocker recovery guide . You can now deploy OpenShift Virtualization on AWS bare metal nodes . OpenShift Virtualization has critical alerts that inform you when a problem occurs that requires immediate attention. Now, each alert has a corresponding description of the problem, a reason for why the alert is occurring, a troubleshooting process to diagnose the source of the problem, and steps for resolving the alert. Administrators can now declaratively create and expose mediated devices such as virtual graphics processing units (vGPUs) by editing the HyperConverged CR. Virtual machine owners can then assign these devices to VMs. You can transfer the static IP configuration of the NIC attached to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster. You can now install OpenShift Virtualization on IBM Cloud bare-metal servers. Bare-metal servers offered by other cloud providers are not supported. You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles . OpenShift Virtualization now includes a diagnostic framework to run predefined checkups that can be used for cluster maintenance and troubleshooting. You can run a predefined checkup to check network connectivity and latency for virtual machines on a secondary network. You can create live migration policies with specific parameters, such as bandwidth usage, maximum number of parallel migrations, and timeout, and apply the policies to groups of virtual machines by using virtual machine and namespace labels. 4.6. Bug fixes Previously, on a large cluster, the OpenShift Virtualization MAC pool manager would take too much time to boot and OpenShift Virtualization might not become ready. With this update, the pool initialization and startup latency is reduced. As a result, VMs can now be successfully defined. ( BZ#2035344 ) If a Windows VM crashes or hangs during shutdown, you can now manually issue a force shutdown request to stop the VM. ( BZ#2040766 ) The YAML examples in the VM wizard have now been updated to contain the latest upstream changes. ( BZ#2055492 ) The Add Network Interface button on the VM Network Interfaces tab is no longer disabled for non-privileged users. ( BZ#2056420 ) A non-privileged user can now successfully add disks to a VM without getting a RBAC rule error. ( BZ#2056421 ) The web console now successfully displays virtual machine templates that are deployed to a custom namespace. ( BZ#2054650 ) Previously, updating a Single Node OpenShift (SNO) cluster failed if the spec.evictionStrategy field was set to LiveMigrate for a VMI. For live migration to succeed, the cluster must have more than one compute node. With this update, the spec.evictionStrategy field is removed from the virtual machine template in a SNO environment. As a result, cluster update is now successful. ( BZ#2073880 ) 4.7. Known issues You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. ( BZ#2193267 ) In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV Reenlightenment enabled cannot be scheduled on nodes that do not support timestamp-counter scaling (TSC) or have the appropriate TSC frequency. ( BZ#2151169 ) When you use two pods with different SELinux contexts, VMs with the ocs-storagecluster-cephfs storage class fail to migrate and the VM status changes to Paused . 
This is because both pods try to access the shared ReadWriteMany CephFS volume at the same time. ( BZ#2092271 ) As a workaround, use the ocs-storagecluster-ceph-rbd storage class to live migrate VMs on a cluster that uses Red Hat Ceph Storage. Restoring a VM snapshot fails if you update OpenShift Container Platform to version 4.11 without also updating OpenShift Virtualization. This is due to a mismatch between the API versions used for snapshot objects. ( BZ#2159442 ) As a workaround, update OpenShift Virtualization to the same minor version as OpenShift Container Platform. To ensure that the versions are kept in sync, use the recommended Automatic approval strategy. Uninstalling OpenShift Virtualization does not remove the node labels created by OpenShift Virtualization. You must remove the labels manually. ( CNV-22036 ) The OVN-Kubernetes cluster network provider crashes from peak RAM and CPU usage if you create a large number of NodePort services. This can happen if you use NodePort services to expose SSH access to a large number of virtual machines (VMs). ( OCPBUGS-1940 ) As a workaround, use the OpenShift SDN cluster network provider if you want to expose SSH access to a large number of VMs via NodePort services. Updating to OpenShift Virtualization 4.11 from version 4.10 is blocked until you install the standalone Kubernetes NMState Operator. This occurs even if your cluster configuration does not use any nmstate resources. ( BZ#2126537 ) As a workaround: Verify that there are no node network configuration policies defined on the cluster: USD oc get nncp Choose the appropriate method to update OpenShift Virtualization: If the list of node network configuration policies is not empty, exit this procedure and install the Kubernetes NMState Operator to preserve and support your existing nmstate configuration. If the list is empty, go to step 3. Annotate the HyperConverged custom resource (CR). The following command overwrites any existing JSON patches: USD oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged 'networkaddonsconfigs.kubevirt.io/jsonpatch=[{"op": "replace","path": "/spec/nmstate", "value": null}]' Note The HyperConverged object reports a TaintedConfiguration condition while this patch is applied. This is benign. Update OpenShift Virtualization. After the update completes, remove the annotation by running the following command: USD oc annotate -n openshift-cnv hco kubevirt-hyperconverged networkaddonsconfigs.kubevirt.io/jsonpatch- Optional: Add back any previously configured JSON patches that were overwritten. Some persistent volume claim (PVC) annotations created by the Containerized Data Importer (CDI) can cause the virtual machine snapshot restore operation to hang indefinitely. ( BZ#2070366 ) As a workaround, you can remove the annotations manually: Obtain the VirtualMachineSnapshotContent custom resource (CR) name from the status.virtualMachineSnapshotContentName value in the VirtualMachineSnapshot CR. Edit the VirtualMachineSnapshotContent CR and remove all lines that contain k8s.io/cloneRequest . If you did not specify a value for spec.dataVolumeTemplates in the VirtualMachine object, delete any DataVolume and PersistentVolumeClaim objects in this namespace where both of the following conditions are true: The object's name begins with restore- . The object is not referenced by virtual machines. This step is optional if you specified a value for spec.dataVolumeTemplates . Repeat the restore operation with the updated VirtualMachineSnapshot CR. 
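As an illustration of the first step in the preceding workaround, the VirtualMachineSnapshotContent name can be read directly from the snapshot's status field. The snapshot name and namespace below are placeholders:
# Print the status.virtualMachineSnapshotContentName value of a VirtualMachineSnapshot
oc get virtualmachinesnapshot <snapshot_name> -n <namespace> -o jsonpath='{.status.virtualMachineSnapshotContentName}'
The returned name identifies the VirtualMachineSnapshotContent CR to edit when removing the k8s.io/cloneRequest lines.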
Windows 11 virtual machines do not boot on clusters running in FIPS mode . Windows 11 requires a TPM (trusted platform module) device by default. However, the swtpm (software TPM emulator) package is incompatible with FIPS. ( BZ#2089301 ) In a Single Node OpenShift (SNO) cluster, a VMCannotBeEvicted alert occurs on virtual machines that are created from common templates that have the eviction strategy set to LiveMigrate . ( BZ#2092412 ) The QEMU guest agent on a Fedora 35 virtual machine is blocked by SELinux and does not report data. Other Fedora versions might be affected. ( BZ#2028762 ) As a workaround, disable SELinux on the virtual machine, run the QEMU guest agent commands, and then re-enable SELinux. If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host's default interface because of a change in the host network topology of OVN-Kubernetes. ( BZ#1885605 ) As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider. If you use Red Hat Ceph Storage or Red Hat OpenShift Data Foundation Storage, cloning more than 100 VMs at once might fail. ( BZ#1989527 ) As a workaround, you can perform a host-assisted copy by setting spec.cloneStrategy: copy in the storage profile manifest. For example: apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce volumeMode: Filesystem cloneStrategy: copy 1 status: provisioner: <provisioner> storageClass: <provisioner_class> 1 The default cloning method set to copy . In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. ( BZ#1992753 ) As a workaround, avoid using a single PVC in read-write mode with multiple VMs. The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. ( BZ#2026733 ) As a workaround, silence alerts . OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. ( BZ#2037611 ) As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod. If you configure the HyperConverged custom resource (CR) to enable mediated devices before drivers are installed, the new device configuration does not take effect. This issue can be triggered by updates. For example, if virt-handler is updated before daemonset , which installs NVIDIA drivers, then nodes cannot provide virtual machine GPUs. ( BZ#2046298 ) As a workaround: Remove mediatedDevicesConfiguration and permittedHostDevices from the HyperConverged CR. Update both mediatedDevicesConfiguration and permittedHostDevices stanzas with the configuration you want to use. If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones can also fail. ( BZ#2055595 ) As a workaround, you can restart the ceph-mgr to purge the VM clones. | [
"oc get nncp",
"oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged 'networkaddonsconfigs.kubevirt.io/jsonpatch=[{\"op\": \"replace\",\"path\": \"/spec/nmstate\", \"value\": null}]'",
"oc annotate -n openshift-cnv hco kubevirt-hyperconverged networkaddonsconfigs.kubevirt.io/jsonpatch-",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce volumeMode: Filesystem cloneStrategy: copy 1 status: provisioner: <provisioner> storageClass: <provisioner_class>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/virtualization/virt-4-11-release-notes |
Red Hat Ansible Automation Platform release notes | Red Hat Ansible Automation Platform release notes Red Hat Ansible Automation Platform 2.4 New features, enhancements, and bug fix information Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_release_notes/index |
9.2. Import DDL | 9.2. Import DDL You can create source relational models by importing DDL using the steps below. In Model Explorer, right-click and then click Import... , or click the File > Import... action. Select the import option Teiid Designer > DDL File (General) >> Source or View Model and click Next . Note You can also choose DDL File (Teiid) , which imports metadata from a Teiid DDL file. Select the existing DDL from either Choose from file system... or Choose from workspace... . Set the Model folder location, enter or select a valid model name, set the Model type (Source Model or View Model), set the desired options, and click Next (or Finish if enabled). Figure 9.2. DDL Import Options If you click Next , a difference report is presented so that you can view or deselect individual relational entities. Click Finish to complete. Figure 9.3. Import DDL Dialog | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/import_ddl
Chapter 14. Installing IBM Cloud Bare Metal (Classic) | Chapter 14. Installing IBM Cloud Bare Metal (Classic) 14.1. Prerequisites You can use installer-provisioned installation to install OpenShift Container Platform on IBM Cloud(R) Bare Metal (Classic) nodes. This document describes the prerequisites and procedures when installing OpenShift Container Platform on IBM Cloud nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Red Fish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. A provisioning network is required. Installer-provisioned installation of OpenShift Container Platform requires: One node with Red Hat Enterprise Linux CoreOS (RHCOS) 8.x installed, for running the provisioner Three control plane nodes One routable network One provisioning network Before starting an installer-provisioned installation of OpenShift Container Platform on IBM Cloud Bare Metal (Classic), address the following prerequisites and requirements. 14.1.1. Setting up IBM Cloud Bare Metal (Classic) infrastructure To deploy an OpenShift Container Platform cluster on IBM Cloud(R) Bare Metal (Classic) infrastructure, you must first provision the IBM Cloud nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Red Fish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. The provisioning network is required. You can customize IBM Cloud nodes using the IBM Cloud API. When creating IBM Cloud nodes, you must consider the following requirements. Use one data center per cluster All nodes in the OpenShift Container Platform cluster must run in the same IBM Cloud data center. Create public and private VLANs Create all nodes with a single public VLAN and a single private VLAN. Ensure subnets have sufficient IP addresses IBM Cloud public VLAN subnets use a /28 prefix by default, which provides 16 IP addresses. That is sufficient for a cluster consisting of three control plane nodes, four worker nodes, and two IP addresses for the API VIP and Ingress VIP on the baremetal network. For larger clusters, you might need a smaller prefix. IBM Cloud private VLAN subnets use a /26 prefix by default, which provides 64 IP addresses. IBM Cloud Bare Metal (Classic) uses private network IP addresses to access the Baseboard Management Controller (BMC) of each node. OpenShift Container Platform creates an additional subnet for the provisioning network. Network traffic for the provisioning network subnet routes through the private VLAN. For larger clusters, you might need a smaller prefix. Table 14.1. IP addresses per prefix IP addresses Prefix 32 /27 64 /26 128 /25 256 /24 Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is a non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. baremetal : The baremetal network is a routable network. You can use any NIC order to interface with the baremetal network, provided it is not the NIC specified in the provisioningNetworkInterface configuration setting or the NIC associated to a node's bootMACAddress configuration setting for the provisioning network. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. 
For example: NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> In the example, NIC1 on all control plane and worker nodes connects to the non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC2 on all control plane and worker nodes connects to the routable baremetal network. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. 2 Note Ensure PXE is enabled on the NIC used for the provisioning network and is disabled on all other NICs. Configuring canonical names Clients access the OpenShift Container Platform cluster nodes over the baremetal network. Configure IBM Cloud subdomains or subzones where the canonical name extension is the cluster name. For example: Creating DNS entries You must create DNS A record entries resolving to unused IP addresses on the public subnet for the following: Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Control plane and worker nodes already have DNS entries after provisioning. The following table provides an example of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The host names of the control plane and worker nodes are examples, so you can use any host naming convention you prefer. Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Provisioner node provisioner.<cluster_name>.<domain> <ip> Master-0 openshift-master-0.<cluster_name>.<domain> <ip> Master-1 openshift-master-1.<cluster_name>.<domain> <ip> Master-2 openshift-master-2.<cluster_name>.<domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<domain> <ip> OpenShift Container Platform includes functionality that uses cluster membership information to generate A records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. Important After provisioning the IBM Cloud nodes, you must create a DNS entry for the api.<cluster_name>.<domain> domain name on the external DNS because removing CoreDNS causes the local entry to disappear. Failure to create a DNS record for the api.<cluster_name>.<domain> domain name in the external DNS server prevents worker nodes from joining the cluster. Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync. Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. Configure a DHCP server IBM Cloud Bare Metal (Classic) does not run DHCP on the public or private VLANs. After provisioning IBM Cloud nodes, you must set up a DHCP server for the public VLAN, which corresponds to OpenShift Container Platform's baremetal network. Note The IP addresses allocated to each node do not need to match the IP addresses allocated by the IBM Cloud Bare Metal (Classic) provisioning system. See the "Configuring the public subnet" section for details. 
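Before provisioning continues, you can optionally confirm that the DNS entries described above resolve to the reserved public IP addresses. This is an informal check with standard tooling, and the host names are placeholders:
# Both queries should return the unused public-subnet IP addresses you reserved
dig +short api.<cluster_name>.<domain>
dig +short test.apps.<cluster_name>.<domain>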
Ensure BMC access privileges The "Remote management" page for each node on the dashboard contains the node's intelligent platform management interface (IPMI) credentials. The default IPMI privileges prevent the user from making certain boot target changes. You must change the privilege level to OPERATOR so that Ironic can make those changes. In the install-config.yaml file, add the privilegelevel parameter to the URLs used to configure each BMC. See the "Configuring the install-config.yaml file" section for additional details. For example: ipmi://<IP>:<port>?privilegelevel=OPERATOR Alternatively, contact IBM Cloud support and request that they increase the IPMI privileges to ADMINISTRATOR for each node. Create bare metal servers Create bare metal servers in the IBM Cloud dashboard by navigating to Create resource Bare Metal Servers for Classic . Alternatively, you can create bare metal servers with the ibmcloud CLI utility. For example: USD ibmcloud sl hardware create --hostname <SERVERNAME> \ --domain <DOMAIN> \ --size <SIZE> \ --os <OS-TYPE> \ --datacenter <DC-NAME> \ --port-speed <SPEED> \ --billing <BILLING> See Installing the stand-alone IBM Cloud CLI for details on installing the IBM Cloud CLI. Note IBM Cloud servers might take 3-5 hours to become available. 14.2. Setting up the environment for an OpenShift Container Platform installation 14.2.1. Preparing the provisioner node on IBM Cloud Bare Metal (Classic) infrastructure Perform the following steps to prepare the provisioner node. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \ --enable=rhel-8-for-x86_64-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . 
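If you want to confirm that the node registered successfully before installing packages, you can optionally check the subscription state:
sudo subscription-manager status
The command reports the overall registration state for the node.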
Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt kni Start firewalld : USD sudo systemctl start firewalld Enable firewalld : USD sudo systemctl enable firewalld Start the http service: USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Set the ID of the provisioner node: USD PRVN_HOST_ID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl hardware list Set the ID of the public subnet: USD PUBLICSUBNETID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl subnet list Set the ID of the private subnet: USD PRIVSUBNETID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl subnet list Set the provisioner node public IP address: USD PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r) Set the CIDR for the public network: USD PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr) Set the IP address and CIDR for the public network: USD PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR Set the gateway for the public network: USD PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r) Set the private IP address of the provisioner node: USD PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | \ jq .primaryBackendIpAddress -r) Set the CIDR for the private network: USD PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr) Set the IP address and CIDR for the private network: USD PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR Set the gateway for the private network: USD PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r) Set up the bridges for the baremetal and provisioning networks: USD sudo nohup bash -c " nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \"10.0.0.0/8 USDPRIV_GATEWAY\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 " Note For eth1 and eth2 , substitute the appropriate interface name, as needed. 
If required, SSH back into the provisioner node: # ssh kni@provisioner.<cluster-name>.<domain> Verify the connection bridges have been properly created: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2 Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install on Bare Metal with user-provisioned infrastructure . In step 1, click Download pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 14.2.2. Configuring the public subnet All of the OpenShift Container Platform cluster nodes must be on the public subnet. IBM Cloud(R) Bare Metal (Classic) does not provide a DHCP server on the subnet. Set it up separately on the provisioner node. You must reset the BASH variables defined when preparing the provisioner node. Rebooting the provisioner node after preparing it will delete the BASH variables previously set. Procedure Install dnsmasq : USD sudo dnf install dnsmasq Open the dnsmasq configuration file: USD sudo vi /etc/dnsmasq.conf Add the following configuration to the dnsmasq configuration file: interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile 1 Set the DHCP range. Replace both instances of <ip_addr> with one unused IP address from the public subnet so that the dhcp-range for the baremetal network begins and ends with the same the IP address. Replace <pub_cidr> with the CIDR of the public subnet. 2 Set the DHCP option. Replace <pub_gateway> with the IP address of the gateway for the baremetal network. Replace <prvn_priv_ip> with the IP address of the provisioner node's private IP address on the provisioning network. Replace <prvn_pub_ip> with the IP address of the provisioner node's public IP address on the baremetal network. To retrieve the value for <pub_cidr> , execute: USD ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr Replace <publicsubnetid> with the ID of the public subnet. To retrieve the value for <pub_gateway> , execute: USD ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r Replace <publicsubnetid> with the ID of the public subnet. To retrieve the value for <prvn_priv_ip> , execute: USD ibmcloud sl hardware detail <id> --output JSON | \ jq .primaryBackendIpAddress -r Replace <id> with the ID of the provisioner node. To retrieve the value for <prvn_pub_ip> , execute: USD ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r Replace <id> with the ID of the provisioner node. Obtain the list of hardware for the cluster: USD ibmcloud sl hardware list Obtain the MAC addresses and IP addresses for each node: USD ibmcloud sl hardware detail <id> --output JSON | \ jq '.networkComponents[] | \ "\(.primaryIpAddress) \(.macAddress)"' | grep -v null Replace <id> with the ID of the node. Example output "10.196.130.144 00:e0:ed:6a:ca:b4" "141.125.65.215 00:e0:ed:6a:ca:b5" Make a note of the MAC address and IP address of the public network. 
Make a separate note of the MAC address of the private network, which you will use later in the install-config.yaml file. Repeat this procedure for each node until you have all the public MAC and IP addresses for the public baremetal network, and the MAC addresses of the private provisioning network. Add the MAC and IP address pair of the public baremetal network for each node into the dnsmasq.hostsfile file: USD sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile Example input 00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1 ... Replace <mac>,<ip> with the public MAC address and public IP address of the corresponding node name. Start dnsmasq : USD sudo systemctl start dnsmasq Enable dnsmasq so that it starts when booting the node: USD sudo systemctl enable dnsmasq Verify dnsmasq is running: USD sudo systemctl status dnsmasq Example output ● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k Open ports 53 and 67 with UDP protocol: USD sudo firewall-cmd --add-port 53/udp --permanent USD sudo firewall-cmd --add-port 67/udp --permanent Add provisioning to the external zone with masquerade: USD sudo firewall-cmd --change-zone=provisioning --zone=external --permanent This step ensures network address translation for IPMI calls to the management subnet. Reload the firewalld configuration: USD sudo firewall-cmd --reload 14.2.3. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installer to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.10 export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 14.2.4. Extracting the OpenShift Container Platform installer After retrieving the installer, the step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 14.2.5. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information is teaching the installer and the resulting cluster enough about the available IBM Cloud(R) Bare Metal (Classic) hardware so that it is able to fully manage it. The material difference between installing on bare metal and installing on IBM Cloud Bare Metal (Classic) is that you must explicitly set the privilege level for IPMI in the BMC section of the install-config.yaml file. Procedure Configure install-config.yaml . Change the appropriate variables to match the environment, including pullSecret and sshKey . 
apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: "/dev/sda" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: "/dev/sda" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 3 The bmc.address provides a privilegelevel configuration setting with the value set to OPERATOR . This is required for IBM Cloud Bare Metal (Classic) infrastructure. 2 4 Add the MAC address of the private provisioning network NIC for the corresponding node. Note You can use the ibmcloud command-line utility to retrieve the password. USD ibmcloud sl hardware detail <id> --output JSON | \ jq '"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)"' Replace <id> with the ID of the node. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file into the directory: USD cp install-config.yaml ~/clusterconfig Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 14.2.6. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 14.2. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane (master) nodes. 
Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIP (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIP configuration setting in the install-config.yaml file. The IP address must be from the primary IPv4 network when using dual stack networking. If not set, the installer uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIP (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIP configuration setting in the install-config.yaml file. The IP address must be from the primary IPv4 network when using dual stack networking. If not set, the installer uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Table 14.3. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . 
provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 14.4. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 14.2.7. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 14.5. Subfields Subfield Description deviceName A string containing a Linux device name like /dev/vda . The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. 
The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 14.2.8. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 14.2.9. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: USD ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 14.2.10. Following the installation During the deployment process, you can check the installation's overall status by issuing the tail command to the .openshift_install.log log file in the install directory folder: USD tail -f /path/to/install-dir/.openshift_install.log | [
"<cluster_name>.<domain>",
"test-cluster.example.com",
"ipmi://<IP>:<port>?privilegelevel=OPERATOR",
"ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>",
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt kni",
"sudo systemctl start firewalld",
"sudo systemctl enable firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"PRVN_HOST_ID=<ID>",
"ibmcloud sl hardware list",
"PUBLICSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRIVSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)",
"PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr)",
"PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR",
"PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r)",
"PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryBackendIpAddress -r)",
"PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr)",
"PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR",
"PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r)",
"sudo nohup bash -c \" nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \\\"10.0.0.0/8 USDPRIV_GATEWAY\\\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 \"",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2",
"vim pull-secret.txt",
"sudo dnf install dnsmasq",
"sudo vi /etc/dnsmasq.conf",
"interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryBackendIpAddress -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r",
"ibmcloud sl hardware list",
"ibmcloud sl hardware detail <id> --output JSON | jq '.networkComponents[] | \"\\(.primaryIpAddress) \\(.macAddress)\"' | grep -v null",
"\"10.196.130.144 00:e0:ed:6a:ca:b4\" \"141.125.65.215 00:e0:ed:6a:ca:b5\"",
"sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile",
"00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1",
"sudo systemctl start dnsmasq",
"sudo systemctl enable dnsmasq",
"sudo systemctl status dnsmasq",
"● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k",
"sudo firewall-cmd --add-port 53/udp --permanent",
"sudo firewall-cmd --add-port 67/udp --permanent",
"sudo firewall-cmd --change-zone=provisioning --zone=external --permanent",
"sudo firewall-cmd --reload",
"export VERSION=stable-4.10 export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: \"/dev/sda\" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: \"/dev/sda\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ibmcloud sl hardware detail <id> --output JSON | jq '\"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)\"'",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfig",
"ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster",
"tail -f /path/to/install-dir/.openshift_install.log"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/installing/installing-ibm-cloud-bare-metal-classic |
4.4. GFS Quota Management | 4.4. GFS Quota Management File-system quotas are used to limit the amount of file-system space a user or group can use. A user or group does not have a quota limit until one is set. GFS keeps track of the space used by each user and group even when there are no limits in place. GFS updates quota information in a transactional way so system crashes do not require quota usages to be reconstructed. To prevent a performance slowdown, a GFS node synchronizes updates to the quota file only periodically. The "fuzzy" quota accounting can allow users or groups to slightly exceed the set limit. To minimize this, GFS dynamically reduces the synchronization period as a "hard" quota limit is approached. GFS uses its gfs_quota command to manage quotas. Other Linux quota facilities cannot be used with GFS. 4.4.1. Setting Quotas Two quota settings are available for each user ID (UID) or group ID (GID): a hard limit and a warn limit . A hard limit is the amount of space that can be used. The file system will not let the user or group use more than that amount of disk space. A hard limit value of zero means that no limit is enforced. A warn limit is usually a value less than the hard limit. The file system will notify the user or group when the warn limit is reached to warn them of the amount of space they are using. A warn limit value of zero means that no limit is enforced. Limits are set using the gfs_quota command. The command only needs to be run on a single node where GFS is mounted. Usage Setting Quotas, Hard Limit Setting Quotas, Warn Limit User A user ID to limit or warn. It can be either a user name from the password file or the UID number. Group A group ID to limit or warn. It can be either a group name from the group file or the GID number. Size Specifies the new value to limit or warn. By default, the value is in units of megabytes. The additional -k , -s and -b flags change the units to kilobytes, sectors, and file-system blocks, respectively. MountPoint Specifies the GFS file system to which the actions apply. Examples This example sets the hard limit for user Bert to 1024 megabytes (1 gigabyte) on file system /gfs . This example sets the warn limit for group ID 21 to 50 kilobytes on file system /gfs . | [
"gfs_quota limit -u User -l Size -f MountPoint",
"gfs_quota limit -g Group -l Size -f MountPoint",
"gfs_quota warn -u User -l Size -f MountPoint",
"gfs_quota warn -g Group -l Size -f MountPoint",
"gfs_quota limit -u Bert -l 1024 -f /gfs",
"gfs_quota warn -g 21 -l 50 -k -f /gfs"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-manage-quota |
Chapter 3. Encryption and Key Management | Chapter 3. Encryption and Key Management The Red Hat Ceph Storage cluster typically resides in its own network security zone, especially when using a private storage cluster network. Important Security zone separation might be insufficient for protection if an attacker gains access to Ceph clients on the public network. There are situations where there is a security requirement to assure the confidentiality or integrity of network traffic, and where Red Hat Ceph Storage uses encryption and key management, including: SSH SSL Termination Encryption in Transit Encryption at Rest 3.1. SSH All nodes in the Red Hat Ceph Storage cluster use SSH as part of deploying the cluster. This means that on each node: A cephadm user exists with password-less root privileges. The SSH service is enabled and by extension port 22 is open. A copy of the cephadm user's public SSH key is available. Important Any person with access to the cephadm user by extension has permission to run commands as root on any node in the Red Hat Ceph Storage cluster. Additional Resources See the How cephadm works section in the Red Hat Ceph Storage Installation Guide for more information. 3.2. SSL Termination The Ceph Object Gateway may be deployed in conjunction with HAProxy and keepalived for load balancing and failover. The object gateway Red Hat Ceph Storage versions 2 and 3 use Civetweb. Earlier versions of Civetweb do not support SSL and later versions support SSL with some performance limitations. The object gateway Red Hat Ceph Storage version 5 uses Beast. You can configure the Beast front-end web server to use the OpenSSL library to provide Transport Layer Security (TLS). When using HAProxy and keepalived to terminate SSL connections, the HAProxy and keepalived components use encryption keys. When using HAProxy and keepalived to terminate SSL, the connection between the load balancer and the Ceph Object Gateway is NOT encrypted. See Configuring SSL for Beast and HAProxy and keepalived for details. 3.3. Messenger v2 protocol The second version of Ceph's on-wire protocol, msgr2 , has the following features: A secure mode encrypting all data moving through the network. Encapsulation improvement of authentication payloads, enabling future integration of new authentication modes. Improvements to feature advertisement and negotiation. The Ceph daemons bind to multiple ports allowing both the legacy v1-compatible, and the new, v2-compatible Ceph clients to connect to the same storage cluster. Ceph clients or other Ceph daemons connecting to the Ceph Monitor daemon uses the v2 protocol first, if possible, but if not, then the legacy v1 protocol is used. By default, both messenger protocols, v1 and v2 , are enabled. The new v2 port is 3300, and the legacy v1 port is 6789, by default. The messenger v2 protocol has two configuration options that control whether the v1 or the v2 protocol is used: ms_bind_msgr1 - This option controls whether a daemon binds to a port speaking the v1 protocol; it is true by default. ms_bind_msgr2 - This option controls whether a daemon binds to a port speaking the v2 protocol; it is true by default. Similarly, two options control based on IPv4 and IPv6 addresses used: ms_bind_ipv4 - This option controls whether a daemon binds to an IPv4 address; it is true by default. ms_bind_ipv6 - This option controls whether a daemon binds to an IPv6 address; it is true by default. 
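To see how these options are currently set on a running cluster, you can query the centralized configuration database. This is an optional check, and the daemon target shown (mon) is only an example:
# Show the effective values for the monitor daemons
ceph config get mon ms_bind_msgr2
ceph config get mon ms_bind_ipv6
A value of true reflects the defaults described above.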
Note The ability to bind to multiple ports has paved the way for dual-stack IPv4 and IPv6 support. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx . Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx . Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . Ceph Object Gateway Encryption Also, the Ceph Object Gateway supports encryption with customer-provided keys using its S3 API. Important To comply with regulatory compliance standards requiring strict encryption in transit, administrators MUST deploy the Ceph Object Gateway with client-side encryption. Ceph Block Device Encryption System administrators integrating Ceph as a backend for Red Hat OpenStack Platform 13 MUST encrypt Ceph block device volumes using dm_crypt for RBD Cinder to ensure on-wire encryption within the Ceph storage cluster. Important To comply with regulatory compliance standards requiring strict encryption in transit, system administrators MUST use dmcrypt for RBD Cinder to ensure on-wire encryption within the Ceph storage cluster. Additional resources See the Connection mode configuration options in the Red Hat Ceph Storage Configuration Guide for more details. 3.4. Encryption in transit Starting with Red Hat Ceph Storage 5, encryption for all Ceph traffic over the network is enabled by default, with the introduction of the messenger version 2 protocol. The secure mode setting for messenger v2 encrypts communication between Ceph daemons and Ceph clients, providing end-to-end encryption. You can check for encryption of the messenger v2 protocol with the ceph config dump command, with the netstat -Ip | grep ceph-osd command, or by verifying that the Ceph daemons are listening on the v2 ports. Additional resources See SSL Termination for details on SSL termination. See S3 server-side encryption for details on S3 API encryption. 3.5. Encryption at Rest Red Hat Ceph Storage supports encryption at rest in a few scenarios: Ceph Storage Cluster: The Ceph Storage Cluster supports Linux Unified Key Setup or LUKS encryption of Ceph OSDs and their corresponding journals, write-ahead logs, and metadata databases. In this scenario, Ceph will encrypt all data at rest irrespective of whether the client is a Ceph Block Device, Ceph Filesystem, or a custom application built on librados . Ceph Object Gateway: The Ceph storage cluster supports encryption of client objects. When the Ceph Object Gateway encrypts objects, they are encrypted independently of the Red Hat Ceph Storage cluster. Additionally, the data transmitted between the Ceph Object Gateway and the Ceph Storage Cluster is in encrypted form. Ceph Storage Cluster Encryption The Ceph storage cluster supports encrypting data stored in Ceph OSDs. Red Hat Ceph Storage can encrypt logical volumes with lvm by specifying dmcrypt ; that is, lvm , invoked by ceph-volume , encrypts an OSD's logical volume, not its physical volume. It can encrypt non-LVM devices like partitions using the same OSD key. Encrypting logical volumes allows for more configuration flexibility. Ceph uses LUKS v1 rather than LUKS v2, because LUKS v1 has the broadest support among Linux distributions.
When creating an OSD, lvm will generate a secret key and pass the key to the Ceph Monitors securely in a JSON payload via stdin . The attribute name for the encryption key is dmcrypt_key . Important System administrators must explicitly enable encryption. By default, Ceph does not encrypt data stored in Ceph OSDs. System administrators must enable dmcrypt to encrypt data stored in Ceph OSDs. When using a Ceph Orchestrator service specification file for adding Ceph OSDs to the storage cluster, set the following option in the file to encrypt Ceph OSDs: Example Note LUKS and dmcrypt only address encryption for data at rest, not encryption for data in transit. Ceph Object Gateway Encryption The Ceph Object Gateway supports encryption with customer-provided keys using its S3 API. When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer's responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object. Additional Resources See S3 API server-side encryption in the Red Hat Ceph Storage Developer Guide for details. | [
"encrypted: true"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/data_security_and_hardening_guide/assembly-encryption-and-key-management |
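Section 3.4 above mentions checking messenger v2 encryption with the ceph config dump and netstat commands but does not show a worked sequence. The following is a minimal sketch of how such a check, and an optional cluster-wide switch to the secure mode, could look; the decision to pin every connection class to secure is an assumption for illustration, not a requirement stated in this guide.

```bash
# Sketch: verify msgr2 usage and optionally pin all connection classes to the secure mode.
# Assumes a Red Hat Ceph Storage 5 cluster with a working ceph CLI on the current host.

# Daemons speaking msgr2 listen on port 3300; the legacy v1 protocol uses 6789.
ss -tlnp | grep -E ':(3300|6789)'

# Show any connection-mode overrides already stored in the monitor configuration database.
ceph config dump | grep -E 'ms_(cluster|service|client)_mode'

# (Assumption) Require full encryption for daemon-to-daemon, daemon-facing, and
# client-initiated connections instead of allowing the default crc mode.
ceph config set global ms_cluster_mode secure
ceph config set global ms_service_mode secure
ceph config set global ms_client_mode secure
```

The Connection mode configuration options reference cited above remains the authoritative list of these settings and their accepted values.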
Chapter 4. Hardening Your System with Tools and Services | Chapter 4. Hardening Your System with Tools and Services 4.1. Desktop Security Red Hat Enterprise Linux 7 offers several ways for hardening the desktop against attacks and preventing unauthorized accesses. This section describes recommended practices for user passwords, session and account locking, and safe handling of removable media. 4.1.1. Password Security Passwords are the primary method that Red Hat Enterprise Linux 7 uses to verify a user's identity. This is why password security is so important for protection of the user, the workstation, and the network. For security purposes, the installation program configures the system to use Secure Hash Algorithm 512 ( SHA512 ) and shadow passwords. It is highly recommended that you do not alter these settings. If shadow passwords are deselected during installation, all passwords are stored as a one-way hash in the world-readable /etc/passwd file, which makes the system vulnerable to offline password cracking attacks. If an intruder can gain access to the machine as a regular user, he can copy the /etc/passwd file to his own machine and run any number of password cracking programs against it. If there is an insecure password in the file, it is only a matter of time before the password cracker discovers it. Shadow passwords eliminate this type of attack by storing the password hashes in the file /etc/shadow , which is readable only by the root user. This forces a potential attacker to attempt password cracking remotely by logging into a network service on the machine, such as SSH or FTP. This sort of brute-force attack is much slower and leaves an obvious trail as hundreds of failed login attempts are written to system files. Of course, if the cracker starts an attack in the middle of the night on a system with weak passwords, the cracker may have gained access before dawn and edited the log files to cover his tracks. In addition to format and storage considerations is the issue of content. The single most important thing a user can do to protect his account against a password cracking attack is create a strong password. Note Red Hat recommends using a central authentication solution, such as Red Hat Identity Management (IdM). Using a central solution is preferred over using local passwords. For details, see: Introduction to Red Hat Identity Management Defining Password Policies 4.1.1.1. Creating Strong Passwords When creating a secure password, the user must remember that long passwords are stronger than short and complex ones. It is not a good idea to create a password of just eight characters, even if it contains digits, special characters and uppercase letters. Password cracking tools, such as John The Ripper, are optimized for breaking such passwords, which are also hard to remember by a person. In information theory, entropy is the level of uncertainty associated with a random variable and is presented in bits. The higher the entropy value, the more secure the password is. According to NIST SP 800-63-1, passwords that are not present in a dictionary comprised of 50000 commonly selected passwords should have at least 10 bits of entropy. As such, a password that consists of four random words contains around 40 bits of entropy. 
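To make the entropy figures above concrete: a word chosen uniformly at random from an N-word list contributes log2(N) bits, and a passphrase of independently chosen words multiplies that by the word count. The sketch below is purely illustrative; the 1024-word list size is an assumption chosen so that four words land near the roughly 40 bits quoted above.

```bash
# Illustrative sketch: entropy of a passphrase built from 4 words drawn
# uniformly and independently from an assumed 1024-word list.
# In bc, l() is the natural logarithm, so l(n)/l(2) converts it to log base 2.
echo "4 * l(1024) / l(2)" | bc -l    # prints approximately 40, i.e. about 40 bits
```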
A long password consisting of multiple words for added security is also called a passphrase , for example: If the system enforces the use of uppercase letters, digits, or special characters, the passphrase that follows the above recommendation can be modified in a simple way, for example by changing the first character to uppercase and appending " 1! ". Note that such a modification does not increase the security of the passphrase significantly. Another way to create a password is to use a password generator. The pwmake utility is a command-line tool for generating random passwords that consist of all four groups of characters - uppercase, lowercase, digits and special characters. The utility allows you to specify the number of entropy bits that are used to generate the password. The entropy is pulled from /dev/urandom . The minimum number of bits you can specify is 56, which is enough for passwords on systems and services where brute force attacks are rare. 64 bits is adequate for applications where the attacker does not have direct access to the password hash file. For situations when the attacker might obtain direct access to the password hash or the password is used as an encryption key, 80 to 128 bits should be used. If you specify an invalid number of entropy bits, pwmake will use the default number of bits. To create a password of 128 bits, enter the following command: While there are different approaches to creating a secure password, always avoid the following bad practices: Using a single dictionary word, a word in a foreign language, an inverted word, or only numbers. Using less than 10 characters for a password or passphrase. Using a sequence of keys from the keyboard layout. Writing down your passwords. Using personal information in a password, such as birth dates, anniversaries, family member names, or pet names. Using the same passphrase or password on multiple machines. While creating secure passwords is imperative, managing them properly is also important, especially for system administrators within larger organizations. The following section details good practices for creating and managing user passwords within an organization. 4.1.1.2. Forcing Strong Passwords If an organization has a large number of users, the system administrators have two basic options available to force the use of strong passwords. They can create passwords for the user, or they can let users create their own passwords while verifying the passwords are of adequate strength. Creating the passwords for the users ensures that the passwords are good, but it becomes a daunting task as the organization grows. It also increases the risk of users writing their passwords down, thus exposing them. For these reasons, most system administrators prefer to have the users create their own passwords, but actively verify that these passwords are strong enough. In some cases, administrators may force users to change their passwords periodically through password aging. When users are asked to create or change passwords, they can use the passwd command-line utility, which is PAM-aware ( Pluggable Authentication Modules ) and checks to see if the password is too short or otherwise easy to crack. This checking is performed by the pam_pwquality.so PAM module. Note In Red Hat Enterprise Linux 7, the pam_pwquality PAM module replaced pam_cracklib , which was used in Red Hat Enterprise Linux 6 as a default module for password quality checking. It uses the same back end as pam_cracklib .
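Besides enforcing the rules through PAM at password-change time, administrators who want to test candidate passwords against the same policy can use the pwscore utility shipped in the libpwquality package. A short sketch follows; the candidate strings are examples only.

```bash
# Sketch: score candidate passwords against the rules in /etc/security/pwquality.conf.
# pwscore reads one password on standard input and prints a 0-100 quality score,
# or an explanation when the password fails a check.
echo "password1" | pwscore
echo "randomword1 randomword2 randomword3 randomword4" | pwscore
```

Because pwscore exits with a non-zero status when a check fails, it is also convenient in scripts that pre-validate passwords before they are handed to passwd or chpasswd.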
The pam_pwquality module is used to check a password's strength against a set of rules. Its procedure consists of two steps: first it checks if the provided password is found in a dictionary. If not, it continues with a number of additional checks. pam_pwquality is stacked alongside other PAM modules in the password component of the /etc/pam.d/passwd file, and the custom set of rules is specified in the /etc/security/pwquality.conf configuration file. For a complete list of these checks, see the pwquality.conf (8) manual page. Example 4.1. Configuring password strength-checking in pwquality.conf To enable using pam_pwquality , add the following line to the password stack in the /etc/pam.d/passwd file: Options for the checks are specified one per line. For example, to require a password with a minimum length of 8 characters, including all four classes of characters, add the following lines to the /etc/security/pwquality.conf file: To set a password strength-check for character sequences and same consecutive characters, add the following lines to /etc/security/pwquality.conf : In this example, the password entered cannot contain more than 3 characters in a monotonic sequence, such as abcd , or more than 3 identical consecutive characters, such as 1111 . Note As the root user is the one who enforces the rules for password creation, they can set any password for themselves or for a regular user, despite the warning messages. 4.1.1.3. Configuring Password Aging Password aging is another technique used by system administrators to defend against bad passwords within an organization. Password aging means that after a specified period (usually 90 days), the user is prompted to create a new password. The theory behind this is that if a user is forced to change his password periodically, a cracked password is only useful to an intruder for a limited amount of time. The downside to password aging, however, is that users are more likely to write their passwords down. To specify password aging under Red Hat Enterprise Linux 7, make use of the chage command. Important In Red Hat Enterprise Linux 7, shadow passwords are enabled by default. For more information, see the Red Hat Enterprise Linux 7 System Administrator's Guide . The -M option of the chage command specifies the maximum number of days the password is valid. For example, to set a user's password to expire in 90 days, use the following command: chage -M 90 username In the above command, replace username with the name of the user. To disable password expiration, use the value of -1 after the -M option. For more information on the options available with the chage command, see the table below. Table 4.1. chage command line options Option Description -d days Specifies the number of days since January 1, 1970 the password was changed. -E date Specifies the date on which the account is locked, in the format YYYY-MM-DD. Instead of the date, the number of days since January 1, 1970 can also be used. -I days Specifies the number of inactive days after the password expiration before locking the account. If the value is 0 , the account is not locked after the password expires. -l Lists current account aging settings. -m days Specifies the minimum number of days after which the user must change passwords. If the value is 0 , the password does not expire. -M days Specifies the maximum number of days for which the password is valid.
When the number of days specified by this option plus the number of days specified with the -d option is less than the current day, the user must change passwords before using the account. -W days Specifies the number of days before the password expiration date to warn the user. You can also use the chage command in interactive mode to modify multiple password aging and account details. Use the following command to enter interactive mode: chage <username> The following is a sample interactive session using this command: You can configure a password to expire the first time a user logs in. This forces users to change passwords immediately. Set up an initial password. To assign a default password, enter the following command at a shell prompt as root : passwd username Warning The passwd utility has the option to set a null password. Using a null password, while convenient, is a highly insecure practice, as any third party can log in and access the system using the insecure user name. Avoid using null passwords wherever possible. If it is not possible, always make sure that the user is ready to log in before unlocking an account with a null password. Force immediate password expiration by running the following command as root : chage -d 0 username This command sets the value for the date the password was last changed to the epoch (January 1, 1970). This value forces immediate password expiration no matter what password aging policy, if any, is in place. Upon the initial log in, the user is now prompted for a new password. 4.1.2. Account Locking In Red Hat Enterprise Linux 7, the pam_faillock PAM module allows system administrators to lock out user accounts after a specified number of failed attempts. Limiting user login attempts serves mainly as a security measure that aims to prevent possible brute force attacks targeted to obtain a user's account password. With the pam_faillock module, failed login attempts are stored in a separate file for each user in the /var/run/faillock directory. Note The order of lines in the failed attempt log files is important. Any change in this order can lock all user accounts, including the root user account when the even_deny_root option is used. Follow these steps to configure account locking: To lock out any non-root user after three unsuccessful attempts and unlock that user after 10 minutes, add two lines to the auth section of the /etc/pam.d/system-auth and /etc/pam.d/password-auth files. After your edits, the entire auth section in both files should look like this: Lines number 2 and 4 have been added. Add the following line to the account section of both files specified in the step: To apply account locking for the root user as well, add the even_deny_root option to the pam_faillock entries in the /etc/pam.d/system-auth and /etc/pam.d/password-auth files: When the user john attempts to log in for the fourth time after failing to log in three times previously, his account is locked upon the fourth attempt: To prevent the system from locking users out even after multiple failed logins, add the following line just above the line where pam_faillock is called for the first time in both /etc/pam.d/system-auth and /etc/pam.d/password-auth . Also replace user1 , user2 , and user3 with the actual user names. 
To view the number of failed attempts per user, run, as root , the following command: To unlock a user's account, run, as root , the following command: Important Running cron jobs resets the failure counter of pam_faillock of that user that is running the cron job, and thus pam_faillock should not be configured for cron . See the Knowledge Centered Support (KCS) solution for more information. Keeping Custom Settings with authconfig When modifying authentication configuration using the authconfig utility, the system-auth and password-auth files are overwritten with the settings from the authconfig utility. This can be avoided by creating symbolic links in place of the configuration files, which authconfig recognizes and does not overwrite. In order to use custom settings in the configuration files and authconfig simultaneously, configure account locking using the following steps: Check whether the system-auth and password-auth files are already symbolic links pointing to system-auth-ac and password-auth-ac (this is the system default): If the output is similar to the following, the symbolic links are in place, and you can skip to step number 3: If the system-auth and password-auth files are not symbolic links, continue with the step. Rename the configuration files: Create configuration files with your custom settings: The /etc/pam.d/system-auth-local file should contain the following lines: The /etc/pam.d/password-auth-local file should contain the following lines: Create the following symbolic links: For more information on various pam_faillock configuration options, see the pam_faillock (8) manual page. Removing the nullok option The nullok option, which allows users to log in with a blank password if the password field in the /etc/shadow file is empty, is enabled by default. To disable the nullok option, remove the nullok string from configuration files in the /etc/pam.d/ directory, such as /etc/pam.d/system-auth or /etc/pam.d/password-auth . See the Will nullok option allow users to login without entering a password? KCS solution for more information. 4.1.3. Session Locking Users may need to leave their workstation unattended for a number of reasons during everyday operation. This could present an opportunity for an attacker to physically access the machine, especially in environments with insufficient physical security measures (see Section 1.2.1, "Physical Controls" ). Laptops are especially exposed since their mobility interferes with physical security. You can alleviate these risks by using session locking features which prevent access to the system until a correct password is entered. Note The main advantage of locking the screen instead of logging out is that a lock allows the user's processes (such as file transfers) to continue running. Logging out would stop these processes. 4.1.3.1. Locking Virtual Consoles Using vlock To lock a virtual console, use the vlock utility. Install it by entering the following command as root: After installation, you can lock any console session by using the vlock command without any additional parameters. This locks the currently active virtual console session while still allowing access to the others. To prevent access to all virtual consoles on the workstation, execute the following: In this case, vlock locks the currently active console and the -a option prevents switching to other virtual consoles. See the vlock(1) man page for additional information. 4.1.4. 
Enforcing Read-Only Mounting of Removable Media To enforce read-only mounting of removable media (such as USB flash disks), the administrator can use a udev rule to detect removable media and configure them to be mounted read-only using the blockdev utility. This is sufficient for enforcing read-only mounting of physical media. Using blockdev to Force Read-Only Mounting of Removable Media To force all removable media to be mounted read-only, create a new udev configuration file named, for example, 80-readonly-removables.rules in the /etc/udev/rules.d/ directory with the following content: SUBSYSTEM=="block",ATTRS{removable}=="1",RUN{program}="/sbin/blockdev --setro %N" The above udev rule ensures that any newly connected removable block (storage) device is automatically configured as read-only using the blockdev utility. Applying New udev Settings For these settings to take effect, the new udev rules need to be applied. The udev service automatically detects changes to its configuration files, but new settings are not applied to already existing devices. Only newly connected devices are affected by the new settings. Therefore, you need to unmount and unplug all connected removable media to ensure that the new settings are applied to them when they are plugged in. To force udev to re-apply all rules to already existing devices, enter the following command as root : Note that forcing udev to re-apply all rules using the above command does not affect any storage devices that are already mounted. To force udev to reload all rules (in case the new rules are not automatically detected for some reason), use the following command: | [
"randomword1 randomword2 randomword3 randomword4",
"pwmake 128",
"password required pam_pwquality.so retry=3",
"minlen = 8 minclass = 4",
"maxsequence = 3 maxrepeat = 3",
"~]# chage juan Changing the aging information for juan Enter the new value, or press ENTER for the default Minimum Password Age [0]: 10 Maximum Password Age [99999]: 90 Last Password Change (YYYY-MM-DD) [2006-08-18]: Password Expiration Warning [7]: Password Inactive [-1]: Account Expiration Date (YYYY-MM-DD) [1969-12-31]:",
"1 auth required pam_env.so 2 auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 3 auth sufficient pam_unix.so nullok try_first_pass 4 auth [default=die] pam_faillock.so authfail audit deny=3 unlock_time=600 5 auth requisite pam_succeed_if.so uid >= 1000 quiet_success 6 auth required pam_deny.so",
"account required pam_faillock.so",
"auth required pam_faillock.so preauth silent audit deny=3 even_deny_root unlock_time=600 auth sufficient pam_unix.so nullok try_first_pass auth [default=die] pam_faillock.so authfail audit deny=3 even_deny_root unlock_time=600 account required pam_faillock.so",
"~]USD su - john Account locked due to 3 failed logins su: incorrect password",
"auth [success=1 default=ignore] pam_succeed_if.so user in user1:user2:user3",
"~]USD faillock john: When Type Source Valid 2013-03-05 11:44:14 TTY pts/0 V",
"faillock --user <username> --reset",
"~]# ls -l /etc/pam.d/{password,system}-auth",
"lrwxrwxrwx. 1 root root 16 24. Feb 09.29 /etc/pam.d/password-auth -> password-auth-ac lrwxrwxrwx. 1 root root 28 24. Feb 09.29 /etc/pam.d/system-auth -> system-auth-ac",
"~]# mv /etc/pam.d/system-auth /etc/pam.d/system-auth-ac ~]# mv /etc/pam.d/password-auth /etc/pam.d/password-auth-ac",
"~]# vi /etc/pam.d/system-auth-local",
"auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 auth include system-auth-ac auth [default=die] pam_faillock.so authfail silent audit deny=3 unlock_time=600 account required pam_faillock.so account include system-auth-ac password include system-auth-ac session include system-auth-ac",
"~]# vi /etc/pam.d/password-auth-local",
"auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 auth include password-auth-ac auth [default=die] pam_faillock.so authfail silent audit deny=3 unlock_time=600 account required pam_faillock.so account include password-auth-ac password include password-auth-ac session include password-auth-ac",
"~]# ln -sf /etc/pam.d/system-auth-local /etc/pam.d/system-auth ~]# ln -sf /etc/pam.d/password-auth-local /etc/pam.d/password-auth",
"~]# yum install kbd",
"vlock -a",
"SUBSYSTEM==\"block\",ATTRS{removable}==\"1\",RUN{program}=\"/sbin/blockdev --setro %N\"",
"~# udevadm trigger",
"~# udevadm control --reload"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/chap-Hardening_Your_System_with_Tools_and_Services |
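Once the 80-readonly-removables.rules file from Section 4.1.4 is in place and udev has re-applied its rules, it is worth confirming that a newly attached device was in fact switched to read-only. The following sketch shows one way to do that; /dev/sdb1 and /mnt are placeholders for whatever device node and mount point apply on your system.

```bash
# Sketch: confirm that the read-only udev rule took effect for a removable device.
# Replace /dev/sdb1 and /mnt with the actual device node and mount point (assumptions here).
blockdev --getro /dev/sdb1   # prints 1 when the block device has been set read-only
mount /dev/sdb1 /mnt         # the file system is typically mounted read-only automatically
touch /mnt/testfile          # expected to fail with a "Read-only file system" error
umount /mnt
```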
A.13. numad | A.13. numad numad is an automatic NUMA affinity management daemon. It monitors NUMA topology and resource usage within a system in order to dynamically improve NUMA resource allocation and management. Note that when numad is enabled, its behavior overrides the default behavior of automatic NUMA balancing. A.13.1. Using numad from the Command Line To use numad as an executable, just run: While numad runs, its activities are logged in /var/log/numad.log . It will run until stopped with the following command: Stopping numad does not remove the changes it has made to improve NUMA affinity. If system use changes significantly, running numad again will adjust affinity to improve performance under the new conditions. To restrict numad management to a specific process, start it with the following options. -p pid This option adds the specified pid to an explicit inclusion list. The process specified will not be managed until it meets the numad process significance threshold. -S 0 This sets the type of process scanning to 0 , which limits numad management to explicitly included processes. For further information about available numad options, refer to the numad man page: A.13.2. Using numad as a Service While numad runs as a service, it attempts to tune the system dynamically based on the current system workload. Its activities are logged in /var/log/numad.log . To start the service, run: To make the service persist across reboots, run: For further information about available numad options, refer to the numad man page: A.13.3. Pre-Placement Advice numad provides a pre-placement advice service that can be queried by various job management systems to provide assistance with the initial binding of CPU and memory resources for their processes. This pre-placement advice is available regardless of whether numad is running as an executable or a service. A.13.4. Using numad with KSM If KSM is in use on a NUMA system, change the value of the /sys/kernel/mm/ksm/merge_nodes parameter to 0 to avoid merging pages across NUMA nodes. Otherwise, KSM increases remote memory accesses as it merges pages across nodes. Furthermore, kernel memory accounting statistics can eventually contradict each other after large amounts of cross-node merging. As such, numad can become confused about the correct amounts and locations of available memory, after the KSM daemon merges many memory pages. KSM is beneficial only if you are overcommitting the memory on your system. If your system has sufficient free memory, you may achieve higher performance by turning off and disabling the KSM daemon. | [
"numad",
"numad -i 0",
"numad -S 0 -p pid",
"man numad",
"systemctl start numad.service",
"chkconfig numad on",
"man numad"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-numad |
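Section A.13.4 names the merge_nodes tunable but does not show the commands for changing it. The sketch below is one way to apply the recommendation on a running system; the path used is the one named in this guide, and on some kernels the same control is exposed as /sys/kernel/mm/ksm/merge_across_nodes instead. The value can usually only be changed while no pages are currently shared, which is why KSM is told to unmerge first.

```bash
# Sketch: prevent KSM from merging pages across NUMA nodes while numad manages affinity.
echo 2 > /sys/kernel/mm/ksm/run           # unmerge currently shared pages first
echo 0 > /sys/kernel/mm/ksm/merge_nodes   # disallow cross-node merging (path as named in this guide)
echo 1 > /sys/kernel/mm/ksm/run           # re-enable KSM only if memory overcommit still requires it
systemctl status numad.service            # confirm numad is still running as a service
```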
Chapter 2. Projects | Chapter 2. Projects 2.1. Working with projects A project allows a community of users to organize and manage their content in isolation from other communities. Note Projects starting with openshift- and kube- are default projects . These projects host cluster components that run as pods and other infrastructure components. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these projects using the oc adm new-project command. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 2.1.1. Creating a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to create a project in your cluster. 2.1.1.1. Creating a project by using the web console You can use the OpenShift Container Platform web console to create a project in your cluster. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- using the web console. Prerequisites Ensure that you have the appropriate roles and permissions to create projects, applications, and other workloads in OpenShift Container Platform. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Click Create Project : In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display name and Description details for the project. Click Create . The dashboard for your project is displayed. Optional: Select the Details tab to view the project details. Optional: If you have adequate permissions for a project, you can use the Project Access tab to provide or revoke admin, edit, and view privileges for the project. If you are using the Developer perspective: Click the Project menu and select Create Project : Figure 2.1. Create project In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display name and Description details for the project. Click Create . Optional: Use the left navigation panel to navigate to the Project view and see the dashboard for your project. Optional: In the project dashboard, select the Details tab to view the project details. Optional: If you have adequate permissions for a project, you can use the Project Access tab of the project dashboard to provide or revoke admin, edit, and view privileges for the project. Additional resources Customizing the available cluster roles using the web console 2.1.1.2. Creating a project by using the CLI If allowed by your cluster administrator, you can create a new project. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. 
As such, OpenShift Container Platform does not allow you to create Projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these projects using the oc adm new-project command. Procedure Run: USD oc new-project <project_name> \ --description="<description>" --display-name="<display_name>" For example: USD oc new-project hello-openshift \ --description="This is an example project" \ --display-name="Hello OpenShift" Note The number of projects you are allowed to create might be limited by the system administrator. After your limit is reached, you might have to delete an existing project in order to create a new one. 2.1.2. Viewing a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to view a project in your cluster. 2.1.2.1. Viewing a project by using the web console You can view the projects that you have access to by using the OpenShift Container Platform web console. Procedure If you are using the Administrator perspective: Navigate to Home Projects in the navigation menu. Select a project to view. The Overview tab includes a dashboard for your project. Select the Details tab to view the project details. Select the YAML tab to view and update the YAML configuration for the project resource. Select the Workloads tab to see workloads in the project. Select the RoleBindings tab to view and create role bindings for your project. If you are using the Developer perspective: Navigate to the Project page in the navigation menu. Select All Projects from the Project drop-down menu at the top of the screen to list all of the projects in your cluster. Select a project to view. The Overview tab includes a dashboard for your project. Select the Details tab to view the project details. If you have adequate permissions for a project, select the Project access tab view and update the privileges for the project. 2.1.2.2. Viewing a project using the CLI When viewing projects, you are restricted to seeing only the projects you have access to view based on the authorization policy. Procedure To view a list of projects, run: USD oc get projects You can change from the current project to a different project for CLI operations. The specified project is then used in all subsequent operations that manipulate project-scoped content: USD oc project <project_name> 2.1.3. Providing access permissions to your project using the Developer perspective You can use the Project view in the Developer perspective to grant or revoke access permissions to your project. Prerequisites You have created a project. Procedure To add users to your project and provide Admin , Edit , or View access to them: In the Developer perspective, navigate to the Project page. Select your project from the Project menu. Select the Project Access tab. Click Add access to add a new row of permissions to the default ones. Figure 2.2. Project permissions Enter the user name, click the Select a role drop-down list, and select an appropriate role. Click Save to add the new permissions. You can also use: The Select a role drop-down list, to modify the access permissions of an existing user. The Remove Access icon, to completely remove the access permissions of an existing user to the project. Note Advanced role-based access control is managed in the Roles and Roles Binding views in the Administrator perspective. 2.1.4. 
Customizing the available cluster roles using the web console In the Developer perspective of the web console, the Project Project access page enables a project administrator to grant roles to users in a project. By default, the available cluster roles that can be granted to users in a project are admin, edit, and view. As a cluster administrator, you can define which cluster roles are available in the Project access page for all projects cluster-wide. You can specify the available roles by customizing the spec.customization.projectAccess.availableClusterRoles object in the Console configuration resource. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Administration Cluster settings . Click the Configuration tab. From the Configuration resource list, select Console operator.openshift.io . Navigate to the YAML tab to view and edit the YAML code. In the YAML code under spec , customize the list of available cluster roles for project access. The following example specifies the default admin , edit , and view roles: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster # ... spec: customization: projectAccess: availableClusterRoles: - admin - edit - view Click Save to save the changes to the Console configuration resource. Verification In the Developer perspective, navigate to the Project page. Select a project from the Project menu. Select the Project access tab. Click the menu in the Role column and verify that the available roles match the configuration that you applied to the Console resource configuration. 2.1.5. Adding to a project You can add items to your project by using the +Add page in the Developer perspective. Prerequisites You have created a project. Procedure In the Developer perspective, navigate to the +Add page. Select your project from the Project menu. Click on an item on the +Add page and then follow the workflow. Note You can also use the search feature on the +Add page to find additional items to add to your project. Click the search field under Add at the top of the page and type the name of a component. 2.1.6. Checking the project status You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to view the status of your project. 2.1.6.1. Checking project status by using the web console You can review the status of your project by using the web console. Prerequisites You have created a project. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Select a project from the list. Review the project status in the Overview page. If you are using the Developer perspective: Navigate to the Project page. Select a project from the Project menu. Review the project status in the Overview page. 2.1.6.2. Checking project status by using the CLI You can review the status of your project by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have created a project. Procedure Switch to your project: USD oc project <project_name> 1 1 Replace <project_name> with the name of your project. Obtain a high-level overview of the project: USD oc status 2.1.7. Deleting a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to delete a project. When you delete a project, the server updates the project status to Terminating from Active.
Then, the server clears all content from a project that is in the Terminating state before finally removing the project. While a project is in Terminating status, you cannot add new content to the project. Projects can be deleted from the CLI or the web console. 2.1.7.1. Deleting a project by using the web console You can delete a project by using the web console. Prerequisites You have created a project. You have the required permissions to delete the project. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Select a project from the list. Click the Actions drop-down menu for the project and select Delete Project . Note The Delete Project option is not available if you do not have the required permissions to delete the project. In the Delete Project? pane, confirm the deletion by entering the name of your project. Click Delete . If you are using the Developer perspective: Navigate to the Project page. Select the project that you want to delete from the Project menu. Click the Actions drop-down menu for the project and select Delete Project . Note If you do not have the required permissions to delete the project, the Delete Project option is not available. In the Delete Project? pane, confirm the deletion by entering the name of your project. Click Delete . 2.1.7.2. Deleting a project by using the CLI You can delete a project by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have created a project. You have the required permissions to delete the project. Procedure Delete your project: USD oc delete project <project_name> 1 1 Replace <project_name> with the name of the project that you want to delete. 2.2. Creating a project as another user Impersonation allows you to create a project as a different user. 2.2.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 2.2.2. Impersonating a user when you create a project You can impersonate a different user when you create a project request. Because system:authenticated:oauth is the only bootstrap group that can create project requests, you must impersonate that group. Procedure To create a project request on behalf of a different user: USD oc new-project <project> --as=<user> \ --as-group=system:authenticated --as-group=system:authenticated:oauth 2.3. Configuring project creation In OpenShift Container Platform, projects are used to group and isolate related objects. When a request is made to create a new project using the web console or oc new-project command, an endpoint in OpenShift Container Platform is used to provision the project according to a template, which can be customized. As a cluster administrator, you can allow and configure how developers and service accounts can create, or self-provision , their own projects. 2.3.1. About project creation The OpenShift Container Platform API server automatically provisions new projects based on the project template that is identified by the projectRequestTemplate parameter in the cluster's project configuration resource. If the parameter is not defined, the API server creates a default template that creates a project with the requested name, and assigns the requesting user to the admin role for that project. When a project request is submitted, the API substitutes the following parameters into the template: Table 2.1. 
Default project template parameters Parameter Description PROJECT_NAME The name of the project. Required. PROJECT_DISPLAYNAME The display name of the project. May be empty. PROJECT_DESCRIPTION The description of the project. May be empty. PROJECT_ADMIN_USER The user name of the administrating user. PROJECT_REQUESTING_USER The user name of the requesting user. Access to the API is granted to developers with the self-provisioner role and the self-provisioners cluster role binding. This role is available to all authenticated developers by default. 2.3.2. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 2.3.3. Disabling project self-provisioning You can prevent an authenticated user group from self-provisioning new projects. Procedure Log in as a user with cluster-admin privileges. View the self-provisioners cluster role binding usage by running the following command: USD oc describe clusterrolebinding.rbac self-provisioners Example output Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth Review the subjects in the self-provisioners section. Remove the self-provisioner cluster role from the group system:authenticated:oauth . If the self-provisioners cluster role binding binds only the self-provisioner role to the system:authenticated:oauth group, run the following command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}' If the self-provisioners cluster role binding binds the self-provisioner role to more users, groups, or service accounts than the system:authenticated:oauth group, run the following command: USD oc adm policy \ remove-cluster-role-from-group self-provisioner \ system:authenticated:oauth Edit the self-provisioners cluster role binding to prevent automatic updates to the role. Automatic updates reset the cluster roles to the default state. 
To update the role binding using the CLI: Run the following command: USD oc edit clusterrolebinding.rbac self-provisioners In the displayed role binding, set the rbac.authorization.kubernetes.io/autoupdate parameter value to false , as shown in the following example: apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "false" # ... To update the role binding by using a single command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }' Log in as an authenticated user and verify that it can no longer self-provision a project: USD oc new-project test Example output Error from server (Forbidden): You may not request a new project via this API. Consider customizing this project request message to provide more helpful instructions specific to your organization. 2.3.4. Customizing the project request message When a developer or a service account that is unable to self-provision projects makes a project creation request using the web console or CLI, the following error message is returned by default: You may not request a new project via this API. Cluster administrators can customize this message. Consider updating it to provide further instructions on how to request a new project specific to your organization. For example: To request a project, contact your system administrator at [email protected] . To request a new project, fill out the project request form located at https://internal.example.com/openshift-project-request . To customize the project request message: Procedure Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Log in as a user with cluster-admin privileges. Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestMessage parameter and set the value to your custom message: Project configuration resource with custom project request message apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestMessage: <message_string> # ... For example: apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]. # ... After you save your changes, attempt to create a new project as a developer or service account that is unable to self-provision projects to verify that your changes were successfully applied. | [
"oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"",
"oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"",
"oc get projects",
"oc project <project_name>",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: projectAccess: availableClusterRoles: - admin - edit - view",
"oc project <project_name> 1",
"oc status",
"oc delete project <project_name> 1",
"oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc describe clusterrolebinding.rbac self-provisioners",
"Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth",
"oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'",
"oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth",
"oc edit clusterrolebinding.rbac self-provisioners",
"apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"",
"oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'",
"oc new-project test",
"Error from server (Forbidden): You may not request a new project via this API.",
"You may not request a new project via this API.",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]."
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/building_applications/projects |
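Sections 2.1.3 and 2.1.4 manage project access through the web console; the same role assignments can also be made from the CLI, which is often more convenient for scripting. The sketch below is a minimal example; the project name and user names are placeholders, not values required by the platform.

```bash
# Sketch: grant and revoke project roles from the CLI instead of the Project access tab.
# "myproject", "alice", and "bob" are placeholder values.
oc adm policy add-role-to-user admin alice -n myproject    # full project administration
oc adm policy add-role-to-user edit bob -n myproject       # create and modify workloads
oc get rolebindings -n myproject                           # review the resulting bindings
oc adm policy remove-role-from-user edit bob -n myproject  # revoke the access again
```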
function::inet_get_local_port | function::inet_get_local_port Name function::inet_get_local_port - Provide local port number for a kernel socket Synopsis Arguments sock pointer to the kernel socket | [
"inet_get_local_port:long(sock:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-inet-get-local-port |
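The tapset entry above lists only the signature, so a small usage sketch may help. The probe point below follows the common pattern of probing the accept path and passing the returned socket pointer to the tapset function; treat the exact probe point as an assumption that may need adjusting for your kernel version.

```bash
# Sketch: print the local port of every accepted TCP connection using inet_get_local_port().
# The probed function and its availability depend on the running kernel; adjust as needed.
stap -e '
probe kernel.function("inet_csk_accept").return {
  sock = $return
  if (sock != 0)
    printf("accepted connection on local port %d\n", inet_get_local_port(sock))
}
'
```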
Installing Red Hat Developer Hub on OpenShift Dedicated on Google Cloud Platform | Installing Red Hat Developer Hub on OpenShift Dedicated on Google Cloud Platform Red Hat Developer Hub 1.3 Red Hat Customer Content Services | [
"global: auth: backend: enabled: true clusterRouterBase: apps.<clusterName>.com # other Red Hat Developer Hub Helm Chart configurations"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html-single/installing_red_hat_developer_hub_on_openshift_dedicated_on_google_cloud_platform/index |
Chapter 13. Configuring trusted certificates for mTLS | Chapter 13. Configuring trusted certificates for mTLS In order to properly validate client certificates and enable certain authentication methods like two-way TLS or mTLS, you can set a trust store with all the certificates (and certificate chain) the server should trust. There are a number of capabilities that rely on this trust store to properly authenticate clients using certificates, such as Mutual TLS and X.509 Authentication. 13.1. Enabling mTLS Authentication using mTLS is disabled by default. To enable mTLS certificate handling when Red Hat build of Keycloak is the server and needs to validate certificates from requests made to Red Hat build of Keycloak endpoints, put the appropriate certificates in a truststore and use the following command to enable mTLS: bin/kc.[sh|bat] start --https-client-auth=<none|request|required> Using the value required sets up Red Hat build of Keycloak to always ask for certificates and fail if no certificate is provided in a request. By setting the value to request , Red Hat build of Keycloak will also accept requests without a certificate and only validate the correctness of a certificate if it exists. Warning The mTLS configuration and the truststore are shared by all Realms. It is not possible to configure different truststores for different Realms. Note Management interface properties are inherited from the main HTTP server, including mTLS settings. This means that when mTLS is set, it is also enabled for the management interface. To override the behavior, use the https-management-client-auth property. 13.2. Using a dedicated truststore for mTLS By default, Red Hat build of Keycloak uses the System Truststore to validate certificates. See Configuring trusted certificates for details. If you need to use a dedicated truststore for mTLS, you can configure the location of this truststore by running the following command: bin/kc.[sh|bat] start --https-trust-store-file=/path/to/file --https-trust-store-password=<value> 13.3. Additional resources 13.3.1. Using mTLS for outgoing HTTP requests Be aware that this is the basic certificate configuration for mTLS use cases where Red Hat build of Keycloak acts as the server. When Red Hat build of Keycloak acts as a client instead, e.g. when Red Hat build of Keycloak tries to get a token from a token endpoint of a brokered identity provider that is secured by mTLS, you need to set up the HttpClient to provide the right certificates in the keystore for the outgoing request. To configure mTLS in these scenarios, see Configuring outgoing HTTP requests . 13.3.2. Configuring X.509 Authentication For more information on how to configure X.509 Authentication, see the X.509 Client Certificate User Authentication section . 13.4. Relevant options Value https-client-auth 🛠 Configures the server to require/request client authentication. CLI: --https-client-auth Env: KC_HTTPS_CLIENT_AUTH none (default), request , required https-trust-store-file The trust store which holds the certificate information of the certificates to trust. CLI: --https-trust-store-file Env: KC_HTTPS_TRUST_STORE_FILE https-trust-store-password The password of the trust store file. CLI: --https-trust-store-password Env: KC_HTTPS_TRUST_STORE_PASSWORD https-trust-store-type The type of the trust store file. If not given, the type is automatically detected based on the file extension. If fips-mode is set to strict and no value is set, it defaults to BCFKS .
CLI: --https-trust-store-type Env: KC_HTTPS_TRUST_STORE_TYPE https-management-client-auth 🛠 Configures the management interface to require/request client authentication. If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --https-management-client-auth Env: KC_HTTPS_MANAGEMENT_CLIENT_AUTH none (default), request , required | [
"bin/kc.[sh|bat] start --https-client-auth=<none|request|required>",
"bin/kc.[sh|bat] start --https-trust-store-file=/path/to/file --https-trust-store-password=<value>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_configuration_guide/mutual-tls- |
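Putting the two commands from this chapter together, the sequence below sketches how a dedicated truststore might be created from a client CA certificate and then used to require mTLS. The file names, alias, password, and paths are placeholders; keytool is used here simply because it ships with the JDK that the server already requires.

```bash
# Sketch: build a PKCS12 truststore from a client CA certificate and require mTLS.
# ca.crt, the alias, the password, and the paths are placeholder values.
keytool -importcert -trustcacerts -noprompt \
  -alias client-ca \
  -file ca.crt \
  -keystore /opt/keycloak/conf/truststore.p12 \
  -storetype PKCS12 \
  -storepass changeit

bin/kc.sh start \
  --https-client-auth=required \
  --https-trust-store-file=/opt/keycloak/conf/truststore.p12 \
  --https-trust-store-password=changeit
```

With this configuration, only clients presenting a certificate issued by the imported CA can complete the TLS handshake against the server endpoints.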
Chapter 30. VDO Integration | Chapter 30. VDO Integration 30.1. Theoretical Overview of VDO Virtual Data Optimizer (VDO) is a block virtualization technology that allows you to easily create compressed and deduplicated pools of block storage. Deduplication is a technique for reducing the consumption of storage resources by eliminating multiple copies of duplicate blocks. Instead of writing the same data more than once, VDO detects each duplicate block and records it as a reference to the original block. VDO maintains a mapping from logical block addresses, which are used by the storage layer above VDO, to physical block addresses, which are used by the storage layer under VDO. After deduplication, multiple logical block addresses may be mapped to the same physical block address; these are called shared blocks . Block sharing is invisible to users of the storage, who read and write blocks as they would if VDO were not present. When a shared block is overwritten, a new physical block is allocated for storing the new block data to ensure that other logical block addresses that are mapped to the shared physical block are not modified. Compression is a data-reduction technique that works well with file formats that do not necessarily exhibit block-level redundancy, such as log files and databases. See Section 30.4.8, "Using Compression" for more detail. The VDO solution consists of the following components: kvdo A kernel module that loads into the Linux Device Mapper layer to provide a deduplicated, compressed, and thinly provisioned block storage volume uds A kernel module that communicates with the Universal Deduplication Service (UDS) index on the volume and analyzes data for duplicates. Command line tools For configuring and managing optimized storage. 30.1.1. The UDS Kernel Module ( uds ) The UDS index provides the foundation of the VDO product. For each new piece of data, it quickly determines if that piece is identical to any previously stored piece of data. If the index finds a match, the storage system can then internally reference the existing item to avoid storing the same information more than once. The UDS index runs inside the kernel as the uds kernel module. 30.1.2. The VDO Kernel Module ( kvdo ) The kvdo Linux kernel module provides block-layer deduplication services within the Linux Device Mapper layer. In the Linux kernel, Device Mapper serves as a generic framework for managing pools of block storage, allowing the insertion of block-processing modules into the storage stack between the kernel's block interface and the actual storage device drivers. The kvdo module is exposed as a block device that can be accessed directly for block storage or presented through one of the many available Linux file systems, such as XFS or ext4. When kvdo receives a request to read a (logical) block of data from a VDO volume, it maps the requested logical block to the underlying physical block and then reads and returns the requested data. When kvdo receives a request to write a block of data to a VDO volume, it first checks whether it is a DISCARD or TRIM request or whether the data is uniformly zero. If either of these conditions holds, kvdo updates its block map and acknowledges the request. Otherwise, a physical block is allocated for use by the request. Overview of VDO Write Policies If the kvdo module is operating in synchronous mode: It temporarily writes the data in the request to the allocated block and then acknowledges the request. 
Once the acknowledgment is complete, an attempt is made to deduplicate the block by computing a MurmurHash-3 signature of the block data, which is sent to the VDO index. If the VDO index contains an entry for a block with the same signature, kvdo reads the indicated block and does a byte-by-byte comparison of the two blocks to verify that they are identical. If they are indeed identical, then kvdo updates its block map so that the logical block points to the corresponding physical block and releases the allocated physical block. If the VDO index did not contain an entry for the signature of the block being written, or the indicated block does not actually contain the same data, kvdo updates its block map to make the temporary physical block permanent. If kvdo is operating in asynchronous mode: Instead of writing the data, it will immediately acknowledge the request. It will then attempt to deduplicate the block in the same manner as described above. If the block turns out to be a duplicate, kvdo will update its block map and release the allocated block. Otherwise, it will write the data in the request to the allocated block and update the block map to make the physical block permanent. 30.1.3. VDO Volume VDO uses a block device as a backing store, which can include an aggregation of physical storage consisting of one or more disks, partitions, or even flat files. When a VDO volume is created by a storage management tool, VDO reserves space from the volume for both a UDS index and the VDO volume, which interact together to provide deduplicated block storage to users and applications. Figure 30.1, "VDO Disk Organization" illustrates how these pieces fit together. Figure 30.1. VDO Disk Organization Slabs The physical storage of the VDO volume is divided into a number of slabs , each of which is a contiguous region of the physical space. All of the slabs for a given volume will be of the same size, which may be any power of 2 multiple of 128 MB up to 32 GB. The default slab size is 2 GB in order to facilitate evaluating VDO on smaller test systems. A single VDO volume may have up to 8192 slabs. Therefore, in the default configuration with 2 GB slabs, the maximum allowed physical storage is 16 TB. When using 32 GB slabs, the maximum allowed physical storage is 256 TB. At least one entire slab is reserved by VDO for metadata, and therefore cannot be used for storing user data. Slab size has no effect on the performance of the VDO volume. Table 30.1. Recommended VDO Slab Sizes by Physical Volume Size Physical Volume Size Recommended Slab Size 10-99 GB 1 GB 100 GB - 1 TB 2 GB 2-256 TB 32 GB The size of a slab can be controlled by providing the --vdoSlabSize= megabytes option to the vdo create command. Physical Size and Available Physical Size Both physical size and available physical size describe the amount of disk space on the block device that VDO can utilize: Physical size is the same size as the underlying block device. VDO uses this storage for: User data, which might be deduplicated and compressed VDO metadata, such as the UDS index Available physical size is the portion of the physical size that VDO is able to use for user data. It is equivalent to the physical size minus the size of the metadata, minus the remainder after dividing the volume into slabs by the given slab size. For examples of how much storage VDO metadata require on block devices of different sizes, see Section 30.2.3, "Examples of VDO System Requirements by Physical Volume Size" . 
Logical Size If the --vdoLogicalSize option is not specified, the logical volume size defaults to the available physical volume size. Note that, in Figure 30.1, "VDO Disk Organization" , the VDO deduplicated storage target sits completely on top of the block device, meaning the physical size of the VDO volume is the same size as the underlying block device. VDO currently supports any logical size up to 254 times the size of the physical volume with an absolute maximum logical size of 4PB. 30.1.4. Command Line Tools VDO includes the following command line tools for configuration and management: vdo Creates, configures, and controls VDO volumes vdostats Provides utilization and performance statistics | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/VDO-integration |
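To make the concepts above concrete, the following sketch shows a typical sequence for creating and checking a VDO volume; the device path, volume name, mount point, and sizes are placeholder assumptions and should be adapted to your storage:

# Create a VDO volume on /dev/sdb with a 10 TB logical size and 32 GB slabs
vdo create --name=vdo1 --device=/dev/sdb --vdoLogicalSize=10T --vdoSlabSize=32G

# Create a file system without discarding the already-empty blocks, then mount it
mkfs.xfs -K /dev/mapper/vdo1
mkdir -p /mnt/vdo1
mount /dev/mapper/vdo1 /mnt/vdo1

# Report physical usage and space savings
vdostats --human-readable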
Chapter 64. region | Chapter 64. region This chapter describes the commands under the region command. 64.1. region create Create new region Usage: Table 64.1. Positional Arguments Value Summary <region-id> New region id Table 64.2. Optional Arguments Value Summary -h, --help Show this help message and exit --parent-region <region-id> Parent region id --description <description> New region description Table 64.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 64.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 64.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 64.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.2. region delete Delete region(s) Usage: Table 64.7. Positional Arguments Value Summary <region-id> Region id(s) to delete Table 64.8. Optional Arguments Value Summary -h, --help Show this help message and exit 64.3. region list List regions Usage: Table 64.9. Optional Arguments Value Summary -h, --help Show this help message and exit --parent-region <region-id> Filter by parent region id Table 64.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 64.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 64.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 64.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.4. region set Set region properties Usage: Table 64.14. Positional Arguments Value Summary <region-id> Region to modify Table 64.15. Optional Arguments Value Summary -h, --help Show this help message and exit --parent-region <region-id> New parent region id --description <description> New region description 64.5. region show Display region details Usage: Table 64.16. Positional Arguments Value Summary <region-id> Region to display Table 64.17. Optional Arguments Value Summary -h, --help Show this help message and exit Table 64.18. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 64.19. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 64.20. 
Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 64.21. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack region create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--parent-region <region-id>] [--description <description>] <region-id>",
"openstack region delete [-h] <region-id> [<region-id> ...]",
"openstack region list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--parent-region <region-id>]",
"openstack region set [-h] [--parent-region <region-id>] [--description <description>] <region-id>",
"openstack region show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <region-id>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/region |
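The following sketch strings several of the subcommands above together into one workflow; the region IDs and descriptions are placeholder assumptions:

# Create a top-level region and a child region underneath it
openstack region create --description "Primary data center" RegionOne
openstack region create --parent-region RegionOne --description "Edge site" RegionTwo

# List only the child regions of RegionOne, then inspect one of them as JSON
openstack region list --parent-region RegionOne
openstack region show -f json RegionTwo

# Remove the child region when it is no longer needed
openstack region delete RegionTwo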
Chapter 2. Using Ansible roles to automate repetitive tasks on clients | Chapter 2. Using Ansible roles to automate repetitive tasks on clients 2.1. Assigning Ansible roles to an existing host You can use Ansible roles for remote management of Satellite clients. Prerequisites Ensure that you have configured and imported Ansible roles. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the host and click Edit . On the Ansible Roles tab, select the role that you want to add from the Available Ansible Roles list. Click the + icon to add the role to the host. You can add more than one role. Click Submit . After you assign Ansible roles to hosts, you can use Ansible for remote execution. For more information, see Section 4.13, "Distributing SSH keys for remote execution" . Overriding parameter variables On the Parameters tab, click Add Parameter to add any parameter variables that you want to pass to job templates at run time. This includes all Ansible Playbook parameters and host parameters that you want to associate with the host. To use a parameter variable with an Ansible job template, you must add a Host Parameter . 2.2. Removing Ansible roles from a host Use the following procedure to remove Ansible roles from a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the host and click Edit . Select the Ansible Roles tab. In the Assigned Ansible Roles area, click the - icon to remove the role from the host. Repeat to remove more roles. Click Submit . 2.3. Changing the order of Ansible roles Use the following procedure to change the order of Ansible roles applied to a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select a host. Select the Ansible Roles tab. In the Assigned Ansible Roles area, you can change the order of the roles by dragging and dropping the roles into the preferred position. Click Submit to save the order of the Ansible roles. 2.4. Running Ansible roles on a host You can run Ansible roles on a host through the Satellite web UI. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . You must have assigned the Ansible roles to the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox of the host that contains the Ansible role you want to run. From the Select Action list, select Run all Ansible roles . You can view the status of your Ansible job on the Run Ansible roles page. To rerun a job, click Rerun . 2.5. Assigning Ansible roles to a host group You can use Ansible roles for remote management of Satellite clients. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . Procedure In the Satellite web UI, navigate to Configure > Host Groups . Click the host group name to which you want to assign an Ansible role. On the Ansible Roles tab, select the role that you want to add from the Available Ansible Roles list. Click the + icon to add the role to the host group. You can add more than one role. Click Submit . 2.6. Running Ansible roles on a host group You can run Ansible roles on a host group through the Satellite web UI. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . You must have assigned the Ansible roles to the host group. 
You must have at least one host in your host group. Procedure In the Satellite web UI, navigate to Configure > Host Groups . From the list in the Actions column for the host group, select Run all Ansible roles . You can view the status of your Ansible job on the Run Ansible roles page. Click Rerun to rerun a job. 2.7. Running Ansible roles in check mode You can run Ansible roles in check mode through the Satellite web UI. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . You must have assigned the Ansible roles to the host group. You must have at least one host in your host group. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Edit for the host you want to enable check mode for. In the Parameters tab, ensure that the host has a parameter named ansible_roles_check_mode with type boolean set to true . Click Submit . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_ansible_integration/using_ansible_roles_to_automate_repetitive_tasks_on_clients_ansible |
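The procedures above use the Satellite web UI. As a rough command-line equivalent, Ansible roles assigned to a host can also be executed through the remote execution feature with Hammer. This is only a sketch: the job template name and the host name are assumptions and may differ in your Satellite installation:

# Trigger the "Ansible Roles - Ansible Default" job template against one host
hammer job-invocation create \
  --job-template "Ansible Roles - Ansible Default" \
  --search-query "name = client.example.com"

# Check the result of the invocation
hammer job-invocation list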
Chapter 6. Adding Storage for Red Hat Virtualization | Chapter 6. Adding Storage for Red Hat Virtualization Add storage as data domains in the new environment. A Red Hat Virtualization environment must have at least one data domain, but adding more is recommended. Add the storage you prepared earlier: NFS iSCSI Fibre Channel (FCP) POSIX-compliant file system Local storage Red Hat Gluster Storage 6.1. Adding NFS Storage This procedure shows you how to attach existing NFS storage to your Red Hat Virtualization environment as a data domain. If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list. Procedure In the Administration Portal, click Storage Domains . Click New Domain . Enter a Name for the storage domain. Accept the default values for the Data Center , Domain Function , Storage Type , Format , and Host lists. Enter the Export Path to be used for the storage domain. The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data . Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . The new NFS data domain has a status of Locked until the disk is prepared. The data domain is then automatically attached to the data center. 6.2. Adding iSCSI Storage This procedure shows you how to attach existing iSCSI storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the new storage domain. Select a Data Center from the drop-down list. Select Data as the Domain Function and iSCSI as the Storage Type . Select an active host as the Host . Important Communication to the storage domain is from the selected host and not directly from the Manager. Therefore, all hosts must have access to the storage device before the storage domain can be configured. The Manager can map iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the step. Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment. Note LUNs used externally to the environment are also displayed. You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs. Enter the FQDN or IP address of the iSCSI host in the Address field. Enter the port with which to connect to the host when browsing for targets in the Port field. The default is 3260 . 
If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password . Note You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information. Click Discover . Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets. Important If more than one path access is required, you must discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported. Click the + button next to the desired target. This expands the entry and displays all unused LUNs attached to the target. Select the check box for each LUN that you are using to create the storage domain. Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding. If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond . 6.3. Adding FCP Storage This procedure shows you how to attach existing FCP storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the storage domain. Select an FCP Data Center from the drop-down list. If you do not yet have an appropriate FCP data center, select (none) . Select the Domain Function and the Storage Type from the drop-down lists. The storage domain types that are not compatible with the chosen data center are not available. Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center's SPM host. Important All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured. The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. 
If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . The new FCP data domain remains in a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center. 6.4. Adding POSIX-compliant File System Storage This procedure shows you how to attach existing POSIX-compliant file system storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name for the storage domain. Select the Data Center to be associated with the storage domain. The data center selected must be of type POSIX (POSIX compliant FS) . Alternatively, select (none) . Select Data from the Domain Function drop-down list, and POSIX compliant FS from the Storage Type drop-down list. If applicable, select the Format from the drop-down menu. Select a host from the Host drop-down list. Enter the Path to the POSIX file system, as you would normally provide it to the mount command. Enter the VFS Type , as you would normally provide it to the mount command using the -t argument. See man mount for a list of valid VFS types. Enter additional Mount Options , as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value in the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value in the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . 6.5. Adding Local Storage Adding local storage to a host places the host in a new data center and cluster. The local storage configuration window combines the creation of a data center, a cluster, and storage into a single process. Procedure Click Compute Hosts and select the host. Click Management Maintenance and click OK . Click Management Configure Local Storage . Click the Edit buttons to the Data Center , Cluster , and Storage fields to configure and name the local storage domain. Set the path to your local storage in the text entry field. If applicable, click the Optimization tab to configure the memory optimization policy for the new local storage cluster. 
Click OK . Your host comes online in a data center of its own. 6.6. Adding Red Hat Gluster Storage To use Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see https://access.redhat.com/articles/2356261 . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/adding_storage_domains_to_rhv_sm_remotedb_deploy |
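The steps above use the Administration Portal; the same NFS data domain can also be created through the REST API. The following curl sketch is illustrative only: the Manager URL, credentials, host name, and export path are placeholder assumptions, and the XML is a simplified version of the storage domain representation:

# Create an NFS data domain through the RHV Manager REST API
curl -k -u admin@internal:password \
  -H "Content-Type: application/xml" \
  -X POST https://manager.example.com/ovirt-engine/api/storagedomains \
  -d '<storage_domain>
        <name>nfs_data</name>
        <type>data</type>
        <storage>
          <type>nfs</type>
          <address>nfs.example.com</address>
          <path>/exports/data</path>
        </storage>
        <host><name>host1.example.com</name></host>
      </storage_domain>'

The new domain still has to be attached to a data center, which is a separate request against that data center's storagedomains sub-collection.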
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . AMQ Clients is a suite of AMQP 1.0 and JMS clients, adapters, and libraries. It includes JMS 2.0 support and new, event-driven APIs to enable integration into existing applications. AMQ Clients is part of Red Hat AMQ. For more information, see Introducing Red Hat AMQ 7 . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/amq_clients_overview/making-open-source-more-inclusive |
Getting started | Getting started Red Hat OpenShift Service on AWS 4 Setting up clusters and accounts Red Hat OpenShift Documentation Team | [
"aws sts get-caller-identity --output text",
"<aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id>",
"tar xvf rosa-linux.tar.gz",
"sudo mv rosa /usr/local/bin/rosa",
"rosa version",
"1.2.47 Your ROSA CLI is up to date.",
"rosa login",
"To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here:",
"? Copy the token and paste it here: ******************* [full token length omitted]",
"rosa whoami",
"AWS Account ID: <aws_account_number> AWS Default Region: us-east-1 AWS ARN: arn:aws:iam::<aws_account_number>:user/<aws_user_name> OCM API: https://api.openshift.com OCM Account ID: <red_hat_account_id> OCM Account Name: Your Name OCM Account Username: [email protected] OCM Account Email: [email protected] OCM Organization ID: <org_id> OCM Organization Name: Your organization OCM Organization External ID: <external_org_id>",
"rosa download openshift-client",
"tar xvf openshift-client-linux.tar.gz",
"sudo mv oc /usr/local/bin/oc",
"rosa verify openshift-client",
"I: Verifying whether OpenShift command-line tool is available I: Current OpenShift Client Version: 4.17.3",
"rosa create ocm-role",
"rosa create user-role",
"rosa create account-roles",
"rosa create admin --cluster=<cluster_name> 1",
"W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster '<cluster_name>'. I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user. I: To login, run the following command: oc login https://api.example-cluster.wxyz.p1.openshiftapps.com:6443 --username cluster-admin --password d7Rca-Ba4jy-YeXhs-WU42J I: It may take up to a minute for the account to become active.",
"rosa create idp --cluster=<cluster_name> --interactive 1",
"I: Interactive mode enabled. Any optional fields can be left empty and a default will be selected. ? Type of identity provider: github ? Identity provider name: github-1 ? Restrict to members of: organizations ? GitHub organizations: <github_org_name> 1 ? To use GitHub as an identity provider, you must first register the application: - Open the following URL: https://github.com/organizations/<github_org_name>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.<cluster_name>/<random_string>.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=<cluster_name>&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.<cluster_name>/<random_string>.p1.openshiftapps.com - Click on 'Register application'",
"? Client ID: <github_client_id> 1 ? Client Secret: [? for help] <github_client_secret> 2 ? GitHub Enterprise Hostname (optional): ? Mapping method: claim 3 I: Configuring IDP for cluster '<cluster_name>' I: Identity Provider 'github-1' has been created. It will take up to 1 minute for this configuration to be enabled. To add cluster administrators, see 'rosa grant user --help'. To login into the console, open https://console-openshift-console.apps.<cluster_name>.<random_string>.p1.openshiftapps.com and click on github-1.",
"rosa list idps --cluster=<cluster_name>",
"NAME TYPE AUTH URL github-1 GitHub https://oauth-openshift.apps.<cluster_name>.<random_string>.p1.openshiftapps.com/oauth2callback/github-1",
"rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1",
"I: Granted role 'cluster-admins' to user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"ID GROUPS <idp_user_name> cluster-admins",
"rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>",
"I: Granted role 'dedicated-admins' to user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"ID GROUPS <idp_user_name> dedicated-admins",
"rosa describe cluster -c <cluster_name> | grep Console 1",
"Console URL: https://console-openshift-console.apps.example-cluster.wxyz.p1.openshiftapps.com",
"https://nodejs-<project>.<cluster_name>.<hash>.<region>.openshiftapps.com/",
"Welcome to your Node.js application on OpenShift",
"rosa revoke user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1",
"? Are you sure you want to revoke role cluster-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'cluster-admins' from user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"W: There are no users configured for cluster '<cluster_name>'",
"rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>",
"? Are you sure you want to revoke role dedicated-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'dedicated-admins' from user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"W: There are no users configured for cluster '<cluster_name>'",
"rosa delete cluster --cluster=<cluster_name> --watch",
"rosa delete oidc-provider -c <cluster_id> --mode auto 1",
"rosa delete operator-roles -c <cluster_id> --mode auto 1",
"rosa delete account-roles --prefix <prefix> --mode auto 1",
"aws sts get-caller-identity --output text",
"<aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id>",
"tar xvf rosa-linux.tar.gz",
"sudo mv rosa /usr/local/bin/rosa",
"rosa version",
"1.2.47 Your ROSA CLI is up to date.",
"rosa login",
"To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here:",
"? Copy the token and paste it here: ******************* [full token length omitted]",
"rosa whoami",
"AWS Account ID: <aws_account_number> AWS Default Region: us-east-1 AWS ARN: arn:aws:iam::<aws_account_number>:user/<aws_user_name> OCM API: https://api.openshift.com OCM Account ID: <red_hat_account_id> OCM Account Name: Your Name OCM Account Username: [email protected] OCM Account Email: [email protected] OCM Organization ID: <org_id> OCM Organization Name: Your organization OCM Organization External ID: <external_org_id>",
"rosa download openshift-client",
"tar xvf openshift-client-linux.tar.gz",
"sudo mv oc /usr/local/bin/oc",
"rosa verify openshift-client",
"I: Verifying whether OpenShift command-line tool is available I: Current OpenShift Client Version: 4.17.3",
"rosa create admin --cluster=<cluster_name> 1",
"W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster '<cluster_name>'. I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user. I: To login, run the following command: oc login https://api.example-cluster.wxyz.p1.openshiftapps.com:6443 --username cluster-admin --password d7Rca-Ba4jy-YeXhs-WU42J I: It may take up to a minute for the account to become active.",
"oc login <api_url> --username cluster-admin --password <cluster_admin_password> 1",
"oc whoami",
"cluster-admin",
"rosa create idp --cluster=<cluster_name> --interactive 1",
"I: Interactive mode enabled. Any optional fields can be left empty and a default will be selected. ? Type of identity provider: github ? Identity provider name: github-1 ? Restrict to members of: organizations ? GitHub organizations: <github_org_name> 1 ? To use GitHub as an identity provider, you must first register the application: - Open the following URL: https://github.com/organizations/<github_org_name>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.<cluster_name>/<random_string>.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=<cluster_name>&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.<cluster_name>/<random_string>.p1.openshiftapps.com - Click on 'Register application'",
"? Client ID: <github_client_id> 1 ? Client Secret: [? for help] <github_client_secret> 2 ? GitHub Enterprise Hostname (optional): ? Mapping method: claim 3 I: Configuring IDP for cluster '<cluster_name>' I: Identity Provider 'github-1' has been created. It will take up to 1 minute for this configuration to be enabled. To add cluster administrators, see 'rosa grant user --help'. To login into the console, open https://console-openshift-console.apps.<cluster_name>.<random_string>.p1.openshiftapps.com and click on github-1.",
"rosa list idps --cluster=<cluster_name>",
"NAME TYPE AUTH URL github-1 GitHub https://oauth-openshift.apps.<cluster_name>.<random_string>.p1.openshiftapps.com/oauth2callback/github-1",
"rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1",
"I: Granted role 'cluster-admins' to user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"ID GROUPS <idp_user_name> cluster-admins",
"rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>",
"I: Granted role 'dedicated-admins' to user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"ID GROUPS <idp_user_name> dedicated-admins",
"rosa describe cluster -c <cluster_name> | grep Console 1",
"Console URL: https://console-openshift-console.apps.example-cluster.wxyz.p1.openshiftapps.com",
"https://nodejs-<project>.<cluster_name>.<hash>.<region>.openshiftapps.com/",
"Welcome to your Node.js application on OpenShift",
"rosa revoke user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1",
"? Are you sure you want to revoke role cluster-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'cluster-admins' from user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"W: There are no users configured for cluster '<cluster_name>'",
"rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>",
"? Are you sure you want to revoke role dedicated-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'dedicated-admins' from user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"W: There are no users configured for cluster '<cluster_name>'",
"rosa delete cluster --cluster=<cluster_name> --watch",
"rosa delete oidc-provider -c <cluster_id> --mode auto 1",
"rosa delete operator-roles -c <cluster_id> --mode auto 1",
"rosa delete account-roles --prefix <prefix> --mode auto 1"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/getting_started/index |
Chapter 2. Understanding AMQ Broker | Chapter 2. Understanding AMQ Broker AMQ Broker enables you to loosely couple heterogeneous systems together, while providing reliability, transactions, and many other features. Before using AMQ Broker, you should understand the capabilities it offers. 2.1. Broker instances In AMQ Broker, the installed AMQ Broker software serves as a "home" for one or more broker instances . This architecture provides several benefits, such as: You can create as many broker instances as you require from a single AMQ Broker installation. The AMQ Broker installation contains the necessary binaries and resources that each broker instance needs to run. These resources are then shared between the broker instances. When upgrading to a new version of AMQ Broker, you only need to update the software once, even if you are running multiple broker instances on that host. You can think of a broker instance as a message broker. Each broker instance has its own directory containing its unique configuration and runtime data. This runtime data consists of logs and data files, and is associated with a unique broker process on the host. 2.2. Message persistence AMQ Broker persists message data to ensure that messages are never lost, even if the broker fails or shuts down unexpectedly. AMQ Broker provides two options for message persistence: journal-based persistence and database persistence. Journal-based persistence The default method, this option writes message data to message journal files stored on the file system. Initially, each of these journal files is created automatically with a fixed size and filled with empty data. As clients perform various broker operations, records are appended to the journal. When one of the journal files is full, the broker moves to the next journal file. Journal-based persistence supports transactional operations, including both local and XA transactions. Journal-based persistence requires an IO interface to the file system. AMQ Broker supports the following: Linux Asynchronous IO (AIO) AIO typically provides the best performance, but it requires the following: Linux Kernel version 2.6 or later Supported file system To see the currently supported file systems, see Red Hat AMQ 7 Supported Configurations . Java NIO Java NIO provides good performance, and it can run on any platform with a Java 6 or later runtime. Database persistence This option stores message and bindings data in a database by using Java Database Connectivity (JDBC). This option is a good choice if you already have a reliable and high performing database platform in your environment, or if using a database is mandated by company policy. The broker JDBC persistence store uses a standard JDBC driver to create a JDBC connection that stores message and bindings data in database tables. The data in the database tables is encoded using the same encoding as journal-based persistence. This means that messages stored in the database are not human-readable if accessed directly using SQL. To use database persistence, you must use a supported database platform. To see the currently supported database platforms, see Red Hat AMQ 7 Supported Configurations . 2.3. Resource consumption AMQ Broker provides a number of options to limit memory and resource consumption on the broker. Resource limits You can set connection and queue limits for each user. This prevents users from consuming too many of the broker's resources and causing degraded performance for other users. 
Message paging Message paging enables AMQ Broker to support large queues containing millions of messages while also running with a limited amount of memory. When the broker receives a surge of messages that exceeds its memory capacity, it begins paging messages to disk. This paging process is transparent; the broker pages messages into and out of memory as needed. Message paging is address-based. When the size of all messages in memory for an address exceeds the maximum size, each additional message for the address will be paged to the address's page file. Large messages With AMQ Broker, you can send and receive huge messages, even when running with limited memory resources. To avoid the overhead of storing large messages in memory, you can configure AMQ Broker to store these large messages in the file system or in a database table. 2.4. Monitoring and management AMQ Broker provides several tools you can use to monitor and manage your brokers. AMQ Management Console AMQ Management Console is a web interface accessible through a web browser. You can use it to monitor network health, view broker topology, and create and delete broker resources. CLI AMQ Broker provides the artemis CLI, which you can use to administer your brokers. Using the CLI, you can create, start, and stop broker instances. The CLI also provides several commands for managing the message journal. Management API AMQ Broker provides an extensive management API. You can use it to modify a broker's configuration, create new resources, inspect these resources, and interact with them. Clients can also use the management API to manage the broker and subscribe to management notifications. AMQ Broker provides the following methods for using the management API: Java Management Extensions (JMX) - JMX is a standard technology for managing Java applications. The broker's management operations are exposed through AMQ MBeans interfaces. JMS API - Management operations are sent using standard JMS messages to a special management JMS queue. Logs Each broker instance logs error messages, warnings, and other broker-related information and activities. You can configure the logging levels, the location of the log files, and log format. You can then use the resulting log files to monitor the broker and diagnose error conditions. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/getting_started_with_amq_broker/understanding-getting-started
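As a small illustration of the broker instance model and the artemis CLI described above, the following sketch creates and runs one instance; the installation directory, instance directory, user, and password are placeholder assumptions:

# Create a broker instance with its own configuration and data directories
<install-dir>/bin/artemis create /var/opt/amq/mybroker \
  --user admin --password secret --require-login

# Start the instance in the foreground, then stop it from another shell
/var/opt/amq/mybroker/bin/artemis run
/var/opt/amq/mybroker/bin/artemis stop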
Chapter 1. Overview of model registries | Chapter 1. Overview of model registries Important Model registry is currently available in Red Hat OpenShift AI 2.18 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . A model registry is an important component in the lifecycle of an artificial intelligence/machine learning (AI/ML) model, and a vital part of any machine learning operations (MLOps) platform or ML workflow. A model registry acts as a central repository, holding metadata related to machine learning models from inception to deployment. This metadata ranges from high-level information like the deployment environment and project origins, to intricate details like training hyperparameters, performance metrics, and deployment events. A model registry acts as a bridge between model experimentation and serving, offering a secure, collaborative metadata store interface for stakeholders of the ML lifecycle. Model registries provide a structured and organized way to store, share, version, deploy, and track models. To use model registries in OpenShift AI, an OpenShift cluster administrator must configure the model registry component. For more information, see Configuring the model registry component . After the model registry component is configured, an OpenShift AI administrator can create model registries in OpenShift AI and grant model registry access to the data scientists that will work with them. For more information, see Managing model registries . Data scientists with access to a model registry can store, share, version, deploy, and track models using the model registry feature. For more information, see Working with model registries . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/configuring_the_model_registry_component/overview-of-model-registries_model-registry-config |
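The cluster-side configuration referenced above is performed by an OpenShift cluster administrator, commonly by setting the component's management state in the DataScienceCluster resource. The following is only a sketch: the resource name default-dsc and the modelregistry field name are assumptions that may not match your OpenShift AI version, so check the configuration documentation for the exact spec:

# Enable the model registry component in the DataScienceCluster resource (field names assumed)
oc patch datasciencecluster default-dsc --type merge \
  -p '{"spec":{"components":{"modelregistry":{"managementState":"Managed"}}}}'

# Confirm the resulting management state
oc get datasciencecluster default-dsc \
  -o jsonpath='{.spec.components.modelregistry.managementState}'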
14.3.2. Domain Member Server | 14.3.2. Domain Member Server A domain member, while similar to a stand-alone server, is logged into a domain controller (either Windows or Samba) and is subject to the domain's security rules. An example of a domain member server would be a departmental server running Samba that has a machine account on the Primary Domain Controller (PDC). All of the department's clients still authenticate with the PDC, and desktop profiles and all network policy files are included. The difference is that the departmental server has the ability to control printer and network shares. 14.3.2.1. Active Directory Domain Member Server The following smb.conf file shows a sample configuration needed to implement an Active Directory domain member server. In this example, Samba authenticates users for services being run locally but is also a client of the Active Directory. Ensure that your kerberos realm parameter is shown in all caps (for example realm = EXAMPLE.COM ). Since Windows 2000/2003 requires Kerberos for Active Directory authentication, the realm directive is required. If Active Directory and Kerberos are running on different servers, the password server directive may be required to help the distinction. In order to join a member server to an Active Directory domain, the following steps must be completed: Configuration of the smb.conf file on the member server Configuration of Kerberos, including the /etc/krb5.conf file, on the member server Creation of the machine account on the Active Directory domain server Association of the member server to the Active Directory domain To create the machine account and join the Windows 2000/2003 Active Directory, Kerberos must first be initialized for the member server wishing to join the Active Directory domain. To create an administrative Kerberos ticket, type the following command as root on the member server: The kinit command is a Kerberos initialization script that references the Active Directory administrator account and Kerberos realm. Since Active Directory requires Kerberos tickets, kinit obtains and caches a Kerberos ticket-granting ticket for client/server authentication. For more information on Kerberos, the /etc/krb5.conf file, and the kinit command, refer to Chapter 19, Kerberos . To join an Active Directory server (windows1.example.com), type the following command as root on the member server: Since the machine windows1 was automatically found in the corresponding Kerberos realm (the kinit command succeeded), the net command connects to the Active Directory server using its required administrator account and password. This creates the appropriate machine account on the Active Directory and grants permissions to the Samba domain member server to join the domain. Note Since security = ads and not security = user is used, a local password backend such as smbpasswd is not needed. Older clients that do not support security = ads are authenticated as if security = domain had been set. This change does not affect functionality and allows local users not previously in the domain. | [
"[global] realm = EXAMPLE.COM security = ADS encrypt passwords = yes Optional. Use only if Samba cannot determine the Kerberos server automatically. password server = kerberos.example.com",
"kinit [email protected]",
"net ads join -S windows1.example.com -U administrator%password"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-domain-member |
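After the join steps above, a quick sanity check can confirm that the Kerberos ticket was obtained and that the machine account is valid. These are standard MIT Kerberos and Samba utilities; the realm shown is the example one used above:

# Show the cached administrative ticket obtained with kinit
klist

# Verify that the machine account join to the domain is still valid
net ads testjoin
net ads info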
2.11. Thin Provisioning and Storage Over-Commitment | 2.11. Thin Provisioning and Storage Over-Commitment The Red Hat Virtualization Manager provides provisioning policies to optimize storage usage within the virtualization environment. A thin provisioning policy allows you to over-commit storage resources, provisioning storage based on the actual storage usage of your virtualization environment. Storage over-commitment is the allocation of more storage to virtual machines than is physically available in the storage pool. Generally, virtual machines use less storage than what has been allocated to them. Thin provisioning allows a virtual machine to operate as if the storage defined for it has been completely allocated, when in fact only a fraction of the storage has been allocated. Note While the Red Hat Virtualization Manager provides its own thin provisioning function, you should use the thin provisioning functionality of your storage back-end if it provides one. To support storage over-commitment, VDSM defines a threshold which compares logical storage allocation with actual storage usage. This threshold is used to make sure that the data written to a disk image is smaller than the logical volume that backs the disk image. QEMU identifies the highest offset written to in a logical volume, which indicates the point of greatest storage use. VDSM monitors the highest offset marked by QEMU to ensure that the usage does not cross the defined threshold. So long as VDSM continues to indicate that the highest offset remains below the threshold, the Red Hat Virtualization Manager knows that the logical volume in question has sufficient storage to continue operations. When QEMU indicates that usage has risen to exceed the threshold limit, VDSM communicates to the Manager that the disk image will soon reach the size of its logical volume. The Red Hat Virtualization Manager requests that the SPM host extend the logical volume. This process can be repeated as long as the data storage domain for the data center has available space. When the data storage domain runs out of available free space, you must manually add storage capacity to expand it. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/over-commitment
Developing Applications with Red Hat build of Apache Camel for Quarkus | Developing Applications with Red Hat build of Apache Camel for Quarkus Red Hat build of Apache Camel 4.8 Developing Applications with Red Hat build of Apache Camel for Quarkus | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/index |
Examples | Examples Red Hat Service Interconnect 1.8 Service network tutorials with the CLI and YAML | null | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/examples/index |
Chapter 4. RHEL 8.3.1 release | Chapter 4. RHEL 8.3.1 release Red Hat makes Red Hat Enterprise Linux 8 content available quarterly, in between minor releases (8.Y). The quarterly releases are numbered using the third digit (8.Y.1). The new features in the RHEL 8.3.1 release are described below. 4.1. New features Flatpak packages for several desktop applications Flatpak is a system for running graphical applications as containers. Using Flatpak, you can install and update an application independently of the host operating system. This update provides Flatpak container images of the following applications in the Red Hat Container Catalog: Application name Flatpak container ID Firefox org.mozilla.firefox GIMP org.gimp.GIMP Inkscape org.inkscape.Inkscape Thunderbird org.mozilla.Thunderbird To install Flatpak containers available in the Red Hat Container Catalog, use the following procedure: Make sure that the latest version of the Flatpak client is installed on your system: Enable the RHEL Flatpak repository: Provide the credentials for your RHEL account: By default, Podman saves the credentials only until the user logs out. Optional: Save your credentials permanently: Install the Flatpak container image: (JIRA:RHELPLAN-30958, BZ#1920689 , BZ#1921179 , BZ#1921802 , BZ#1916412 , BZ#1921812 , BZ#1920604 ) Rust Toolset rebased to version 1.47.0 Rust Toolset has been updated to version 1.47.0. Notable changes include: The compile-time evaluated functions const fn have been improved and can now use control flow features, for example if , while , and match . The new #[track_caller] annotation can now be put on functions. Panics from annotated functions report the caller as the source. The Rust Standard Library now generically implements traits for arrays of any length. Previously, many of the trait implementations for arrays were only filled for lengths between 0 and 32. For detailed instructions regarding usage, see Using Rust Toolset . (BZ#1883839) The Logging System Role now supports property-based filter on its outputs With this update, property-based filters have been added to the files output, the forwards output, and the remote_files output of the Logging System Role. The feature is provided by the underlying rsyslog sub-role, and is configurable via the Logging RHEL System Role. As a result, users can benefit from the ability to filter log messages by properties such as hostname, tag, and the message itself, which is useful for managing logs. (BZ#1889492) The Logging RHEL System Role now supports rsyslog behavior With this enhancement, rsyslog receives the message from Red Hat Virtualization and forwards the message to Elasticsearch. (BZ#1889893) The ubi8/pause container image is now available Podman now uses the ubi8/pause instead of the k8s.gcr.io/pause container image to hold the network namespace information of the pod. (BZ#1690785) Podman rebased to version 2.1 The Podman utility has been updated to version 2.1. 
Notable enhancements include: Changes: Updated Podman to 2.2.1 (from 2.0.5), Buildah to 1.19 (from 1.15.1), Skopeo to 1.2.1 (from 1.1.1), Udica to 0.2.3 (from 0.2.2), and CRIU to 3.15 (0.3.4) Docker-compatible volume API endpoints (Create, Inspect, List, Remove, Prune) are now available Added an API endpoint for generating systemd unit files for containers The podman play kube command now features support for setting CPU and Memory limits for containers The podman play kube command now supports persistent volumes claims using Podman named volumes The podman play kube command now supports Kubernetes configmaps via the --configmap option Experimental support for shortname aliasing has been added. This is not enabled by default, but can be turned on by setting the environment variable CONTAINERS_SHORT_NAME_ALIASING to on. For more information see Container image short names in Podman . The new podman image command has been added. This allows for an image to be mounted, read-only, to inspect its contents without creating a container from it. The podman save and podman load commands can now create and load archives containing multiple images. Podman will now retry pulling an image at most 3 times if a pull fails due to network errors. Bug Fixes: Fixed a bug where running systemd in a container on a cgroups v1 system would fail. The Buildah tool has been updated to version 1.19. Notable enhancements include: Changes: The buildah inspect command supports inspecting manifests The buildah push command supports pushing manifests lists and digests Added support for --manifest flags The --arch and --os and --variant options has beed added to select architecture and OS Allow users to specify stdin into containers Allow FROM to be overridden with --from option Added --ignorefile flag to use alternate .dockerignore flags short-names aliasing Added --policy option to buildah pull command Fix buildah mount command to display container names not IDs Improved buildah completions Use --timestamp rather then --omit-timestamp flag Use pipes for copying Added --omit-timestamp flag to buildah bud command Add VFS additional image store to container Allow "readonly" as alias to "ro" in mount options buildah, bud: support --jobs=N option for parallel execution The Skopeo tool has been updated to version 1.2.1. Notable enhancements include: Changes: Add multi-arch builds for upstream and stable skopeo image via Travis Added support for digests in sync Added --all sync flag to emulate copy --all Added --format option to skopeo inspect command The Udica tool has been updated to version 0.2.3. Notable enhancements include: Changes: Enable container port, not the host port Add --version option The CRIU tool has been updated to version 3.15. Notable enhancements include: Changes: Initial cgroup2 support Legalized swrk API and add the ability for inheriting fds via it External bind mounts and tasks-to-cgroups bindings ibcriu.so (RPC wrapper) and plugins (JIRA:RHELPLAN-55998) | [
"yum update flatpak",
"flatpak remote-add rhel https://flatpaks.redhat.io/rhel.flatpakrepo",
"podman login registry.redhat.io",
"cp USDXDG_RUNTIME_DIR/containers/auth.json USDHOME/.config/flatpak/oci-auth.json",
"flatpak install rhel container-id"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.3_release_notes/rhel-8_3_1_release |
Managing and allocating storage resources | Managing and allocating storage resources Red Hat OpenShift Data Foundation 4.18 Instructions on how to allocate storage to core services and hosted applications in OpenShift Data Foundation, including snapshot and clone. Red Hat Storage Documentation Team Abstract This document explains how to allocate storage to core services and hosted applications in Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Overview Read this document to understand how to create, configure, and allocate storage to core services or hosted applications in Red Hat OpenShift Data Foundation. Chapter 2, Storage classes shows you how to create custom storage classes. Chapter 5, Block pools provides you with information on how to create, update and delete block pools. Chapter 6, Configure storage for OpenShift Container Platform services shows you how to use OpenShift Data Foundation for core OpenShift Container Platform services. Chapter 8, Backing OpenShift Container Platform applications with OpenShift Data Foundation provides information about how to configure OpenShift Container Platform applications to use OpenShift Data Foundation. Adding file and object storage to an existing external OpenShift Data Foundation cluster Chapter 10, How to use dedicated worker nodes for Red Hat OpenShift Data Foundation provides information about how to use dedicated worker nodes for Red Hat OpenShift Data Foundation. Chapter 11, Managing Persistent Volume Claims provides information about managing Persistent Volume Claim requests, and automating the fulfillment of those requests. Chapter 12, Reclaiming space on target volumes shows you how to reclaim the actual available storage space. Chapter 14, Volume Snapshots shows you how to create, restore, and delete volume snapshots. Chapter 15, Volume cloning shows you how to create volume clones. Chapter 16, Managing container storage interface (CSI) component placements provides information about setting tolerations to bring up container storage interface component on the nodes. Chapter 2. Storage classes The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create custom storage classes to use other storage resources or to offer a different behavior to applications. Note Custom storage classes are not supported for external mode OpenShift Data Foundation clusters. 2.1. 
Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Choose a Storage system for your workloads. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select Enable Encryption checkbox. Click Create to create the storage class. 2.2. Storage class with single replica You can create a storage class with a single replica to be used by your applications. This avoids redundant data copies and allows resiliency management on the application level. Warning Enabling this feature creates a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability if your application does not have its own replication. If any OSDs are lost, this feature requires very disruptive steps to recover. All applications can lose their data, and must be recreated in case of a failed OSD. Procedure Enable the single replica feature using the following command: Verify storagecluster is in Ready state: Example output: New cephblockpools are created for each failure domain. Verify cephblockpools are in Ready state: Example output: Verify new storage classes have been created: Example output: New OSD pods are created; 3 osd-prepare pods and 3 additional pods. Verify new OSD pods are in Running state: Example output: 2.2.1. Recovering after OSD lost from single replica When using replica 1, a storage class with a single replica, data loss is guaranteed when an OSD is lost. Lost OSDs go into a failing state. Use the following steps to recover after OSD loss. Procedure Follow these recovery steps to get your applications running again after data loss from replica 1. You first need to identify the domain where the failing OSD is. If you know which failure domain the failing OSD is in, run the following command to get the exact replica1-pool-name required for the steps. 
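A minimal sketch of that lookup, assuming the cluster namespace is openshift-storage and that the replica-1 pools are exposed as CephBlockPool resources whose names contain the failure domain (both assumptions; adjust to your cluster):

# List the block pools, then pick the replica-1 pool whose name matches the failure domain of the failing OSD
oc get cephblockpools.ceph.rook.io -n openshift-storage
oc get cephblockpools.ceph.rook.io -n openshift-storage | grep <failure-domain-name>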
If you do not know where the failing OSD is, skip to step 2. Example output: Copy the corresponding failure domain name for use in steps, then skip to step 4. Find the OSD pod that is in Error state or CrashLoopBackoff state to find the failing OSD: Identify the replica-1 pool that had the failed OSD. Identify the node where the failed OSD was running: Identify the failureDomainLabel for the node where the failed OSD was running: The output shows the replica-1 pool name whose OSD is failing, for example: where USDfailure_domain_value is the failureDomainName. Delete the replica-1 pool. Connect to the toolbox pod: Delete the replica-1 pool. Note that you have to enter the replica-1 pool name twice in the command, for example: Replace replica1-pool-name with the failure domain name identified earlier. Purge the failing OSD by following the steps in section "Replacing operational or failed storage devices" based on your platform in the Replacing devices guide. Restart the rook-ceph operator: Recreate any affected applications in that avaialbity zone to start using the new pool with same name. Chapter 3. Persistent volume encryption Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. Persistent volume encryption is only available for RBD PVs. OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault and Thales CipherTrust Manager. You can create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. You need to configure access to the KMS before creating the storage class. Note For PV encryption, you must have a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . 3.1. Access configuration for Key Management System (KMS) Based on your use case, you need to configure access to KMS using one of the following ways: Using vaulttokens : allows users to authenticate using a token Using Thales CipherTrust Manager : uses Key Management Interoperability Protocol (KMIP) Using vaulttenantsa (Technology Preview): allows users to use serviceaccounts to authenticate with Vault Important Accessing the KMS using vaulttenantsa is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 3.1.1. Configuring access to KMS using vaulttokens Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Procedure Create a secret in the tenant's namespace. In the OpenShift Container Platform web console, navigate to Workloads -> Secrets . Click Create -> Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token . Enter Value . It is the token from Vault. 
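The console steps for this secret continue below. If you prefer the CLI, the same secret can be created as in the following sketch; the tenant namespace my-app and the token value are placeholders (assumptions), while the secret name ceph-csi-kms-token and the key token must match the values described above:

# Create the token secret in the tenant (application) namespace
oc create secret generic ceph-csi-kms-token \
  --from-literal=token=<vault-token> \
  -n my-app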
You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. 3.1.2. Configuring access to KMS using Thales CipherTrust Manager Prerequisites Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token be navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both meta-data and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Procedure To create a key to act as the Key Encryption Key (KEK) for storageclass encryption, follow the steps below: Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. 3.1.3. Configuring access to KMS using vaulttenantsa Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Create the following serviceaccount in the tenant namespace as shown below: Procedure You need to configure the Kubernetes authentication method before OpenShift Data Foundation can authenticate with and start using Vault . The following instructions create and configure serviceAccount , ClusterRole , and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault . Apply the following YAML to your Openshift cluster: Create a secret for serviceaccount token and CA certificate. Get the token and the CA certificate from the secret. Retrieve the OpenShift cluster endpoint. Use the information collected in the steps to set up the kubernetes authentication method in Vault as shown: Create a role in Vault for the tenant namespace: csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the OpenShift Data Foundation cluster is ceph-csi-vault-sa . These default values can be overridden by creating a ConfigMap in the tenant namespace. For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap . 
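The Vault-side commands referenced in this procedure (enabling the Kubernetes auth method and creating the csi-kubernetes role) are not shown in this excerpt. A hedged sketch using the standard Vault CLI follows; the environment variables, the tenant namespace my-app, the policy name odf, and the TTL are assumptions to adapt to your setup:

# Enable the Kubernetes auth method in Vault
vault auth enable kubernetes

# Configure it with the service account token, CA certificate, and API endpoint collected in the previous steps
vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="$K8S_HOST" \
    kubernetes_ca_cert="$SA_CA_CRT"

# Create the role that OpenShift Data Foundation looks for by default
vault write auth/kubernetes/role/csi-kubernetes \
    bound_service_account_names=ceph-csi-vault-sa \
    bound_service_account_namespaces=my-app \
    policies=odf \
    ttl=1440h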
Sample YAML To create a storageclass that uses the vaulttenantsa method for PV encryption, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection with Vault. The sample yaml given below can be used to update or create the csi-kms-connection-detail ConfigMap: encryptionKMSType Set to vaulttenantsa to use service accounts for authentication with vault. vaultAddress The hostname or IP address of the vault server with the port number. vaultTLSServerName (Optional) The vault TLS server name vaultAuthPath (Optional) The path where kubernetes auth method is enabled in Vault. The default path is kubernetes . If the auth method is enabled in a different path other than kubernetes , this variable needs to be set as "/v1/auth/<path>/login" . vaultAuthNamespace (Optional) The Vault namespace where kubernetes auth method is enabled. vaultNamespace (Optional) The Vault namespace where the backend path being used to store the keys exists vaultBackendPath The backend path in Vault where the encryption keys will be stored vaultCAFromSecret The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault vaultClientCertFromSecret The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault vaultClientCertKeyFromSecret The secret in the OpenShift Data Foundation cluster containing the client private key from Vault tenantSAName (Optional) The service account name in the tenant namespace. The default value is ceph-csi-vault-sa . If a different name is to be used, this variable has to be set accordingly. 3.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must ensure to configure access to KMS for one of the following: Using vaulttokens : Ensure to configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure to configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Ensure to configure access as described in Configuring access to KMS using Thales CipherTrust Manager (For users on Azure platform only) Using Azure Vault: Ensure to set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault . Procedure In the OpenShift Web Console, navigate to Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. 
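The step below reads existing connections from the csi-kms-connection-details ConfigMap described earlier in this chapter. For reference only, a hedged sketch of such a ConfigMap for the vaulttenantsa method is shown here; the entry name, the Vault address, the backend path, and the secret names are all assumptions, and the keys follow the parameter list above:

apiVersion: v1
kind: ConfigMap
metadata:
  name: csi-kms-connection-details
  namespace: openshift-storage
data:
  1-vault: |-
    {
      "encryptionKMSType": "vaulttenantsa",
      "vaultAddress": "https://vault.example.com:8200",
      "vaultBackendPath": "odf",
      "vaultCAFromSecret": "vault-ca-cert",
      "vaultClientCertFromSecret": "vault-client-cert",
      "vaultClientCertKeyFromSecret": "vault-client-cert-key",
      "tenantSAName": "ceph-csi-vault-sa"
    }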
Choose one of the following options to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select one of the following Key Management Service Provider and provide the required details. Vault Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Azure Key Vault (Only for Azure users on Azure platform) For information about setting up client authentication and fetching the client credentials, see the Prerequisites in Creating an OpenShift Data Foundation cluster section of the Deploying OpenShift Data Foundation using Microsoft Azure guide. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload Certificate file in .PEM format and the certificate file must include a client certificate and a private key. Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameters that is added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage -> Storage Classes . Click the Storage class name -> YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads -> ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) -> Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. 
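A hedged illustration of what such an entry might look like after the edit, assuming the encryptionKMSID captured above is 1-vault and the backend path is served by KV secret engine API version 2 (both assumptions):

data:
  1-vault: |-
    {
      "encryptionKMSType": "vaulttokens",
      "vaultAddress": "https://vault.example.com:8200",
      "vaultBackendPath": "odf",
      "vaultBackend": "kv-v2"
    }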
Example: Click Save steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 3.2.1. Overriding Vault connection details using tenant ConfigMap The Vault connections details can be reconfigured per tenant by creating a ConfigMap in the Openshift namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The ConfigMap needs to be located in the tenant namespace. The values in the ConfigMap in the tenant namespace will override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace. Procedure Ensure that you are in the tenant namespace. Click on Workloads -> ConfigMaps . Click on Create ConfigMap . The following is a sample yaml. The values to be overidden for the given tenant namespace can be specified under the data section as shown below: After the yaml is edited, click on Create . 3.3. Enabling and disabling key rotation when using KMS Security common practices require periodic encryption of key rotation. You can enable or disable key rotation when using KMS. 3.3.1. Enabling key rotation To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims , Namespace , or StorageClass (in the decreasing order of precedence). <value> can be @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 3.3.2. Disabling key rotation You can disable key rotation for the following: All the persistent volume claims (PVCs) of storage class A specific PVC Disabling key rotation for all PVCs of a storage class To disable key rotation for all PVCs, update the annotation of the storage class: Disabling key rotation for a specific persistent volume claim Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on: Where <PVC_NAME> is the name of the PVC that you want to disable. Apply the following to the EncryptionKeyRotationCronJob CR from the step to disable the key rotation: Update the csiaddons.openshift.io/state annotation from managed to unmanaged : Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR. Add suspend: true under the spec field: Save and exit. The key rotation will be disabled for the PVC. Chapter 4. Enabling and disabling encryption in-transit post deployment You can enable encryption in-transit for the existing clusters after the deployment of clusters both in internal and external modes. 4.1. Enabling encryption in-transit after deployment in internal mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Procedure Patch the storagecluster to add encryption enabled as true to the storage cluster spec: Check the configurations. Wait for around 10 minutes for ceph daemons to restart and then check the pods. Remount existing volumes. 
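The patch referred to in the first step is not reproduced in this excerpt. A minimal sketch, assuming the default storage cluster name ocs-storagecluster and that in-transit encryption is toggled under spec.network.connections.encryption (verify the exact field path for your version):

# Enable encryption in-transit on the storage cluster
oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
  -p '{"spec":{"network":{"connections":{"encryption":{"enabled":true}}}}}'

# Check the resulting configuration
oc get storagecluster ocs-storagecluster -n openshift-storage \
  -o jsonpath='{.spec.network.connections}{"\n"}'

The remount guidance that follows applies unchanged once the daemons have restarted.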
Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume. 4.2. Disabling encryption in-transit after deployment in internal mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Encryption in-transit is enabled. Procedure Patch the storagecluster to update encryption enabled as false in the storage cluster spec: Check the configurations. Wait for around 10 minutes for ceph daemons to restart and then check the pods. Remount existing volumes. Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume. 4.3. Enabling encryption in-transit after deployment in external mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Procedure Patch the storagecluster to add encryption enabled as true in the storage cluster spec: Check the connection settings in the CR. 4.3.1. Applying encryption in-transit on Red Hat Ceph Storage cluster Procedure Apply encryption in-transit settings. Check the settings. Restart all Ceph daemons. Wait for all the daemons to restart. 4.3.2. Remount existing volumes. Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume. 4.4. Disabling encryption in-transit after deployment in external mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Encryption in-transit is enabled for the external mode cluster. Procedure Removing encryption in-transit settings from Red Hat Ceph Storage cluster Remove and check encryption in-transit configurations. Restart all Ceph daemons. Patching the CR Patch the storagecluster to update encryption enabled as false in the storage cluster spec: Check the configurations. Remount existing volumes. Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume. Chapter 5.
Block pools The OpenShift Data Foundation operator installs a default set of storage pools depending on the platform in use. These default storage pools are owned and controlled by the operator and it cannot be deleted or modified. Note Multiple block pools are not supported for external mode OpenShift Data Foundation clusters. 5.1. Managing block pools in internal mode With OpenShift Container Platform, you can create multiple custom storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. 5.1.1. Creating a block pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click the Storage pools tab. Click Create storage pool . Select Volume type as Block . Enter Pool name . Note Using 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Select Data protection policy as either 2-way Replication or 3-way Replication . Optional: Select Enable compression checkbox if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Create . 5.1.2. Updating an existing pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click Storage pools . Click the Action Menu (...) at the end the pool you want to update. Click Edit storage pool . Modify the form details as follows: Note Using 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Change the Data protection policy to either 2-way Replication or 3-way Replication. Enable or disable the compression option. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Save . 5.1.3. Deleting a pool Use this procedure to delete a pool in OpenShift Data Foundation. Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure . Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click the Storage pools tab. Click the Action Menu (...) at the end the pool you want to delete. Click Delete Storage Pool . Click Delete to confirm the removal of the Pool. Note A pool cannot be deleted when it is bound to a PVC. You must detach all the resources before performing this activity. Note When a pool is deleted, the underlying Ceph pool is not deleted. Chapter 6. 
Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as the following: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have a plenty of storage capacity for the following OpenShift services that you configure: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) OpenShift tracing platform (Tempo) If the storage for these critical services runs out of space, the OpenShift cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 6.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On AWS, it is not required to change the storage for the registry. However, it is recommended to change the storage to OpenShift Data Foundation Persistent Volume for vSphere and Bare metal platforms. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration -> Custom Resource Definitions . 
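Before continuing with the registry configuration steps below, note that the claim created above can also be defined from the CLI. A minimal sketch, assuming the CephFS-backed storage class is named ocs-storagecluster-cephfs (an assumption; use whichever class in your cluster has the openshift-storage.cephfs.csi.ceph.com provisioner):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocs4registry
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-cephfs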
Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) -> Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 6.2. Using Multicloud Object Gateway as OpenShift Image Registry backend storage You can use Multicloud Object Gateway (MCG) as OpenShift Container Platform (OCP) Image Registry backend storage in an on-prem OpenShift deployment. To configure MCG as a backend storage for the OCP image registry, follow the steps mentioned in the procedure. Prerequisites Administrative access to OCP Web Console. A running OpenShift Data Foundation cluster with MCG. Procedure Create ObjectBucketClaim by following the steps in Creating Object Bucket Claim . Create an image-registry-private-configuration-user secret. Go to the OpenShift web-console. Click ObjectBucketClaim --> ObjectBucketClaim Data . In the ObjectBucketClaim data , look for MCG access key and MCG secret key in the openshift-image-registry namespace . Create the secret using the following command: Change the status of managementState of Image Registry Operator to Managed . Edit the spec.storage section of Image Registry Operator configuration file: Get the unique-bucket-name and regionEndpoint under the Object Bucket Claim Data section from the Web Console OR you can also get the information on regionEndpoint and unique-bucket-name from the command: Add regionEndpoint as http://<Endpoint-name>:<port> if the storageclass is ceph-rgw storageclass and the endpoint points to the internal SVC from the openshift-storage namespace. An image-registry pod spawns after you make the changes to the Operator registry configuration file. Reset the image registry settings to default. Verification steps Run the following command to check if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output (Optional) You can also the run the following command to verify if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output 6.3. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises of Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. 
In the OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads -> Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 6.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads -> Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 6.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 6.3. Persistent Volume Claims attached to prometheus-k8s-* pod 6.4. Overprovision level policy control Overprovision control is a mechanism that enables you to define a quota on the amount of Persistent Volume Claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When you enable the overprovision control mechanism, it prevents you from overprovisioning the PVCs consumed from the storage cluster. OpenShift provides flexibility for defining constraints that limit the aggregated resource consumption at cluster scope with the help of ClusterResourceQuota . For more information see, OpenShift ClusterResourceQuota . With overprovision control, a ClusteResourceQuota is initiated, and you can set the storage capacity limit for each storage class. For more information about OpenShift Data Foundation deployment, refer to Product Documentation and select the deployment procedure according to the platform. Prerequisites Ensure that the OpenShift Data Foundation cluster is created. Procedure Deploy storagecluster either from the command line interface or the user interface. Label the application namespace. <desired_name> Specify a name for the application namespace, for example, quota-rbd . 
<desired_label> Specify a label for the storage quota, for example, storagequota1 . Edit the storagecluster to set the quota limit on the storage class. <ocs_storagecluster_name> Specify the name of the storage cluster. Add an entry for Overprovision Control with the desired hard limit into the StorageCluster.Spec : <desired_quota_limit> Specify a desired quota limit for the storage class, for example, 27Ti . <storage_class_name> Specify the name of the storage class for which you want to set the quota limit, for example, ocs-storagecluster-ceph-rbd . <desired_quota_name> Specify a name for the storage quota, for example, quota1 . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Save the modified storagecluster . Verify that the clusterresourcequota is defined. Note Expect the clusterresourcequota with the quotaName that you defined in the step, for example, quota1 . 6.5. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 6.5.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and are always available and the copy can be recovered if at least two nodes exist due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 6.5.2. Configuring cluster logging to use OpenShift data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites Administrative access to OpenShift Web Console. 
OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration -> Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 6.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload -> Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. Chapter 7. Creating Multus networks OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. You can configure your default pod network during cluster installation. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition (NAD) custom resource (CR). A CNI configuration inside each of the NetworkAttachmentDefinition defines how that interface is created. OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. 7.1. 
Creating network attachment definitions To utilize Multus, an already working cluster with the correct networking configuration is required, see Requirements for Multus configuration . The newly created NetworkAttachmentDefinition (NAD) can be selected during the Storage Cluster installation. This is the reason they must be created before the Storage Cluster. Note Network attachment definitions can only use the whereabouts IP address management (IPAM), and it must specify the range field. ipRanges and plugin chaining are not supported. You can select the newly created NetworkAttachmentDefinition (NAD) during the Storage Cluster installation. This is the reason you must create the NAD before you create the Storage Cluster. As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of the two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster). The following is an example NetworkAttachmentDefinition for all the storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface): Note All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster ). The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks, public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting object storage device (OSD) pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface): Example NetworkAttachmentDefinition : Note All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public , and ens3 for ocs-cluster ). Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads -> Deployments . In the Deployments page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads -> Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. 
Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads -> Deployments . Click Workloads -> Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. Chapter 9. Adding file and object storage to an existing external OpenShift Data Foundation cluster When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims. Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster. Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster. Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster. Use the following process to add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage. Prerequisites OpenShift Data Foundation 4.17 is installed and running on the OpenShift Container Platform version 4.17 or above. Also, the OpenShift Data Foundation Cluster in external mode is in the Ready state. Your external Red Hat Ceph Storage cluster is configured with one or both of the following: a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage a Metadata Server (MDS) pool for file storage Ensure that you know the parameters used with the ceph-external-cluster-details-exporter.py script during external OpenShift Data Foundation cluster deployment. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using one of the following methods, either CSV or ConfigMap. Important Downloading the ceph-external-cluster-details-exporter.py python script using CSV will no longer be supported from version OpenShift Data Foundation 4.19 and onward. Using the ConfigMap will be the only supported method. 
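The download commands for the two methods are not reproduced in this excerpt. As a hedged sketch of the ConfigMap-based method: the script is typically exposed through a ConfigMap in the openshift-storage namespace; the ConfigMap name rook-ceph-external-cluster-script-config, the data key script, and the base64 encoding below are assumptions, so confirm them with oc get configmap -n openshift-storage before relying on this:

# Extract the exporter script from the ConfigMap and decode it
oc get configmap rook-ceph-external-cluster-script-config -n openshift-storage \
  -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py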
CSV ConfigMap Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. Generate and save configuration details from the external Red Hat Ceph Storage cluster. Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. --monitoring-endpoint Is optional. It accepts comma separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-endpoint Provide this parameter to provision object storage through Ceph Object Gateway for OpenShift Data Foundation. (optional parameter) --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. User permissions are updated as shown: Note Ensure that all the parameters (including the optional arguments) except the Ceph Object Gateway details (if provided), are the same as what was used during the deployment of OpenShift Data Foundation in external mode. Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration changes in bold text. Upload the generated JSON file. Log in to the OpenShift web console. Click Workloads -> Secrets . Set project to openshift-storage . Click on rook-ceph-external-cluster-details . Click Actions (...) -> Edit Secret Click Browse and upload the external-cluster-config.json file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data foundation -> Storage Systems tab and then click on the storage system name. On the Overview -> Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. If you added a Metadata Server for file storage: Click Workloads -> Pods and verify that csi-cephfsplugin-* pods are created new and are in the Running state. Click Storage -> Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created. If you added the Ceph Object Gateway for object storage: Click Storage -> Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created. To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data foundation -> Storage Systems tab and then click on the storage system name. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. Chapter 10. 
How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra, or to have both roles. See the Section 10.3, "Manual creation of infrastructure nodes" section for more information. 10.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non-OpenShift Data Foundation resources from being scheduled on the tainted nodes. Note Adding the storage taint on nodes might require toleration handling for other daemonset pods, such as the openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints . Example of the taint and labels required on an infrastructure node that will be used to run OpenShift Data Foundation services: 10.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. These nodes will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 10.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources.
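Section 10.1 refers to an example of the taint and labels for an infrastructure node, and manual creation applies the same settings by hand. The following is a minimal, hedged sketch of that fragment as it might appear in a node object or in the .spec.template of a Machine Set; the cluster.ocs.openshift.io/openshift-storage label is an assumption about a commonly used companion label and is not stated by this document.

# Hedged sketch of the node metadata and taint (not a verbatim template).
metadata:
  labels:
    node-role.kubernetes.io/infra: ""
    cluster.ocs.openshift.io/openshift-storage: ""   # assumption: companion label, verify for your release
spec:
  taints:
  - key: node.ocs.openshift.io/storage
    value: "true"
    effect: NoSchedule

To apply the equivalent settings manually, commands along the lines of oc label node <node_name> node-role.kubernetes.io/infra="" and oc adm taint nodes <node_name> node.ocs.openshift.io/storage="true":NoSchedule are typically used; verify the key and value against the taint shown in section 10.1 before applying them.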
To avoid the RHOCP subscription cost, the following is required: Add a NoSchedule OpenShift Data Foundation taint so that the infra node only schedules OpenShift Data Foundation resources and repels any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role.kubernetes.io/worker="" node role. The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding the node-role.kubernetes.io/infra="" node role and the OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 10.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute -> Nodes , and then select the node that has to be tainted. In the Details page, click Edit taints . Enter the values in the Key <node.ocs.openshift.io/storage>, Value <true>, and Effect <NoSchedule> fields. Click Save. Verification steps Follow these steps to verify that the node has been tainted successfully: Navigate to Compute -> Nodes . Select the node to verify its status, and then click on the YAML tab. In the spec section, check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere . Chapter 11. Managing Persistent Volume Claims 11.1. Configuring application pods to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for an application pod. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. The default storage classes provided by OpenShift Data Foundation are available. In OpenShift Web Console, click Storage -> StorageClasses to view default storage classes. Procedure Create a Persistent Volume Claim (PVC) for the application to use. In OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project for the application pod. Click Create Persistent Volume Claim . Specify a Storage Class provided by OpenShift Data Foundation. Specify the PVC Name , for example, myclaim . Select the required Access Mode . Note The Access Mode , Shared access (RWX) is not supported in IBM FlashSystem. For Rados Block Device (RBD), if the Access mode is ReadWriteOnce ( RWO ), select the required Volume mode . The default volume mode is Filesystem . Specify a Size as per application requirement. Click Create and wait until the PVC is in Bound status. Configure a new or existing application pod to use the new PVC. For a new application pod, perform the following steps: Click Workloads -> Pods . Create a new application pod. Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod. For example: For an existing application pod, perform the following steps: Click Workloads -> Deployment Configs . Search for the required deployment config associated with the application pod. Click on its Action menu (...) -> Edit Deployment Config . Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod and click Save .
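The example YAML for this step is not reproduced in this copy of the guide. The following is a minimal, hedged sketch of the volumes: and volumeMounts: additions for a Deployment or DeploymentConfig that consumes the PVC created above; the container name, image, and mount path are illustrative assumptions, and myclaim is the example PVC name used in this procedure.

# Hedged sketch: only the fragment relevant to mounting the PVC is shown.
spec:
  template:
    spec:
      containers:
      - name: myfrontend                          # assumption: illustrative container name
        image: registry.example.com/app:latest    # assumption: illustrative image
        volumeMounts:
        - mountPath: "/var/www/html"              # assumption: illustrative mount path
          name: mypd
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim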
For example: Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project for the application pod. Verify that the application pod appears with a status of Running . Click the application pod name to view pod details. Scroll down to Volumes section and verify that the volume has a Type that matches your new Persistent Volume Claim, for example, myclaim . 11.2. Viewing Persistent Volume Claim request status Use this procedure to view the status of a PVC request. Prerequisites Administrator access to OpenShift Data Foundation. Procedure Log in to OpenShift Web Console. Click Storage -> Persistent Volume Claims Search for the required PVC name by using the Filter textbox. You can also filter the list of PVCs by Name or Label to narrow down the list Check the Status column corresponding to the required PVC. Click the required Name to view the PVC details. 11.3. Reviewing Persistent Volume Claim request events Use this procedure to review and address Persistent Volume Claim (PVC) request events. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click Overview -> Block and File . Locate the Inventory card to see the number of PVCs with errors. Click Storage -> Persistent Volume Claims Search for the required PVC using the Filter textbox. Click on the PVC name and navigate to Events Address the events as required or as directed. 11.4. Expanding Persistent Volume Claims OpenShift Data Foundation 4.6 onwards has the ability to expand Persistent Volume Claims providing more flexibility in the management of persistent storage resources. Expansion is supported for the following Persistent Volumes: PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph File System (CephFS) for volume mode Filesystem . PVC with ReadWriteOnce (RWO) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Filesystem . PVC with ReadWriteOnce (RWO) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Block . PVC with ReadWriteOncePod (RWOP) that is based on Ceph File System (CephFS) or Network File System (NFS) for volume mode Filesystem . PVC with ReadWriteOncePod (RWOP) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Filesystem . With RWOP access mode, you mount the volume as read-write by a single pod on a single node. Note PVC expansion is not supported for OSD, MON and encrypted PVCs. Prerequisites Administrator access to OpenShift Web Console. Procedure In OpenShift Web Console, navigate to Storage -> Persistent Volume Claims . Click the Action Menu (...) to the Persistent Volume Claim you want to expand. Click Expand PVC : Select the new size of the Persistent Volume Claim, then click Expand : To verify the expansion, navigate to the PVC's details page and verify the Capacity field has the correct size requested. Note When expanding PVCs based on Ceph RADOS Block Devices (RBDs), if the PVC is not already attached to a pod the Condition type is FileSystemResizePending in the PVC's details page. Once the volume is mounted, filesystem resize succeeds and the new size is reflected in the Capacity field. 11.5. Dynamic provisioning 11.5.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. 
StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. Storage plug-ins might support static provisioning, dynamic provisioning, or both provisioning types. 11.5.2. Dynamic provisioning in OpenShift Data Foundation Red Hat OpenShift Data Foundation is software-defined storage that is optimised for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. OpenShift Data Foundation supports a variety of storage types, including: Block storage for databases Shared file storage for continuous integration, messaging, and data aggregation Object storage for archival, backup, and media storage Version 4 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview). In OpenShift Data Foundation 4, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options: Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block . Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem . Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS for volume mode Filesystem . Create a PVC with ReadWriteOncePod (RWOP) access that is based on CephFS, NFS, and RBD. With RWOP access mode, you mount the volume as read-write by a single pod on a single node. Which driver (RBD or CephFS) is used is determined by the entry in the storageclass.yaml file. 11.5.3. Available dynamic provisioning plug-ins OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources. Each entry lists the storage type, its provisioner plug-in name, and notes where applicable:
OpenStack Cinder: kubernetes.io/cinder
AWS Elastic Block Store (EBS): kubernetes.io/aws-ebs. Notes: For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster.
AWS Elastic File System (EFS): No provisioner plug-in. Dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in.
Azure Disk: kubernetes.io/azure-disk
Azure File: kubernetes.io/azure-file. Notes: The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys.
GCE Persistent Disk (gcePD): kubernetes.io/gce-pd. Notes: In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs being created in zones where no node in the current cluster exists.
VMware vSphere: kubernetes.io/vsphere-volume
Red Hat Virtualization: csi.ovirt.org
Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. Chapter 12. Reclaiming space on target volumes The deleted files or chunks of zero data sometimes take up storage space on the Ceph cluster, resulting in inaccurate reporting of the available storage space. The reclaim space operation removes such discrepancies by executing the following operations on the target volume: fstrim - This operation is used on volumes that are in Filesystem mode and only if the volume is mounted to a pod at the time of execution of the reclaim space operation. rbd sparsify - This operation is used when the volume is not attached to any pods and reclaims the space occupied by chunks of 4M-sized zeroed data. Note Only the Ceph RBD volumes support the reclaim space operation. The reclaim space operation involves a performance penalty when it is being executed. You can use one of the following methods to reclaim the space: Enabling reclaim space operation by annotating PersistentVolumeClaims (recommended method for enabling the reclaim space operation) Enabling reclaim space operation using ReclaimSpaceJob Enabling reclaim space operation using ReclaimSpaceCronJob 12.1. Enabling reclaim space operation by annotating PersistentVolumeClaims Use this procedure to automatically invoke the reclaim space operation by annotating the persistent volume claim (PVC), based on a given schedule. Note The schedule value is in the same format as the Kubernetes CronJobs which sets the time and/or interval of the recurring operation request. Recommended schedule interval is @weekly . If the schedule interval value is empty or in an invalid format, then the default schedule value is set to @weekly . Do not schedule multiple ReclaimSpace operations @weekly or at the same time. Minimum supported interval between each scheduled operation is at least 24 hours. For example, @daily (At 00:00 every day) or 0 3 * * * (At 3:00 every day). Schedule the ReclaimSpace operation during off-peak, maintenance window, or the interval when the workload input/output is expected to be low. ReclaimSpaceCronJob is recreated when the schedule is modified. It is automatically deleted when the annotation is removed. Procedure Get the PVC details. Add annotation reclaimspace.csiaddons.openshift.io/schedule=@monthly to the PVC to create reclaimspacecronjob . Verify that reclaimspacecronjob is created in the format, "<pvc-name>-xxxxxxx" . Modify the schedule to run this job automatically. Verify that the schedule for reclaimspacecronjob has been modified. 12.2. Disabling reclaim space for a specific PersistentVolumeClaim To disable reclaim space for a specific PersistentVolumeClaim (PVC), modify the associated ReclaimSpaceCronJob custom resource (CR). Identify the ReclaimSpaceCronJob CR for the PVC you want to disable reclaim space on: Replace "<PVC_NAME>" with the name of the PVC.
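The query for this identification step is not reproduced in this copy of the guide. A hedged sketch, modeled on the equivalent EncryptionKeyRotationCronJob query shown in the command listing of this document, might look like the following; the reclaimspacecronjobs.csiaddons.openshift.io resource name is an assumption and should be verified against the CRDs installed by your CSI Addons operator.

# Hedged sketch: list the ReclaimSpaceCronJob whose target PVC matches <PVC_NAME>.
oc get reclaimspacecronjobs.csiaddons.openshift.io -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim=="<PVC_NAME>")]}{.metadata.name}{"\n"}{end}'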
Apply the following to the ReclaimSpaceCronJob CR from step 1 to disable the reclaim space: Update the csiaddons.openshift.io/state annotation from "managed" to "unmanaged" . Replace <RECLAIMSPACECRONJOB_NAME> with the name of the ReclaimSpaceCronJob CR. Add suspend: true under the spec field: 12.3. Enabling reclaim space operation using ReclaimSpaceJob ReclaimSpaceJob is a namespaced custom resource (CR) designed to invoke the reclaim space operation on the target volume. This is a one-time method that immediately starts the reclaim space operation. You have to repeat the creation of the ReclaimSpaceJob CR to repeat the reclaim space operation when required. Note Recommended interval between the reclaim space operations is weekly . Ensure that the minimum interval between each operation is at least 24 hours . Schedule the reclaim space operation during off-peak, maintenance window, or when the workload input/output is expected to be low. Procedure Create and apply the following custom resource for the reclaim space operation: where, target Indicates the volume target on which the operation is performed. persistentVolumeClaim Name of the PersistentVolumeClaim . backOffLimit Specifies the maximum number of retries before marking the reclaim space operation as failed . The default value is 6 . The allowed maximum and minimum values are 60 and 0 respectively. retryDeadlineSeconds Specifies the duration in seconds within which the operation might retry, relative to the start time. The value must be a positive integer. The default value is 600 seconds and the allowed maximum value is 1800 seconds. timeout Specifies the timeout in seconds for the gRPC request sent to the CSI driver. If the timeout value is not specified, it defaults to the value of the global reclaimspace timeout. The minimum allowed value for timeout is 60. Delete the custom resource after completion of the operation. 12.4. Enabling reclaim space operation using ReclaimSpaceCronJob ReclaimSpaceCronJob invokes the reclaim space operation based on the given schedule, such as daily, weekly, and so on. You have to create ReclaimSpaceCronJob only once for a persistent volume claim. The CSI-addons controller creates a ReclaimSpaceJob at the requested time and interval with the schedule attribute. Note Recommended schedule interval is @weekly . Minimum interval between each scheduled operation should be at least 24 hours. For example, @daily (At 00:00 every day) or "0 3 * * *" (At 3:00 every day). Schedule the ReclaimSpace operation during off-peak, maintenance window, or the interval when workload input/output is expected to be low. Procedure Create and apply the following custom resource for the reclaim space operation: where, concurrencyPolicy Describes what happens when a new ReclaimSpaceJob is scheduled by the ReclaimSpaceCronJob while a previous ReclaimSpaceJob is still running. The default Forbid prevents starting a new job, whereas Replace can be used to delete the running job (potentially in a failure state) and create a new one. failedJobsHistoryLimit Specifies the number of failed ReclaimSpaceJobs that are kept for troubleshooting. jobTemplate Specifies the ReclaimSpaceJob.spec structure that describes the details of the requested ReclaimSpaceJob operation. successfulJobsHistoryLimit Specifies the number of successful ReclaimSpaceJob operations. schedule Specifies the time and/or interval of the recurring operation request. It is in the same format as the Kubernetes CronJobs .
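The custom resources referenced in sections 12.3 and 12.4 are not reproduced in this copy of the guide. The following is a hedged sketch of what a ReclaimSpaceJob and a ReclaimSpaceCronJob can look like, built only from the fields described above; the apiVersion, resource names, and PVC names are assumptions and should be checked against the CSI Addons operator installed in your cluster.

# Hedged sketch of a one-time ReclaimSpaceJob (section 12.3).
apiVersion: csiaddons.openshift.io/v1alpha1      # assumption: CSI Addons API version
kind: ReclaimSpaceJob
metadata:
  name: sample-reclaimspacejob                   # assumption: illustrative name
spec:
  target:
    persistentVolumeClaim: pvc-1                 # assumption: illustrative PVC name
  backOffLimit: 6
  retryDeadlineSeconds: 600
  timeout: 600
---
# Hedged sketch of a scheduled ReclaimSpaceCronJob (section 12.4).
apiVersion: csiaddons.openshift.io/v1alpha1
kind: ReclaimSpaceCronJob
metadata:
  name: reclaimspacecronjob-sample               # assumption: illustrative name
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  schedule: "@weekly"
  jobTemplate:
    spec:
      target:
        persistentVolumeClaim: data-pvc          # assumption: illustrative PVC name
      backOffLimit: 6
      retryDeadlineSeconds: 600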
Delete the ReclaimSpaceCronJob custom resource when execution of the reclaim space operation is no longer needed or when the target PVC is deleted. 12.5. Customising timeouts required for Reclaim Space Operation Depending on the RBD volume size and its data pattern, the Reclaim Space Operation might fail with the context deadline exceeded error. You can avoid this by increasing the timeout value. The following example shows the failed status by inspecting -o yaml of the corresponding ReclaimSpaceJob : Example You can also set custom timeouts at the global level by creating the following configmap : Example Restart the csi-addons operator pod. All Reclaim Space Operations started after the above configmap creation use the customized timeout. Chapter 13. Finding and cleaning stale subvolumes (Technology Preview) Sometimes stale subvolumes do not have a corresponding Kubernetes reference attached. These subvolumes are of no use and can be deleted. You can find and delete stale subvolumes using the ODF CLI tool. Important Deleting stale subvolumes using the ODF CLI tool is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Find the stale subvolumes by using the --stale flag with the subvolumes command: Example output: Delete the stale subvolumes: Replace <subvolumes> with a comma-separated list of subvolumes from the output of the first command. The subvolumes must be of the same filesystem and subvolumegroup. Replace <filesystem> and <subvolumegroup> with the filesystem and subvolumegroup from the output of the first command. For example: Example output: Chapter 14. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. Volume snapshot class allows an administrator to specify different attributes belonging to a volume snapshot object. The OpenShift Data Foundation operator installs default volume snapshot classes depending on the platform in use. The operator owns and controls these default volume snapshot classes and they cannot be deleted or modified. You can create many snapshots of the same persistent volume claim (PVC) but cannot schedule periodic creation of snapshots. For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note Persistent Volume encryption now supports volume snapshots. 14.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure that you stop all IO before taking the snapshot.
Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) -> Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions -> Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage -> Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 14.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. 
Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage -> Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 14.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class that is used by that particular volume snapshot must be present. Procedure From the Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name that has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot . From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage -> Volume Snapshots and ensure that the deleted volume snapshot is not listed. Chapter 15. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 15.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) -> Clone PVC . Click on the PVC that you want to clone and click Actions -> Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Enter the required size of the clone. Select the storage class in which you want to create the clone.
The storage class can be any RBD storage class and it need not necessarily be the same as the parent PVC. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC. Chapter 16. Managing container storage interface (CSI) component placements Each cluster consists of a number of dedicated nodes such as infra and storage nodes. However, an infra node with a custom taint will not be able to use OpenShift Data Foundation Persistent Volume Claims (PVCs) on the node. So, if you want to use such nodes, you can set tolerations to bring up csi-plugins on the nodes. Procedure Edit the configmap to add the toleration for the custom taint. Remember to save before exiting the editor. Display the configmap to check the added toleration. Example output of the added toleration for the taint, nodetype=infra:NoSchedule : Note Ensure that all non-string values in the Tolerations value field has double quotation marks. For example, the values true which is of type boolean, and 1 which is of type int must be input as "true" and "1". Restart the rook-ceph-operator if the csi-cephfsplugin- * and csi-rbdplugin- * pods fail to come up on their own on the infra nodes. Example : Verification step Verify that the csi-cephfsplugin- * and csi-rbdplugin- * pods are running on the infra nodes. Chapter 17. Using 2-way replication with CephFS To reduce storage overhead with CephFS when data resiliency is not a primary concern, you can opt for using 2-way replication (replica-2). This reduces the amount of storage space used and decreases the level of fault tolerance. There are two ways to use replica-2 for CephFS: Edit the existing default pool to replica-2 and use it with the default CephFS storageclass . Add an additional CephFS data pool with replica-2 . 17.1. Editing the existing default CephFS data pool to replica-2 Use this procedure to edit the existing default CephFS pool to replica-2 and use it with the default CephFS storageclass. Procedure Patch the storagecluster to change default CephFS data pool to replica-2. Check the pool details. 17.2. Adding an additional CephFS data pool with replica-2 Use this procedure to add an additional CephFS data pool with replica-2. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage -> StorageClasses -> Create Storage Class . Select CephFS Provisioner . Under Storage Pool , click Create new storage pool . Fill in the Create Storage Pool fields. Under Data protection policy , select 2-way Replication . Confirm Storage Pool creation In the Storage Class creation form, choose the newly created Storage Pool. Confirm the Storage Class creation. Verification Click Storage -> Data Foundation . In the Storage systems tab, select the new storage system. The Details tab of the storage system reflect the correct volume and device types you chose during creation Chapter 18. Creating exports using NFS This section describes how to create exports using NFS that can then be accessed externally from the OpenShift cluster. 
Follow the instructions below to create exports and access them externally from the OpenShift Cluster: Section 18.1, "Enabling the NFS feature" Section 18.2, "Creating NFS exports" Section 18.3, "Consuming NFS exports in-cluster" Section 18.4, "Consuming NFS exports externally from the OpenShift cluster" 18.1. Enabling the NFS feature To use the NFS feature, you need to enable it in the storage cluster using the command-line interface (CLI) after the cluster is created. You can also enable the NFS feature while creating the storage cluster using the user interface. Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. The OpenShift Data Foundation installation includes a CephFilesystem. Procedure Run the following command to enable the NFS feature from CLI: Verification steps NFS installation and configuration is complete when the following conditions are met: The CephNFS resource named ocs-storagecluster-cephnfs has a status of Ready . Check if all the csi-nfsplugin-* pods are running: Output has multiple pods. For example: 18.2. Creating NFS exports NFS exports are created by creating a Persistent Volume Claim (PVC) against the ocs-storagecluster-ceph-nfs StorageClass. You can create NFS PVCs in two ways: Create an NFS PVC using a YAML. The following is an example PVC. Note volumeMode: Block will not work for NFS volumes. <desired_name> Specify a name for the PVC, for example, my-nfs-export . The export is created once the PVC reaches the Bound state. Create NFS PVCs from the OpenShift Container Platform web console. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and the NFS feature is enabled for the storage cluster. Procedure In the OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project to openshift-storage . Click Create PersistentVolumeClaim . Specify Storage Class , ocs-storagecluster-ceph-nfs . Specify the PVC Name , for example, my-nfs-export . Select the required Access Mode . Specify a Size as per application requirement. Select Volume mode as Filesystem . Note: Block mode is not supported for NFS PVCs. Click Create and wait until the PVC is in Bound status. 18.3. Consuming NFS exports in-cluster Kubernetes application pods can consume NFS exports by mounting a previously created PVC. You can mount the PVC one of two ways: Using a YAML: Below is an example pod that uses the example PVC created in Section 18.2, "Creating NFS exports" : <pvc_name> Specify the PVC you have previously created, for example, my-nfs-export . Using the OpenShift Container Platform web console. Procedure On the OpenShift Container Platform web console, navigate to Workloads -> Pods . Click Create Pod to create a new application pod. Under the metadata section, add a name. For example, nfs-export-example , with namespace as openshift-storage . Under the spec: section, add containers: section with image and volumeMounts sections: For example: Under the spec: section, add volumes: section to add the NFS PVC as a volume for the application pod: For example: 18.4. Consuming NFS exports externally from the OpenShift cluster NFS clients outside of the OpenShift cluster can mount NFS exports created from a previously created PVC. Procedure After the nfs flag is enabled, a single-server CephNFS is deployed by Rook.
You need to fetch the value of the ceph_nfs field for the nfs-ganesha server to use in the next step: For example: Expose the NFS server outside of the OpenShift cluster by creating a Kubernetes LoadBalancer Service. The example below creates a LoadBalancer Service and references the NFS server created by OpenShift Data Foundation. Replace <my-nfs> with the value you got in step 1. Collect connection information. The information external clients need to connect to an export comes from the Persistent Volume (PV) created for the PVC, and the status of the LoadBalancer Service created in the previous step. Get the share path from the PV. Get the name of the PV associated with the NFS export's PVC: Replace <pvc_name> with your own PVC name. For example: Use the PV name obtained previously to get the NFS export's share path: Get an ingress address for the NFS server. A service's ingress status may have multiple addresses. Choose the one you want to use for external clients. In the example below, there is only a single address: the host name ingress-id.somedomain.com . Connect the external client using the share path and ingress address from the previous steps. The following example mounts the export to the client's directory path /export/mount/path : If this does not work immediately, it could be that the Kubernetes environment is still taking time to configure the network resources to allow ingress to the NFS server. Chapter 19. Annotating encrypted RBD storage classes Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation cdi.kubevirt.io/clone-strategy=copy for any encrypted RBD storage classes that were created before updating to OpenShift Data Foundation version 4.14. This enables the Containerized Data Importer (CDI) to use host-assisted cloning instead of the default smart cloning. The keys used to access an encrypted volume are tied to the namespace where the volume was created. When cloning an encrypted volume to a new namespace, such as when provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated. Chapter 20. Enabling faster client IO or recovery IO during OSD backfill During a maintenance window, you may want to favor either client IO or recovery IO. Favoring recovery IO over client IO will significantly reduce OSD recovery time. The valid recovery profile options are balanced , high_client_ops , and high_recovery_ops . Set the recovery profile using the following procedure. Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Check the current recovery profile: Modify the recovery profile: Replace option with either balanced , high_client_ops , or high_recovery_ops . Verify the updated recovery profile: Chapter 21. Setting Ceph OSD full thresholds You can set Ceph OSD full thresholds using the ODF CLI tool or by updating the StorageCluster CR. 21.1. Setting Ceph OSD full thresholds using the ODF CLI tool You can set Ceph OSD full thresholds temporarily by using the ODF CLI tool.
This is necessary in cases when the cluster gets into a full state and the thresholds need to be immediately increased. Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Use the set command to adjust Ceph full thresholds. The set command supports the subcommands full , backfillfull , and nearfull . See the following examples for how to use each subcommand. full This subcommand allows updating the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85 . Note If the value is set too close to 1.0 , the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow. For example, set Ceph OSD full ratio to 0.9 and then add capacity: For instructions to add capacity for your specific use case, see the Scaling storage guide . If OSDs remain stuck or pending, or do not come up at all: Stop all IOs. Increase the full ratio to 0.92 : Wait for the cluster rebalance to happen. Once the cluster rebalance is complete, change the full ratio back to its original value of 0.85: backfillfull This subcommand allows updating the Ceph OSD backfillfull ratio in case Ceph denies backfilling to the OSD that reached the capacity specified. The default value is 0.80 . Note If the value is set too close to 1.0 , the OSDs become full and the cluster is not able to backfill. For example, to set backfillfull to 0.85 : nearfull This subcommand allows updating the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the capacity specified. The default value is 0.75 . For example, to set nearfull to 0.8 : 21.2. Setting Ceph OSD full thresholds by updating the StorageCluster CR You can set Ceph OSD full thresholds by updating the StorageCluster CR. Use this procedure if you want to override the default settings. Procedure You can update the StorageCluster CR to change the settings for full , backfillfull , and nearfull . full Use the following command to update the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85 . Note If the value is set too close to 1.0 , the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow. For example, to set Ceph OSD full ratio to 0.9 : backfillfull Use the following command to set the Ceph OSD backfillfull ratio in case Ceph denies backfilling to the OSD that reached the capacity specified. The default value is 0.80 . Note If the value is set too close to 1.0 , the OSDs become full and the cluster is not able to backfill. For example, set backfillfull to 0.85 : nearfull Use the following command to set the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the capacity specified. The default value is 0.75 . For example, set nearfull to 0.8 : | [
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephNonResilientPools/enable\", \"value\": true }]'",
"oc get storagecluster",
"NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 10m Ready 2024-02-05T13:56:15Z 4.17.0",
"oc get cephblockpools",
"NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-east-1a Ready ocs-storagecluster-cephblockpool-us-east-1b Ready ocs-storagecluster-cephblockpool-us-east-1c Ready",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 104m gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m gp3-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m ocs-storagecluster-ceph-non-resilient-rbd openshift-storage.rbd.csi.ceph.com Delete WaitForFirstConsumer true 46m ocs-storagecluster-ceph-rbd openshift-storage.rbd.csi.ceph.com Delete Immediate true 52m ocs-storagecluster-cephfs openshift-storage.cephfs.csi.ceph.com Delete Immediate true 52m openshift-storage.noobaa.io openshift-storage.noobaa.io/obc Delete Immediate false 50m",
"oc get pods | grep osd",
"rook-ceph-osd-0-6dc76777bc-snhnm 2/2 Running 0 9m50s rook-ceph-osd-1-768bdfdc4-h5n7k 2/2 Running 0 9m48s rook-ceph-osd-2-69878645c4-bkdlq 2/2 Running 0 9m37s rook-ceph-osd-3-64c44d7d76-zfxq9 2/2 Running 0 5m23s rook-ceph-osd-4-654445b78f-nsgjb 2/2 Running 0 5m23s rook-ceph-osd-5-5775949f57-vz6jp 2/2 Running 0 5m22s rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv 0/1 Completed 0 10m",
"oc get cephblockpools",
"NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-south-1 Ready ocs-storagecluster-cephblockpool-us-south-2 Ready ocs-storagecluster-cephblockpool-us-south-3 Ready",
"oc get pods -n openshift-storage -l app=rook-ceph-osd | grep 'CrashLoopBackOff\\|Error'",
"failed_osd_id=0 #replace with the ID of the failed OSD",
"failure_domain_label=USD(oc get storageclass ocs-storagecluster-ceph-non-resilient-rbd -o yaml | grep domainLabel |head -1 |awk -F':' '{print USD2}')",
"failure_domain_value=USD\"(oc get pods USDfailed_osd_id -oyaml |grep topology-location-zone |awk '{print USD2}')\"",
"replica1-pool-name= \"ocs-storagecluster-cephblockpool-USDfailure_domain_value\"",
"toolbox=USD(oc get pod -l app=rook-ceph-tools -n openshift-storage -o jsonpath='{.items[*].metadata.name}') rsh USDtoolbox -n openshift-storage",
"ceph osd pool rm <replica1-pool-name> <replica1-pool-name> --yes-i-really-really-mean-it",
"oc delete pod -l rook-ceph-operator -n openshift-storage",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: ceph-csi-vault-sa EOF",
"apiVersion: v1 kind: ServiceAccount metadata: name: rbd-csi-vault-token-review --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review rules: - apiGroups: [\"authentication.k8s.io\"] resources: [\"tokenreviews\"] verbs: [\"create\", \"get\", \"list\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review subjects: - kind: ServiceAccount name: rbd-csi-vault-token-review namespace: openshift-storage roleRef: kind: ClusterRole name: rbd-csi-vault-token-review apiGroup: rbac.authorization.k8s.io",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: rbd-csi-vault-token-review-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: \"rbd-csi-vault-token-review\" type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"vault auth enable kubernetes vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault write \"auth/kubernetes/role/csi-kubernetes\" bound_service_account_names=\"ceph-csi-vault-sa\" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>",
"apiVersion: v1 data: vault-tenant-sa: |- { \"encryptionKMSType\": \"vaulttenantsa\", \"vaultAddress\": \"<https://hostname_or_ip_of_vault_server:port>\", \"vaultTLSServerName\": \"<vault TLS server name>\", \"vaultAuthPath\": \"/v1/auth/kubernetes/login\", \"vaultAuthNamespace\": \"<vault auth namespace name>\" \"vaultNamespace\": \"<vault namespace name>\", \"vaultBackendPath\": \"<vault backend path name>\", \"vaultCAFromSecret\": \"<secret containing CA cert>\", \"vaultClientCertFromSecret\": \"<secret containing client cert>\", \"vaultClientCertKeyFromSecret\": \"<secret containing client private key>\", \"tenantSAName\": \"<service account name in the tenant namespace>\" } metadata: name: csi-kms-connection-details",
"encryptionKMSID: 1-vault",
"kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv\" }",
"--- apiVersion: v1 kind: ConfigMap metadata: name: ceph-csi-kms-config data: vaultAddress: \"<vault_address:port>\" vaultBackendPath: \"<backend_path>\" vaultTLSServerName: \"<vault_tls_server_name>\" vaultNamespace: \"<vault_namespace>\"",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'",
"oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true",
"oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": true}}} }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | yq '.spec.network' connections: encryption: enabled: true",
"oc get pods -n openshift-storage | grep rook-ceph rook-ceph-crashcollector-ip-10-0-2-111.ec2.internal-796ffcm9kn9 1/1 Running 0 5m11s rook-ceph-crashcollector-ip-10-0-27-61.ec2.internal-854b4d8sk5z 1/1 Running 0 5m9s rook-ceph-crashcollector-ip-10-0-33-53.ec2.internal-589d9f4f8vx 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-2-111.ec2.internal-6d48cdc5fd-2tmsl 1/1 Running 0 5m9s rook-ceph-exporter-ip-10-0-27-61.ec2.internal-546c66c7cc-9lnpz 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-33-53.ec2.internal-b5555994c-x8mzz 1/1 Running 0 5m5s rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7bd754f6vwps2 2/2 Running 0 4m56s rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6cc5cc647c78m 2/2 Running 0 4m30s rook-ceph-mgr-a-6f8467578d-f8279 3/3 Running 0 3m40s rook-ceph-mgr-b-66754d99cf-9q58g 3/3 Running 0 3m27s rook-ceph-mon-a-75bc5dd655-tvdqf 2/2 Running 0 4m7s rook-ceph-mon-b-6b6d4d9b4c-tjbpz 2/2 Running 0 4m55s rook-ceph-mon-c-7456bb5f67-rtwpj 2/2 Running 0 4m32s rook-ceph-operator-7b5b9cdb9b-tvmb6 1/1 Running 0 45m rook-ceph-osd-0-b78dd99f6-n4wbm 2/2 Running 0 3m3s rook-ceph-osd-1-5887bf6d8d-2sncc 2/2 Running 0 2m39s rook-ceph-osd-2-784b59c4c8-44phh 2/2 Running 0 2m14s rook-ceph-osd-prepare-a075cf185c9b2e5d92ec3f7769565e38-ztrms 0/1 Completed 0 42m rook-ceph-osd-prepare-b4b48dc5e3bef99ab377e2a255a9142a-mvgnd 0/1 Completed 0 42m rook-ceph-osd-prepare-fae2ea2ad4aacbf62010ae5b60b87f57-6t9l5 0/1 Completed 0 42m",
"oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 27m Ready 2024-11-06T16:15:26Z 4.18.0",
"~ USD oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": false}}} }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | yq '.spec.network' connections: encryption: enabled: false",
"oc get pods -n openshift-storage | grep rook-ceph rook-ceph-crashcollector-ip-10-0-2-111.ec2.internal-796ffcm9kn9 1/1 Running 0 5m11s rook-ceph-crashcollector-ip-10-0-27-61.ec2.internal-854b4d8sk5z 1/1 Running 0 5m9s rook-ceph-crashcollector-ip-10-0-33-53.ec2.internal-589d9f4f8vx 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-2-111.ec2.internal-6d48cdc5fd-2tmsl 1/1 Running 0 5m9s rook-ceph-exporter-ip-10-0-27-61.ec2.internal-546c66c7cc-9lnpz 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-33-53.ec2.internal-b5555994c-x8mzz 1/1 Running 0 5m5s rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7bd754f6vwps2 2/2 Running 0 4m56s rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6cc5cc647c78m 2/2 Running 0 4m30s rook-ceph-mgr-a-6f8467578d-f8279 3/3 Running 0 3m40s rook-ceph-mgr-b-66754d99cf-9q58g 3/3 Running 0 3m27s rook-ceph-mon-a-75bc5dd655-tvdqf 2/2 Running 0 4m7s rook-ceph-mon-b-6b6d4d9b4c-tjbpz 2/2 Running 0 4m55s rook-ceph-mon-c-7456bb5f67-rtwpj 2/2 Running 0 4m32s rook-ceph-operator-7b5b9cdb9b-tvmb6 1/1 Running 0 45m rook-ceph-osd-0-b78dd99f6-n4wbm 2/2 Running 0 3m3s rook-ceph-osd-1-5887bf6d8d-2sncc 2/2 Running 0 2m39s rook-ceph-osd-2-784b59c4c8-44phh 2/2 Running 0 2m14s rook-ceph-osd-prepare-a075cf185c9b2e5d92ec3f7769565e38-ztrms 0/1 Completed 0 42m rook-ceph-osd-prepare-b4b48dc5e3bef99ab377e2a255a9142a-mvgnd 0/1 Completed 0 42m rook-ceph-osd-prepare-fae2ea2ad4aacbf62010ae5b60b87f57-6t9l5 0/1 Completed 0 42m",
"oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 27m Ready 2024-11-06T16:15:26Z 4.18.0",
"oc patch storagecluster ocs-external-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": true}}} }]' storagecluster.ocs.openshift.io/ocs-external-storagecluster patched",
"get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 9h Ready true 2024-11-06T20:48:03Z 4.18.0",
"oc get storagecluster ocs-external-storagecluster -o yaml | yq '.spec.network.connections' encryption: enabled: true",
"root@ceph-client ~]# ceph config set global ms_client_mode secure ceph config set global ms_cluster_mode secure ceph config set global ms_service_mode secure ceph config set global rbd_default_map_options ms_mode=secure",
"ceph config dump | grep ms_ ceph config dump | grep ms_ global basic ms_client_mode secure * global basic ms_cluster_mode secure * global basic ms_service_mode secure * global advanced rbd_default_map_options ms_mode=secure *",
"ceph orch ls --format plain | tail -n +2 | awk '{print USD1}' | xargs -I {} ceph orch restart {} Scheduled to restart alertmanager.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-2 on host 'osd-2' Scheduled to restart ceph-exporter.osd-3 on host 'osd-3' Scheduled to restart ceph-exporter.osd-1 on host 'osd-1' Scheduled to restart crash.osd-0 on host 'osd-0' Scheduled to restart crash.osd-2 on host 'osd-2' Scheduled to restart crash.osd-3 on host 'osd-3' Scheduled to restart crash.osd-1 on host 'osd-1' Scheduled to restart grafana.osd-0 on host 'osd-0' Scheduled to restart mds.fsvol001.osd-0.lpciqk on host 'osd-0' Scheduled to restart mds.fsvol001.osd-2.wocnxz on host 'osd-2' Scheduled to restart mgr.osd-0.dtkyni on host 'osd-0' Scheduled to restart mgr.osd-2.kqcxwu on host 'osd-2' Scheduled to restart mon.osd-2 on host 'osd-2' Scheduled to restart mon.osd-3 on host 'osd-3' Scheduled to restart mon.osd-1 on host 'osd-1' Scheduled to restart node-exporter.osd-0 on host 'osd-0' Scheduled to restart node-exporter.osd-2 on host 'osd-2' Scheduled to restart node-exporter.osd-3 on host 'osd-3' Scheduled to restart node-exporter.osd-1 on host 'osd-1' Scheduled to restart osd.1 on host 'osd-0' Scheduled to restart osd.4 on host 'osd-0' Scheduled to restart osd.0 on host 'osd-2' Scheduled to restart osd.5 on host 'osd-2' Scheduled to restart osd.2 on host 'osd-3' Scheduled to restart osd.6 on host 'osd-3' Scheduled to restart osd.3 on host 'osd-1' Scheduled to restart osd.7 on host 'osd-1' Scheduled to restart prometheus.osd-0 on host 'osd-0' Scheduled to restart rgw.rgw.ssl.osd-1.smzpfj on host 'osd-1'",
"ceph config rm global ms_client_mode ceph config rm global ms_cluster_mode ceph config rm global ms_service_mode ceph config rm global rbd_default_map_options ceph config dump | grep ms_",
"ceph orch ls --format plain | tail -n +2 | awk '{print USD1}' | xargs -I {} ceph orch restart {} Scheduled to restart alertmanager.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-2 on host 'osd-2' Scheduled to restart ceph-exporter.osd-3 on host 'osd-3' Scheduled to restart ceph-exporter.osd-1 on host 'osd-1' Scheduled to restart crash.osd-0 on host 'osd-0' Scheduled to restart crash.osd-2 on host 'osd-2' Scheduled to restart crash.osd-3 on host 'osd-3' Scheduled to restart crash.osd-1 on host 'osd-1' Scheduled to restart grafana.osd-0 on host 'osd-0' Scheduled to restart mds.fsvol001.osd-0.lpciqk on host 'osd-0' Scheduled to restart mds.fsvol001.osd-2.wocnxz on host 'osd-2' Scheduled to restart mgr.osd-0.dtkyni on host 'osd-0' Scheduled to restart mgr.osd-2.kqcxwu on host 'osd-2' Scheduled to restart mon.osd-2 on host 'osd-2' Scheduled to restart mon.osd-3 on host 'osd-3' Scheduled to restart mon.osd-1 on host 'osd-1' Scheduled to restart node-exporter.osd-0 on host 'osd-0' Scheduled to restart node-exporter.osd-2 on host 'osd-2' Scheduled to restart node-exporter.osd-3 on host 'osd-3' Scheduled to restart node-exporter.osd-1 on host 'osd-1' Scheduled to restart osd.1 on host 'osd-0' Scheduled to restart osd.4 on host 'osd-0' Scheduled to restart osd.0 on host 'osd-2' Scheduled to restart osd.5 on host 'osd-2' Scheduled to restart osd.2 on host 'osd-3' Scheduled to restart osd.6 on host 'osd-3' Scheduled to restart osd.3 on host 'osd-1' Scheduled to restart osd.7 on host 'osd-1' Scheduled to restart prometheus.osd-0 on host 'osd-0' Scheduled to restart rgw.rgw.ssl.osd-1.smzpfj on host 'osd-1'",
"ceph orch ps NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID alertmanager.osd-0 osd-0 *:9093,9094 running (116s) 9s ago 10h 19.5M - 0.26.0 7dbf12091920 4694a72d4bbd ceph-exporter.osd-0 osd-0 running (19s) 9s ago 10h 7310k - 18.2.1-229.el9cp 3fd804e38f5b 49bdc7d99471 ceph-exporter.osd-1 osd-1 running (97s) 26s ago 10h 7285k - 18.2.1-229.el9cp 3fd804e38f5b 7000d59d23b4 ceph-exporter.osd-2 osd-2 running (76s) 26s ago 10h 7306k - 18.2.1-229.el9cp 3fd804e38f5b 3907515cc352 ceph-exporter.osd-3 osd-3 running (49s) 26s ago 10h 6971k - 18.2.1-229.el9cp 3fd804e38f5b 3f3952490780 crash.osd-0 osd-0 running (17s) 9s ago 10h 6878k - 18.2.1-229.el9cp 3fd804e38f5b 38e041fb86e3 crash.osd-1 osd-1 running (96s) 26s ago 10h 6895k - 18.2.1-229.el9cp 3fd804e38f5b 21ce3ef7d896 crash.osd-2 osd-2 running (74s) 26s ago 10h 6899k - 18.2.1-229.el9cp 3fd804e38f5b 210ca9c8d928 crash.osd-3 osd-3 running (47s) 26s ago 10h 6899k - 18.2.1-229.el9cp 3fd804e38f5b 710d42d9d138 grafana.osd-0 osd-0 *:3000 running (114s) 9s ago 10h 72.9M - 10.4.0-pre f142b583a1b1 3dc5e2248e95 mds.fsvol001.osd-0.qjntcu osd-0 running (99s) 9s ago 10h 17.5M - 18.2.1-229.el9cp 3fd804e38f5b 50efa881c04b mds.fsvol001.osd-2.qneujv osd-2 running (51s) 26s ago 10h 15.3M - 18.2.1-229.el9cp 3fd804e38f5b a306f2d2d676 mgr.osd-0.zukgyq osd-0 *:9283,8765,8443 running (21s) 9s ago 10h 442M - 18.2.1-229.el9cp 3fd804e38f5b 8ef9b728675e mgr.osd-1.jqfyal osd-1 *:8443,9283,8765 running (92s) 26s ago 10h 480M - 18.2.1-229.el9cp 3fd804e38f5b 1ab52db89bfd mon.osd-1 osd-1 running (90s) 26s ago 10h 41.7M 2048M 18.2.1-229.el9cp 3fd804e38f5b 88d1fe1e10ac mon.osd-2 osd-2 running (72s) 26s ago 10h 31.1M 2048M 18.2.1-229.el9cp 3fd804e38f5b 02f57d3bb44f mon.osd-3 osd-3 running (45s) 26s ago 10h 24.0M 2048M 18.2.1-229.el9cp 3fd804e38f5b 5e3783f2b4fa node-exporter.osd-0 osd-0 *:9100 running (15s) 9s ago 10h 7843k - 1.7.0 8c904aa522d0 2dae2127349b node-exporter.osd-1 osd-1 *:9100 running (94s) 26s ago 10h 11.2M - 1.7.0 8c904aa522d0 010c3fcd55cd node-exporter.osd-2 osd-2 *:9100 running (69s) 26s ago 10h 17.2M - 1.7.0 8c904aa522d0 436f2d513f31 node-exporter.osd-3 osd-3 *:9100 running (41s) 26s ago 10h 12.4M - 1.7.0 8c904aa522d0 5579f0d494b8 osd.0 osd-0 running (109s) 9s ago 10h 126M 4096M 18.2.1-229.el9cp 3fd804e38f5b 997076cd39d4 osd.1 osd-1 running (85s) 26s ago 10h 139M 4096M 18.2.1-229.el9cp 3fd804e38f5b 08b720f0587d osd.2 osd-2 running (65s) 26s ago 10h 143M 4096M 18.2.1-229.el9cp 3fd804e38f5b 104ad4227163 osd.3 osd-3 running (36s) 26s ago 10h 94.5M 1435M 18.2.1-229.el9cp 3fd804e38f5b db8b265d9f43 osd.4 osd-0 running (104s) 9s ago 10h 164M 4096M 18.2.1-229.el9cp 3fd804e38f5b 50dcbbf7e012 osd.5 osd-1 running (80s) 26s ago 10h 131M 4096M 18.2.1-229.el9cp 3fd804e38f5b 63b21fe970b5 osd.6 osd-3 running (32s) 26s ago 10h 243M 1435M 18.2.1-229.el9cp 3fd804e38f5b 26c7ba208489 osd.7 osd-2 running (61s) 26s ago 10h 130M 4096M 18.2.1-229.el9cp 3fd804e38f5b 871a2b75e64f prometheus.osd-0 osd-0 *:9095 running (12s) 9s ago 10h 44.6M - 2.48.0 58069186198d e49a064d2478 rgw.rgw.ssl.osd-1.bsmbgd osd-1 *:80 running (78s) 26s ago 10h 75.4M - 18.2.1-229.el9cp 3fd804e38f5b d03c9f7ae4a4",
"oc patch storagecluster ocs-external-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": false}}} }]' storagecluster.ocs.openshift.io/ocs-external-storagecluster patched",
"oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 12h Ready true 2024-11-06T20:48:03Z 4.18.0",
"oc get storagecluster ocs-external-storagecluster -o yaml | yq '.spec.network.connections' encryption: enabled: false",
"storage: pvc: claim: <new-pvc-name>",
"storage: pvc: claim: ocs4registry",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<MCG Accesskey> --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<MCG Secretkey> --namespace openshift-image-registry",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\": {\"managementState\": \"Managed\"}}'",
"oc describe noobaa",
"oc edit configs.imageregistry.operator.openshift.io -n openshift-image-registry apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: [..] name: cluster spec: [..] storage: s3: bucket: <Unique-bucket-name> region: us-east-1 (Use this region as default) regionEndpoint: https://<Endpoint-name>:<port> virtualHostedStyle: false",
"oc get pods -n openshift-image-registry",
"oc get pods -n openshift-image-registry",
"oc get pods -n openshift-image-registry NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-56d78bc5fb-bxcgv 2/2 Running 0 44d image-pruner-1605830400-29r7k 0/1 Completed 0 10h image-registry-b6c8f4596-ln88h 1/1 Running 0 17d node-ca-2nxvz 1/1 Running 0 44d node-ca-dtwjd 1/1 Running 0 44d node-ca-h92rj 1/1 Running 0 44d node-ca-k9bkd 1/1 Running 0 44d node-ca-stkzc 1/1 Running 0 44d node-ca-xn8h4 1/1 Running 0 44d",
"oc describe pod <image-registry-name>",
"oc describe pod image-registry-b6c8f4596-ln88h Environment: REGISTRY_STORAGE_S3_REGIONENDPOINT: http://s3.openshift-storage.svc REGISTRY_STORAGE: s3 REGISTRY_STORAGE_S3_BUCKET: bucket-registry-mcg REGISTRY_STORAGE_S3_REGION: us-east-1 REGISTRY_STORAGE_S3_ENCRYPT: true REGISTRY_STORAGE_S3_VIRTUALHOSTEDSTYLE: false REGISTRY_STORAGE_S3_USEDUALSTACK: true REGISTRY_STORAGE_S3_ACCESSKEY: <set to the key 'REGISTRY_STORAGE_S3_ACCESSKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_STORAGE_S3_SECRETKEY: <set to the key 'REGISTRY_STORAGE_S3_SECRETKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_HTTP_ADDR: :5000 REGISTRY_HTTP_NET: tcp REGISTRY_HTTP_SECRET: 57b943f691c878e342bac34e657b702bd6ca5488d51f839fecafa918a79a5fc6ed70184cab047601403c1f383e54d458744062dcaaa483816d82408bb56e686f REGISTRY_LOG_LEVEL: info REGISTRY_OPENSHIFT_QUOTA_ENABLED: true REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR: inmemory REGISTRY_STORAGE_DELETE_ENABLED: true REGISTRY_OPENSHIFT_METRICS_ENABLED: true REGISTRY_OPENSHIFT_SERVER_ADDR: image-registry.openshift-image-registry.svc:5000 REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/tls.crt REGISTRY_HTTP_TLS_KEY: /etc/secrets/tls.key",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>",
"apiVersion: v1 kind: Namespace metadata: name: <desired_name> labels: storagequota: <desired_label>",
"oc edit storagecluster -n openshift-storage <ocs_storagecluster_name>",
"apiVersion: ocs.openshift.io/v1 kind: StorageCluster spec: [...] overprovisionControl: - capacity: <desired_quota_limit> storageClassName: <storage_class_name> quotaName: <desired_quota_name> selector: labels: matchLabels: storagequota: <desired_label> [...]",
"oc get clusterresourcequota -A oc describe clusterresourcequota -A",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}",
"spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd",
"config.yaml: | openshift-storage: delete: days: 5",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ceph-multus-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.200.0/24\", \"routes\": [ {\"dst\": \"NODE_IP_CIDR\"} ] } }'",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'",
"oc get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode >ceph-external-cluster-details-exporter.py",
"oc get cm rook-ceph-external-cluster-script-config -n openshift-storage -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py",
"python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= ocs-client-name --rgw-pool-prefix rgw-pool-prefix",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix",
"caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}} ]",
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule",
"Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"oc get pvc data-pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO ocs-storagecluster-ceph-rbd 20h",
"oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@monthly\"",
"persistentvolumeclaim/data-pvc annotated",
"oc get reclaimspacecronjobs.csiaddons.openshift.io",
"NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @monthly 3s",
"oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@weekly\" --overwrite=true",
"persistentvolumeclaim/data-pvc annotated",
"oc get reclaimspacecronjobs.csiaddons.openshift.io",
"NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 @weekly 3s",
"oc get reclaimspacecronjobs -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'",
"oc annotate reclaimspacecronjobs <RECLAIMSPACECRONJOB_NAME> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true",
"oc patch reclaimspacecronjobs <RECLAIMSPACECRONJOB_NAME> -p '{\"spec\": {\"suspend\": true}}' --type=merge",
"apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceJob metadata: name: sample-1 spec: target: persistentVolumeClaim: pvc-1 timeout: 360",
"apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceCronJob metadata: name: reclaimspacecronjob-sample spec: jobTemplate: spec: target: persistentVolumeClaim: data-pvc timeout: 360 schedule: '@weekly' concurrencyPolicy: Forbid",
"Status: Completion Time: 2023-03-08T18:56:18Z Conditions: Last Transition Time: 2023-03-08T18:56:18Z Message: Failed to make controller request: context deadline exceeded Observed Generation: 1 Reason: failed Status: True Type: Failed Message: Maximum retry limit reached Result: Failed Retries: 6 Start Time: 2023-03-08T18:33:55Z",
"apiVersion: v1 kind: ConfigMap metadata: name: csi-addons-config namespace: openshift-storage data: \"reclaim-space-timeout\": \"6m\"",
"delete po -n openshift-storage -l \"app.kubernetes.io/name=csi-addons\"",
"odf subvolume ls --stale",
"Filesystem Subvolume Subvolumegroup State ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110004 csi stale ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110005 csi stale",
"odf subvolume delete <subvolumes> <filesystem> <subvolumegroup>",
"odf subvolume delete csi-vol-427774b4-340b-11ed-8d66-0242ac110004,csi-vol-427774b4-340b-11ed-8d66-0242ac110005 ocs-storagecluster csi",
"Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted",
"oc edit configmap rook-ceph-operator-config -n openshift-storage",
"oc get configmap rook-ceph-operator-config -n openshift-storage -o yaml",
"apiVersion: v1 data: [...] CSI_PLUGIN_TOLERATIONS: | - key: nodetype operator: Equal value: infra effect: NoSchedule - key: node.ocs.openshift.io/storage operator: Equal value: \"true\" effect: NoSchedule [...] kind: ConfigMap metadata: [...]",
"oc delete -n openshift-storage pod <name of the rook_ceph_operator pod>",
"oc delete -n openshift-storage pod rook-ceph-operator-5446f9b95b-jrn2j pod \"rook-ceph-operator-5446f9b95b-jrn2j\" deleted",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephFilesystems/dataPoolSpec/replicated/size\", \"value\": 2 }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get cephfilesystem ocs-storagecluster-cephfilesystem -o=jsonpath='{.spec.dataPools}' | jq [ { \"application\": \"\", \"deviceClass\": \"ssd\", \"erasureCoded\": { \"codingChunks\": 0, \"dataChunks\": 0 }, \"failureDomain\": \"zone\", \"mirroring\": {}, \"quotas\": {}, \"replicated\": { \"replicasPerFailureDomain\": 1, \"size\": 2, \"targetSizeRatio\": 0.49 }, \"statusCheck\": { \"mirror\": {} } } ]",
"ceph osd pool ls | grep filesystem ocs-storagecluster-cephfilesystem-metadata ocs-storagecluster-cephfilesystem-data0",
"oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{\"spec\": {\"nfs\":{\"enable\": true}}}'",
"-n openshift-storage describe cephnfs ocs-storagecluster-cephnfs",
"-n openshift-storage get pod | grep csi-nfsplugin",
"csi-nfsplugin-47qwq 2/2 Running 0 10s csi-nfsplugin-77947 2/2 Running 0 10s csi-nfsplugin-ct2pm 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-2rm2w 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-8nj5h 2/2 Running 0 10s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <desired_name> spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-nfs",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: <pvc_name> readOnly: false",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: <volume_name> mountPath: /var/lib/www/html",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: my-nfs-export",
"oc get pods -n openshift-storage | grep rook-ceph-nfs",
"oc describe pod <name of the rook-ceph-nfs pod> | grep ceph_nfs",
"oc describe pod rook-ceph-nfs-ocs-storagecluster-cephnfs-a-7bb484b4bf-bbdhs | grep ceph_nfs ceph_nfs=my-nfs",
"apiVersion: v1 kind: Service metadata: name: rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer namespace: openshift-storage spec: ports: - name: nfs port: 2049 type: LoadBalancer externalTrafficPolicy: Local selector: app: rook-ceph-nfs ceph_nfs: <my-nfs> instance: a",
"oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"get pvc pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}' /0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215",
"oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}' [{\"hostname\":\"ingress-id.somedomain.com\"}]",
"mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path",
"odf get recovery-profile",
"odf set recovery-profile <option>",
"odf get recovery-profile",
"odf set full 0.9",
"odf set full 0.92",
"odf set full 0.85",
"odf set backfillfull 0.85",
"odf set nearfull 0.8",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/fullRatio\", \"value\": 0.90 }]'",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/backfillFullRatio\", \"value\": 0.85 }]'",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/nearFullRatio\", \"value\": 0.8 }]'"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/managing_and_allocating_storage_resources/configuring-access-to-kms-using-vaulttokens_rhodf |
C.3. Glocks | C.3. Glocks To understand GFS2, the most important concept to understand, and the one which sets it aside from other file systems, is the concept of glocks. In terms of the source code, a glock is a data structure that brings together the DLM and caching into a single state machine. Each glock has a 1:1 relationship with a single DLM lock, and provides caching for that lock state so that repetitive operations carried out from a single node of the file system do not have to repeatedly call the DLM, and thus they help avoid unnecessary network traffic. There are two broad categories of glocks, those which cache metadata and those which do not. The inode glocks and the resource group glocks both cache metadata, other types of glocks do not cache metadata. The inode glock is also involved in the caching of data in addition to metadata and has the most complex logic of all glocks.
Table C.1. Glock Modes and DLM Lock Modes
Glock mode   DLM lock mode   Notes
UN           IV/NL           Unlocked (no DLM lock associated with glock or NL lock depending on I flag)
SH           PR              Shared (protected read) lock
EX           EX              Exclusive lock
DF           CW              Deferred (concurrent write) used for Direct I/O and file system freeze
Glocks remain in memory until either they are unlocked (at the request of another node or at the request of the VM) and there are no local users. At that point they are removed from the glock hash table and freed. When a glock is created, the DLM lock is not associated with the glock immediately. The DLM lock becomes associated with the glock upon the first request to the DLM, and if this request is successful then the 'I' (initial) flag will be set on the glock. Table C.4, "Glock flags" shows the meanings of the different glock flags. Once the DLM has been associated with the glock, the DLM lock will always remain at least at NL (Null) lock mode until the glock is to be freed. A demotion of the DLM lock from NL to unlocked is always the last operation in the life of a glock.
Note: This particular aspect of DLM lock behavior has changed since Red Hat Enterprise Linux 5, which does sometimes unlock the DLM locks attached to glocks completely, and thus Red Hat Enterprise Linux 5 has a different mechanism to ensure that LVBs (lock value blocks) are preserved where required. The new scheme that Red Hat Enterprise Linux 6 uses was made possible due to the merging of the lock_dlm lock module (not to be confused with the DLM itself) into GFS2.
Each glock can have a number of "holders" associated with it, each of which represents one lock request from the higher layers. System calls relating to GFS2 queue and dequeue holders from the glock to protect the critical section of code. The glock state machine is based on a workqueue. For performance reasons, tasklets would be preferable; however, in the current implementation we need to submit I/O from that context which prohibits their use.
Note: Workqueues have their own tracepoints which can be used in combination with the GFS2 tracepoints if desired.
Table C.2, "Glock Modes and Data Types" shows what state may be cached under each of the glock modes and whether that cached state may be dirty. This applies to both inode and resource group locks, although there is no data component for the resource group locks, only metadata.
Table C.2. Glock Modes and Data Types
Glock mode   Cache Data   Cache Metadata   Dirty Data   Dirty Metadata
UN           No           No               No           No
SH           Yes          Yes              No           No
DF           No           Yes              No           No
EX           Yes          Yes              Yes          Yes | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/ap-glocks-gfs2 |
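For readers who want to relate the modes above to a running file system, the following is a minimal sketch of inspecting live glock state through debugfs; it assumes debugfs is mounted at the standard location and that the GFS2 file system is named mycluster:mygfs2, both of which are illustrative values rather than names taken from this text.
mount -t debugfs none /sys/kernel/debug
cat /sys/kernel/debug/gfs2/mycluster:mygfs2/glocks
# G: lines in the dump describe individual glocks, including the currently held mode (for example SH, EX, DF or UN),
# and the H: lines beneath each one list the holders queued against that glock, as described in the text above.
Each entry in this dump corresponds to the glock and holder structures discussed in this section, so it is a convenient way to observe the state machine while a workload is running.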
Chapter 30. VDO Integration | Chapter 30. VDO Integration 30.1. Theoretical Overview of VDO Virtual Data Optimizer (VDO) is a block virtualization technology that allows you to easily create compressed and deduplicated pools of block storage. Deduplication is a technique for reducing the consumption of storage resources by eliminating multiple copies of duplicate blocks. Instead of writing the same data more than once, VDO detects each duplicate block and records it as a reference to the original block. VDO maintains a mapping from logical block addresses, which are used by the storage layer above VDO, to physical block addresses, which are used by the storage layer under VDO. After deduplication, multiple logical block addresses may be mapped to the same physical block address; these are called shared blocks. Block sharing is invisible to users of the storage, who read and write blocks as they would if VDO were not present. When a shared block is overwritten, a new physical block is allocated for storing the new block data to ensure that other logical block addresses that are mapped to the shared physical block are not modified. Compression is a data-reduction technique that works well with file formats that do not necessarily exhibit block-level redundancy, such as log files and databases. See Section 30.4.8, "Using Compression" for more detail.
The VDO solution consists of the following components:
kvdo: A kernel module that loads into the Linux Device Mapper layer to provide a deduplicated, compressed, and thinly provisioned block storage volume.
uds: A kernel module that communicates with the Universal Deduplication Service (UDS) index on the volume and analyzes data for duplicates.
Command line tools: For configuring and managing optimized storage.
30.1.1. The UDS Kernel Module (uds)
The UDS index provides the foundation of the VDO product. For each new piece of data, it quickly determines if that piece is identical to any previously stored piece of data. If the index finds a match, the storage system can then internally reference the existing item to avoid storing the same information more than once. The UDS index runs inside the kernel as the uds kernel module.
30.1.2. The VDO Kernel Module (kvdo)
The kvdo Linux kernel module provides block-layer deduplication services within the Linux Device Mapper layer. In the Linux kernel, Device Mapper serves as a generic framework for managing pools of block storage, allowing the insertion of block-processing modules into the storage stack between the kernel's block interface and the actual storage device drivers. The kvdo module is exposed as a block device that can be accessed directly for block storage or presented through one of the many available Linux file systems, such as XFS or ext4. When kvdo receives a request to read a (logical) block of data from a VDO volume, it maps the requested logical block to the underlying physical block and then reads and returns the requested data. When kvdo receives a request to write a block of data to a VDO volume, it first checks whether it is a DISCARD or TRIM request or whether the data is uniformly zero. If either of these conditions holds, kvdo updates its block map and acknowledges the request. Otherwise, a physical block is allocated for use by the request.
Overview of VDO Write Policies
If the kvdo module is operating in synchronous mode: It temporarily writes the data in the request to the allocated block and then acknowledges the request.
Once the acknowledgment is complete, an attempt is made to deduplicate the block by computing a MurmurHash-3 signature of the block data, which is sent to the VDO index. If the VDO index contains an entry for a block with the same signature, kvdo reads the indicated block and does a byte-by-byte comparison of the two blocks to verify that they are identical. If they are indeed identical, then kvdo updates its block map so that the logical block points to the corresponding physical block and releases the allocated physical block. If the VDO index did not contain an entry for the signature of the block being written, or the indicated block does not actually contain the same data, kvdo updates its block map to make the temporary physical block permanent.
If kvdo is operating in asynchronous mode: Instead of writing the data, it will immediately acknowledge the request. It will then attempt to deduplicate the block in the same manner as described above. If the block turns out to be a duplicate, kvdo will update its block map and release the allocated block. Otherwise, it will write the data in the request to the allocated block and update the block map to make the physical block permanent.
30.1.3. VDO Volume
VDO uses a block device as a backing store, which can include an aggregation of physical storage consisting of one or more disks, partitions, or even flat files. When a VDO volume is created by a storage management tool, VDO reserves space from the volume for both a UDS index and the VDO volume, which interact together to provide deduplicated block storage to users and applications. Figure 30.1, "VDO Disk Organization" illustrates how these pieces fit together.
Figure 30.1. VDO Disk Organization
Slabs
The physical storage of the VDO volume is divided into a number of slabs, each of which is a contiguous region of the physical space. All of the slabs for a given volume will be of the same size, which may be any power of 2 multiple of 128 MB up to 32 GB. The default slab size is 2 GB in order to facilitate evaluating VDO on smaller test systems. A single VDO volume may have up to 8192 slabs. Therefore, in the default configuration with 2 GB slabs, the maximum allowed physical storage is 16 TB. When using 32 GB slabs, the maximum allowed physical storage is 256 TB. At least one entire slab is reserved by VDO for metadata, and therefore cannot be used for storing user data. Slab size has no effect on the performance of the VDO volume.
Table 30.1. Recommended VDO Slab Sizes by Physical Volume Size
Physical Volume Size   Recommended Slab Size
10-99 GB               1 GB
100 GB - 1 TB          2 GB
2-256 TB               32 GB
The size of a slab can be controlled by providing the --vdoSlabSize= megabytes option to the vdo create command.
Physical Size and Available Physical Size
Both physical size and available physical size describe the amount of disk space on the block device that VDO can utilize:
Physical size is the same size as the underlying block device. VDO uses this storage for:
User data, which might be deduplicated and compressed
VDO metadata, such as the UDS index
Available physical size is the portion of the physical size that VDO is able to use for user data. It is equivalent to the physical size minus the size of the metadata, minus the remainder after dividing the volume into slabs by the given slab size. For examples of how much storage VDO metadata require on block devices of different sizes, see Section 30.2.3, "Examples of VDO System Requirements by Physical Volume Size".
Logical Size
If the --vdoLogicalSize option is not specified, the logical volume size defaults to the available physical volume size. Note that, in Figure 30.1, "VDO Disk Organization", the VDO deduplicated storage target sits completely on top of the block device, meaning the physical size of the VDO volume is the same size as the underlying block device. VDO currently supports any logical size up to 254 times the size of the physical volume with an absolute maximum logical size of 4PB.
30.1.4. Command Line Tools
VDO includes the following command line tools for configuration and management:
vdo: Creates, configures, and controls VDO volumes
vdostats: Provides utilization and performance statistics | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/vdo-integration |
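To make the command line tools just listed more concrete, here is a minimal sketch of creating and then inspecting a VDO volume; the volume name vdo1, the backing device /dev/sdb, and the 10T logical size are assumptions chosen to illustrate thin provisioning rather than values taken from this text.
vdo create --name=vdo1 --device=/dev/sdb --vdoLogicalSize=10T
# Creates a deduplicated, compressed volume exposed at /dev/mapper/vdo1; the --vdoSlabSize option
# described earlier could be added here to override the default 2 GB slab size.
vdostats --human-readable
# Reports, per VDO volume, how much physical space is in use and the current space savings.
A file system such as XFS or ext4 can then be created on /dev/mapper/vdo1 in the usual way, which matches the description above of kvdo being exposed as an ordinary block device.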
Preface | Preface As a developer or system administrator, you can modify Red Hat Process Automation Manager and KIE Server settings and properties to meet your business needs. You can modify the behavior of the Red Hat Process Automation Manager runtime, the Business Central interface, or the KIE Server. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/pr01 |
Chapter 1. OpenShift Container Platform 4.17 Documentation | Chapter 1. OpenShift Container Platform 4.17 Documentation Welcome to the official OpenShift Container Platform 4.17 documentation, where you can learn about OpenShift Container Platform and start exploring its features. To navigate the OpenShift Container Platform 4.17 documentation, you can use one of the following methods: Use the navigation bar to browse the documentation. Select the task that interests you from Learn more about OpenShift Container Platform . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/about/welcome-index |
Preface | Preface Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security, and bug fix errata. The Red Hat Enterprise Linux 7.1 Release Notes document the major changes, features, and enhancements introduced in the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release. In addition, the Red Hat Enterprise Linux 7.1 Release Notes document the known issues in Red Hat Enterprise Linux 7.1. For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/pref-red_hat_enterprise_linux-7.1_release_notes-preface |
7.8. RHEA-2014:1467 - new packages: java-1.8.0-openjdk | 7.8. RHEA-2014:1467 - new packages: java-1.8.0-openjdk New java-1.8.0-openjdk packages are now available for Red Hat Enterprise Linux 6. java-1.8.0-openjdk packages provide the OpenJDK runtime environment. This enhancement update adds the java-1.8.0-openjdk packages to Red Hat Enterprise Linux 6. (BZ#1081073, BZ#1113078) All users who require java-1.8.0-openjdk are advised to install these new packages. All running instances of OpenJDK Java must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1467 |
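As a short illustration of the installation advice above, the new packages can be pulled in with yum on a Red Hat Enterprise Linux 6 system; running the command with root privileges is an assumption about the local setup.
yum install java-1.8.0-openjdk
# Installs the OpenJDK 8 runtime environment; afterwards, restart any running OpenJDK Java instances
# so that they pick up the update, as noted in the erratum.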
Chapter 5. Managing user-owned OAuth access tokens | Chapter 5. Managing user-owned OAuth access tokens Users can review their own OAuth access tokens and delete any that are no longer needed. 5.1. Listing user-owned OAuth access tokens You can list your user-owned OAuth access tokens. Token names are not sensitive and cannot be used to log in. Procedure List all user-owned OAuth access tokens: USD oc get useroauthaccesstokens Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full List user-owned OAuth access tokens for a particular OAuth client: USD oc get useroauthaccesstokens --field-selector=clientName="console" Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full 5.2. Viewing the details of a user-owned OAuth access token You can view the details of a user-owned OAuth access token. Procedure Describe the details of a user-owned OAuth access token: USD oc describe useroauthaccesstokens <token_name> Example output Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none> 1 The token name, which is the sha256 hash of the token. Token names are not sensitive and cannot be used to log in. 2 The client name, which describes where the token originated from. 3 The value in seconds from the creation time before this token expires. 4 If there is a token inactivity timeout set for the OAuth server, this is the value in seconds from the creation time before this token can no longer be used. 5 The scopes for this token. 6 The user name associated with this token. 5.3. Deleting user-owned OAuth access tokens The oc logout command only invalidates the OAuth token for the active session. You can use the following procedure to delete any user-owned OAuth tokens that are no longer needed. Deleting an OAuth access token logs out the user from all sessions that use the token. Procedure Delete the user-owned OAuth access token: USD oc delete useroauthaccesstokens <token_name> Example output useroauthaccesstoken.oauth.openshift.io "<token_name>" deleted | [
"oc get useroauthaccesstokens",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc get useroauthaccesstokens --field-selector=clientName=\"console\"",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc describe useroauthaccesstokens <token_name>",
"Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>",
"oc delete useroauthaccesstokens <token_name>",
"useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authentication_and_authorization/managing-oauth-access-tokens |
Migrating applications to Spring Boot 2.7 | Migrating applications to Spring Boot 2.7 Red Hat support for Spring Boot 2.7 For use with Spring Boot 2.7.18 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_support_for_spring_boot/2.7/html/migrating_applications_to_spring_boot_2.7/index |